Projectile (https://en.wikipedia.org/wiki/Projectile)

A projectile is an object that is propelled by the application of an external force and then moves freely under the influence of gravity and air resistance. Although any object in motion through space is a projectile, projectiles are most commonly encountered in warfare and sports (for example, a thrown baseball, a kicked football, a fired bullet, a shot arrow, or a stone released from a catapult).
In ballistics, mathematical equations of motion are used to analyze projectile trajectories through launch, flight, and impact.
Motive force
Blowguns and pneumatic rifles use compressed gases, while most other guns and cannons utilize expanding gases liberated by sudden chemical reactions by propellants like smokeless powder. Light-gas guns use a combination of these mechanisms.
Railguns utilize electromagnetic fields to provide a constant acceleration along the entire length of the device, greatly increasing the muzzle velocity.
Some projectiles provide propulsion during flight by means of a rocket engine or jet engine. In military terminology, a rocket is unguided, while a missile is guided. Note the two meanings of "rocket" (weapon and engine): an ICBM is a guided missile with a rocket engine.
An explosion, whether or not by a weapon, causes the debris to act as multiple high velocity projectiles. An explosive weapon or device may also be designed to produce many high velocity projectiles by the break-up of its casing; these are correctly termed fragments.
In sports
In projectile motion, the most important force applied to the projectile is the propelling force. In sports, the propelling force is supplied by the muscles that act upon the ball to make it move, and the stronger the applied force, the farther the projectile (the ball) will travel. See pitching, bowling.
As a weapon
Delivery projectiles
Many projectiles, e.g. shells, may carry an explosive charge or another chemical or biological substance. Aside from explosive payload, a projectile can be designed to cause special damage, e.g. fire (see also early thermal weapons), or poisoning (see also arrow poison).
Kinetic projectiles
Wired projectiles
Some projectiles stay connected to the launch equipment by a cable after launch:
for guidance: wire-guided missiles
to administer an electric shock, as in the case of a Taser; two projectiles are shot simultaneously, each with a cable.
to make a connection with the target, either to tow it towards the launcher, as with a whaling harpoon, or to draw the launcher to the target, as a grappling hook does.
Typical projectile speeds
Equations of motion
An object projected at an angle to the horizontal has both vertical and horizontal components of velocity. The vertical component of the velocity is $v_y = v_0 \sin\theta$, while the horizontal component is $v_x = v_0 \cos\theta$. There are various calculations for projectiles launched at a specific angle $\theta$:
1. Time to reach maximum height ($t$): the time taken for the projectile to reach the maximum height from the plane of projection. Mathematically, it is given as $t = \frac{v_0 \sin\theta}{g}$, where $g$ is the acceleration due to gravity (approximately 9.81 m/s²), $v_0$ is the initial velocity (m/s), and $\theta$ is the angle made by the projectile with the horizontal axis.
2. Time of flight ($T$): this is the total time taken for the projectile to fall back to the same plane from which it was projected. Mathematically it is given as $T = \frac{2 v_0 \sin\theta}{g}$.
3. Maximum height ($H$): this is the maximum height attained by the projectile, i.e. the maximum displacement on the vertical axis (y-axis) covered by the projectile. It is given as $H = \frac{v_0^2 \sin^2\theta}{2g}$.
4. Range ($R$): the range of a projectile is the horizontal distance covered (on the x-axis) by the projectile. Mathematically, $R = \frac{v_0^2 \sin 2\theta}{g}$. The range is maximum when $\theta = 45°$, i.e. $R_{\max} = \frac{v_0^2}{g}$ (see the numerical sketch below).
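As a quick numerical check of the four formulas above, here is a minimal Python sketch; the launch speed and angle are arbitrary example values, not figures from the article:

```python
import math

def projectile_stats(v0, theta_deg, g=9.81):
    """Ideal projectile (no air resistance): time to peak, time of flight,
    maximum height, and range for launch speed v0 (m/s) at theta_deg degrees."""
    theta = math.radians(theta_deg)
    t_peak = v0 * math.sin(theta) / g               # t = v0*sin(theta)/g
    t_flight = 2 * t_peak                           # T = 2*v0*sin(theta)/g
    h_max = (v0 * math.sin(theta)) ** 2 / (2 * g)   # H = v0^2*sin^2(theta)/(2g)
    rng = v0 ** 2 * math.sin(2 * theta) / g         # R = v0^2*sin(2*theta)/g
    return t_peak, t_flight, h_max, rng

# At 45 degrees the range is maximal and equals v0^2/g.
print(projectile_stats(20.0, 45.0))  # range ~ 40.77 m = 20^2 / 9.81
```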
Active transport (https://en.wikipedia.org/wiki/Active%20transport)

In cellular biology, active transport is the movement of molecules or ions across a cell membrane from a region of lower concentration to a region of higher concentration—against the concentration gradient. Active transport requires cellular energy to achieve this movement. There are two types of active transport: primary active transport that uses adenosine triphosphate (ATP), and secondary active transport that uses an electrochemical gradient. This process is in contrast to passive transport, which allows molecules or ions to move down their concentration gradient, from an area of high concentration to an area of low concentration, without energy.
Active transport is essential for various physiological processes, such as nutrient uptake, hormone secretion, and nerve impulse transmission. For example, the sodium-potassium pump uses ATP to pump sodium ions out of the cell and potassium ions into the cell, maintaining a concentration gradient essential for cellular function. Active transport is highly selective and regulated, with different transporters specific to different molecules or ions. Dysregulation of active transport can lead to various disorders, including cystic fibrosis, caused by a malfunctioning chloride channel, and diabetes, resulting from defects in glucose transport into cells.
Active cellular transportation (ACT)
Unlike passive transport, which uses the kinetic energy and natural entropy of molecules moving down a gradient, active transport uses cellular energy to move them against a gradient, polar repulsion, or other resistance. Active transport is usually associated with accumulating high concentrations of molecules that the cell needs, such as ions, glucose and amino acids. Examples of active transport include the uptake of glucose in the intestines in humans and the uptake of mineral ions into root hair cells of plants.
History
In 1848, the German physiologist Emil du Bois-Reymond suggested the possibility of active transport of substances across membranes.
In 1926, Dennis Robert Hoagland investigated the ability of plants to absorb salts against a concentration gradient and discovered the dependence of nutrient absorption and translocation on metabolic energy using innovative model systems under controlled experimental conditions.
Rosenberg (1948) formulated the concept of active transport based on energetic considerations, but later it would be redefined.
In 1997, Jens Christian Skou, a Danish physician, received the Nobel Prize in Chemistry for his research regarding the sodium-potassium pump.
One category of cotransporters that is especially prominent in research regarding diabetes treatment is sodium-glucose cotransporters. These transporters were discovered by scientists at the National Health Institute. These scientists had noticed a discrepancy in the absorption of glucose at different points in the kidney tubule of a rat. The gene for the intestinal glucose transport protein was then discovered and linked to these membrane sodium-glucose cotransport systems. The first of these membrane transport proteins was named SGLT1, followed by the discovery of SGLT2. Robert Crane also played a prominent role in this field.
Background
Specialized transmembrane proteins recognize the substance and allow it to move across the membrane when it otherwise would not, either because the phospholipid bilayer of the membrane is impermeable to the substance moved or because the substance is moved against the direction of its concentration gradient. There are two forms of active transport, primary active transport and secondary active transport. In primary active transport, the proteins involved are pumps that normally use chemical energy in the form of ATP. Secondary active transport, however, makes use of potential energy, which is usually derived through exploitation of an electrochemical gradient. The energy created from one ion moving down its electrochemical gradient is used to power the transport of another ion moving against its electrochemical gradient. This involves pore-forming proteins that form channels across the cell membrane. The difference between passive transport and active transport is that active transport requires energy and moves substances against their respective concentration gradients, whereas passive transport requires no cellular energy and moves substances in the direction of their respective concentration gradients.
In an antiporter, one substrate is transported in one direction across the membrane while another is cotransported in the opposite direction. In a symporter, two substrates are transported in the same direction across the membrane. Antiport and symport processes are associated with secondary active transport, meaning that one of the two substances is transported against its concentration gradient, utilizing the energy derived from the transport of another ion (mostly Na, K or H ions) down its concentration gradient.
If substrate molecules are moving from areas of lower concentration to areas of higher concentration (i.e., in the opposite direction as, or against the concentration gradient), specific transmembrane carrier proteins are required. These proteins have receptors that bind to specific molecules (e.g., glucose) and transport them across the cell membrane. Because energy is required in this process, it is known as 'active' transport. Examples of active transport include the transportation of sodium out of the cell and potassium into the cell by the sodium-potassium pump. Active transport often takes place in the internal lining of the small intestine.
Plants need to absorb mineral salts from the soil or other sources, but these salts exist in very dilute solution. Active transport enables these cells to take up salts from this dilute solution against the direction of the concentration gradient. For example, chloride (Cl−) and nitrate (NO3−) ions exist in the cytosol of plant cells, and need to be transported into the vacuole. While the vacuole has channels for these ions, transportation of them is against the concentration gradient, and thus movement of these ions is driven by hydrogen pumps, or proton pumps.
Primary active transport
Primary active transport, also called direct active transport, directly uses metabolic energy to transport molecules across a membrane. Substances that are transported across the cell membrane by primary active transport include metal ions, such as Na+, K+, Mg2+, and Ca2+. These charged particles require ion pumps or ion channels to cross membranes and distribute through the body.
Most of the enzymes that perform this type of transport are transmembrane ATPases. A primary ATPase universal to all animal life is the sodium-potassium pump, which helps to maintain the cell potential. The sodium-potassium pump maintains the membrane potential by moving three Na+ ions out of the cell for every two K+ ions moved into the cell. Other sources of energy for primary active transport are redox energy and photon energy (light). An example of primary active transport using redox energy is the mitochondrial electron transport chain that uses the reduction energy of NADH to move protons across the inner mitochondrial membrane against their concentration gradient. An example of primary active transport using light energy are the proteins involved in photosynthesis that use the energy of photons to create a proton gradient across the thylakoid membrane and also to create reduction power in the form of NADPH.
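To make the energy requirement concrete, the work needed to move one mole of an ion against its electrochemical gradient can be estimated from the standard expression $\Delta G = RT \ln(c_{\text{to}}/c_{\text{from}}) + zF\Delta V$. The sketch below applies this to sodium extrusion by the sodium-potassium pump; the concentrations and membrane potential are typical textbook-style values assumed for illustration, not figures from this article:

```python
import math

R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # roughly body temperature, K

def pump_delta_g(c_from, c_to, z, delta_v):
    """Free energy (J/mol) to move an ion of valence z from concentration
    c_from to c_to, across a potential step delta_v (destination minus source, V)."""
    return R * T * math.log(c_to / c_from) + z * F * delta_v

# Na+ pumped out of a cell: ~12 mM inside -> ~145 mM outside, moving toward
# the electrically positive exterior (inside is about -70 mV relative to outside).
dG = pump_delta_g(c_from=0.012, c_to=0.145, z=+1, delta_v=0.070)
print(f"~{dG / 1000:.1f} kJ/mol of Na+")  # roughly 13 kJ/mol; three Na+ per cycle
# costs ~40 kJ, within the ~50 kJ/mol released by ATP hydrolysis in the cell.
```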
Model of active transport
ATP hydrolysis is used to transport hydrogen ions against the electrochemical gradient (from low to high hydrogen ion concentration). Phosphorylation of the carrier protein and the binding of a hydrogen ion induce a conformational (shape) change that drives the hydrogen ions to transport against the electrochemical gradient. Hydrolysis of the bound phosphate group and release of hydrogen ion then restores the carrier to its original conformation.
Types of primary active transporters
P-type ATPase: sodium potassium pump, calcium pump, proton pump
F-ATPase: mitochondrial ATP synthase, chloroplast ATP synthase
V-ATPase: vacuolar ATPase
ABC (ATP binding cassette) transporter: MDR, CFTR, etc.
Adenosine triphosphate-binding cassette transporters (ABC transporters) comprise a large and diverse protein family, often functioning as ATP-driven pumps. Usually, there are several domains involved in the overall transporter protein's structure, including two nucleotide-binding domains that constitute the ATP-binding motif and two hydrophobic transmembrane domains that create the "pore" component. In broad terms, ABC transporters are involved in the import or export of molecules across a cell membrane; yet within the protein family there is an extensive range of function.
In plants, ABC transporters are often found within cell and organelle membranes, such as the mitochondria, chloroplast, and plasma membrane. There is evidence to support that plant ABC transporters play a direct role in pathogen response, phytohormone transport, and detoxification. Furthermore, certain plant ABC transporters may function in actively exporting volatile compounds and antimicrobial metabolites.
In petunia flowers (Petunia hybrida), the ABC transporter PhABCG1 is involved in the active transport of volatile organic compounds. PhABCG1 is expressed in the petals of open flowers. In general, volatile compounds may promote the attraction of seed-dispersal organisms and pollinators, as well as aid in defense, signaling, allelopathy, and protection. To study the protein PhABCG1, transgenic petunia RNA interference lines were created with decreased PhABCG1 expression levels. In these transgenic lines, a decrease in emission of volatile compounds was observed. Thus, PhABCG1 is likely involved in the export of volatile compounds. Subsequent experiments involved incubating control and transgenic lines that expressed PhABCG1 to test for transport activity involving different substrates. Ultimately, PhABCG1 is responsible for the protein-mediated transport of volatile organic compounds, such as benzyl alcohol and methylbenzoate, across the plasma membrane.
Additionally in plants, ABC transporters may be involved in the transport of cellular metabolites. Pleiotropic Drug Resistance ABC transporters are hypothesized to be involved in stress response and export antimicrobial metabolites. One example of this type of ABC transporter is the protein NtPDR1. This unique ABC transporter is found in Nicotiana tabacum BY2 cells and is expressed in the presence of microbial elicitors. NtPDR1 is localized in the root epidermis and aerial trichomes of the plant. Experiments using antibodies specifically targeting NtPDR1 followed by Western blotting allowed for this determination of localization. Furthermore, it is likely that the protein NtPDR1 actively transports out antimicrobial diterpene molecules, which are toxic to the cell at high levels.
Secondary active transport
In secondary active transport, also known as cotransport or coupled transport, energy is used to transport molecules across a membrane; however, in contrast to primary active transport, there is no direct coupling of ATP. Instead, it relies upon the electrochemical potential difference created by pumping ions into and out of the cell. Permitting one ion or molecule to move down its electrochemical gradient (possibly, for a charged species, even against its concentration gradient) increases entropy and can serve as a source of energy for metabolism (e.g. in ATP synthase). The energy derived from the pumping of protons across a cell membrane is frequently used as the energy source in secondary active transport. In humans, sodium (Na+) is a commonly cotransported ion across the plasma membrane, whose electrochemical gradient is then used to power the active transport of a second ion or molecule against its gradient. In bacteria and small yeast cells, a commonly cotransported ion is hydrogen. Hydrogen pumps are also used to create an electrochemical gradient to carry out processes within cells, such as the electron transport chain, an important function of cellular respiration that happens in the mitochondrion of the cell.
In August 1960, in Prague, Robert K. Crane presented for the first time his discovery of the sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first ever proposal of flux coupling in biology.
Cotransporters can be classified as symporters and antiporters depending on whether the substances move in the same or opposite directions.
Antiporter
In an antiporter two species of ions or other solutes are pumped in opposite directions across a membrane. One of these species is allowed to flow from high to low concentration, which yields the entropic energy to drive the transport of the other solute from a low concentration region to a high one.
An example is the sodium-calcium exchanger or antiporter, which allows three sodium ions into the cell to transport one calcium out. This antiporter mechanism is important within the membranes of cardiac muscle cells in order to keep the calcium concentration in the cytoplasm low. Many cells also possess calcium ATPases, which can operate at lower intracellular concentrations of calcium and set the normal or resting concentration of this important second messenger. But the ATPase exports calcium ions more slowly: only 30 per second, versus 2000 per second for the exchanger. The exchanger comes into service when the calcium concentration rises steeply or "spikes" and enables rapid recovery. This shows that a single type of ion can be transported by several enzymes, which need not be active all the time (constitutively), but may exist to meet specific, intermittent needs.
Symporter
A symporter uses the downhill movement of one solute species from high to low concentration to move another molecule uphill from low concentration to high concentration (against its concentration gradient). Both molecules are transported in the same direction.
An example is the glucose symporter SGLT1, which co-transports one glucose (or galactose) molecule into the cell for every two sodium ions it imports into the cell. This symporter is located in the small intestines, heart, and brain. It is also located in the S3 segment of the proximal tubule in each nephron in the kidneys. Its mechanism is exploited in glucose rehydration therapy. This mechanism uses the absorption of sugar through the walls of the intestine to pull water in along with it. Defects in SGLT2 prevent effective reabsorption of glucose, causing familial renal glucosuria.
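The 2:1 sodium-glucose stoichiometry mentioned above determines how steep a glucose gradient the symporter can sustain: at thermodynamic equilibrium, the maximal accumulation ratio is $\left([\mathrm{Na}]_\text{out}/[\mathrm{Na}]_\text{in}\right)^n e^{-nF\Delta V / RT}$ for $n$ coupled sodium ions. A minimal sketch, with assumed textbook-style ion concentrations and membrane potential (not values from this article):

```python
import math

R, F, T = 8.314, 96485.0, 310.0  # J/(mol K), C/mol, K

def max_accumulation(n, na_out, na_in, delta_v):
    """Maximal [glucose]_in / [glucose]_out sustainable by a symporter that
    couples n Na+ per glucose; delta_v = V_in - V_out in volts (negative inside)."""
    return (na_out / na_in) ** n * math.exp(-n * F * delta_v / (R * T))

# Assumed: 145 mM Na+ outside, 12 mM inside, -70 mV membrane potential.
print(f"n=1: {max_accumulation(1, 145, 12, -0.070):,.0f}x")  # on the order of 10^2
print(f"n=2: {max_accumulation(2, 145, 12, -0.070):,.0f}x")  # on the order of 10^4
```

This is why 2:1 coupling, as in SGLT1, lets the intestine and kidney scavenge glucose even from very dilute solutions.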
Bulk transport
Endocytosis and exocytosis are both forms of bulk transport that move materials into and out of cells, respectively, via vesicles. In the case of endocytosis, the cellular membrane folds around the desired materials outside the cell. The ingested particle becomes trapped within a pouch, known as a vesicle, inside the cytoplasm. Often enzymes from lysosomes are then used to digest the molecules absorbed by this process. Substances that enter the cell via signal-mediated endocytosis include proteins, hormones and growth and stabilization factors. Viruses enter cells through a form of endocytosis that involves their outer membrane fusing with the membrane of the cell. This forces the viral DNA into the host cell.
Biologists distinguish two main types of endocytosis: pinocytosis and phagocytosis.
In pinocytosis, cells engulf liquid particles (in humans this process occurs in the small intestine, where cells engulf fat droplets).
In phagocytosis, cells engulf solid particles.
Exocytosis involves the removal of substances through the fusion of the outer cell membrane and a vesicle membrane. An example of exocytosis would be the transmission of neurotransmitters across a synapse between brain cells.
Riemannian manifold (https://en.wikipedia.org/wiki/Riemannian%20manifold)

In differential geometry, a Riemannian manifold is a geometric space on which many geometric notions such as distance, angles, length, volume, and curvature are defined. Euclidean space, the $n$-sphere, hyperbolic space, and smooth surfaces in three-dimensional space, such as ellipsoids and paraboloids, are all examples of Riemannian manifolds. Riemannian manifolds are named after German mathematician Bernhard Riemann, who first conceptualized them.
Formally, a Riemannian metric (or just a metric) on a smooth manifold is a choice of inner product for each tangent space of the manifold. A Riemannian manifold is a smooth manifold together with a Riemannian metric. The techniques of differential and integral calculus are used to pull geometric data out of the Riemannian metric. For example, integration leads to the Riemannian distance function, whereas differentiation is used to define curvature and parallel transport.
Any smooth surface in three-dimensional Euclidean space is a Riemannian manifold with a Riemannian metric coming from the way it sits inside the ambient space. The same is true for any submanifold of Euclidean space of any dimension. Although John Nash proved that every Riemannian manifold arises as a submanifold of Euclidean space, and although some Riemannian manifolds are naturally exhibited or defined in that way, the idea of a Riemannian manifold emphasizes the intrinsic point of view, which defines geometric notions directly on the abstract space itself without referencing an ambient space. In many instances, such as for hyperbolic space and projective space, Riemannian metrics are more naturally defined or constructed using the intrinsic point of view. Additionally, many metrics on Lie groups and homogeneous spaces are defined intrinsically by using group actions to transport an inner product on a single tangent space to the entire manifold, and many special metrics such as constant scalar curvature metrics and Kähler–Einstein metrics are constructed intrinsically using tools from partial differential equations.
Riemannian geometry, the study of Riemannian manifolds, has deep connections to other areas of math, including geometric topology, complex geometry, and algebraic geometry. Applications include physics (especially general relativity and gauge theory), computer graphics, machine learning, and cartography. Generalizations of Riemannian manifolds include pseudo-Riemannian manifolds, Finsler manifolds, and sub-Riemannian manifolds.
History
In 1827, Carl Friedrich Gauss discovered that the Gaussian curvature of a surface embedded in 3-dimensional space only depends on local measurements made within the surface (the first fundamental form). This result is known as the Theorema Egregium ("remarkable theorem" in Latin).
A map that preserves the local measurements of a surface is called a local isometry. Call a property of a surface an intrinsic property if it is preserved by local isometries and call it an extrinsic property if it is not. In this language, the Theorema Egregium says that the Gaussian curvature is an intrinsic property of surfaces.
Riemannian manifolds and their curvature were first introduced non-rigorously by Bernhard Riemann in 1854. However, they would not be formalized until much later. In fact, the more primitive concept of a smooth manifold was first explicitly defined only in 1913 in a book by Hermann Weyl.
Élie Cartan introduced the Cartan connection, one of the first concepts of a connection. Levi-Civita defined the Levi-Civita connection, a special connection on a Riemannian manifold.
Albert Einstein used the theory of pseudo-Riemannian manifolds (a generalization of Riemannian manifolds) to develop general relativity. Specifically, the Einstein field equations are constraints on the curvature of spacetime, which is a 4-dimensional pseudo-Riemannian manifold.
Definition
Riemannian metrics and Riemannian manifolds
Let $M$ be a smooth manifold. For each point $p \in M$, there is an associated vector space $T_pM$ called the tangent space of $M$ at $p$. Vectors in $T_pM$ are thought of as the vectors tangent to $M$ at $p$.
However, $T_pM$ does not come equipped with an inner product, a measuring stick that gives tangent vectors a concept of length and angle. This is an important deficiency because calculus teaches that to calculate the length of a curve, the length of vectors tangent to the curve must be defined. A Riemannian metric puts a measuring stick on every tangent space.
A Riemannian metric $g$ on $M$ assigns to each $p$ a positive-definite inner product $g_p : T_pM \times T_pM \to \mathbb{R}$ in a smooth way (see the section on regularity below). This induces a norm $\|\cdot\|_p : T_pM \to \mathbb{R}$ defined by $\|v\|_p = \sqrt{g_p(v,v)}$. A smooth manifold $M$ endowed with a Riemannian metric $g$ is a Riemannian manifold, denoted $(M, g)$. A Riemannian metric is a special case of a metric tensor.
A Riemannian metric is not to be confused with the distance function of a metric space, which is also called a metric.
The Riemannian metric in coordinates
If $(x^1, \ldots, x^n)$ are smooth local coordinates on $M$, the vectors
$$\left\{\frac{\partial}{\partial x^1}\Big|_p, \ldots, \frac{\partial}{\partial x^n}\Big|_p\right\}$$
form a basis of the vector space $T_pM$ for any $p \in U$. Relative to this basis, one can define the Riemannian metric's components at each point $p$ by
$$g_{ij}(p) := g_p\!\left(\frac{\partial}{\partial x^i}\Big|_p, \frac{\partial}{\partial x^j}\Big|_p\right).$$
These $n^2$ functions $g_{ij}$ can be put together into an $n \times n$ matrix-valued function on $U$. The requirement that $g_p$ is a positive-definite inner product then says exactly that this matrix-valued function is a symmetric positive-definite matrix at $p$.
In terms of the tensor algebra, the Riemannian metric can be written in terms of the dual basis $\{dx^1, \ldots, dx^n\}$ of the cotangent bundle as
$$g = \sum_{i,j} g_{ij}\, dx^i \otimes dx^j.$$
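To illustrate how the components $g_{ij}$ are used in practice, the following sketch evaluates the norm of a tangent vector under the round metric of the unit 2-sphere in spherical coordinates $(\theta, \varphi)$, whose component matrix is $\operatorname{diag}(1, \sin^2\theta)$; this standard example is an assumption of the sketch, not taken from the text above:

```python
import numpy as np

def round_metric(theta, phi):
    """Component matrix (g_ij) of the round metric on the unit 2-sphere
    in spherical coordinates (theta, phi); it happens to be independent of phi."""
    return np.diag([1.0, np.sin(theta) ** 2])

def norm(g, v):
    """|v|_p = sqrt(g_p(v, v)) for a vector v given by its coordinate components."""
    return float(np.sqrt(v @ g @ v))

g = round_metric(theta=np.pi / 4, phi=0.0)
v = np.array([1.0, 1.0])  # the tangent vector d/dtheta + d/dphi
print(norm(g, v))  # sqrt(1 + sin^2(pi/4)) = sqrt(1.5) ~ 1.2247
```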
Regularity of the Riemannian metric
The Riemannian metric $g$ is continuous if its components $g_{ij}$ are continuous in any smooth coordinate chart. The Riemannian metric is smooth if its components are smooth in any smooth coordinate chart. One can consider many other types of Riemannian metrics in this spirit, such as Lipschitz Riemannian metrics or measurable Riemannian metrics.
There are situations in geometric analysis in which one wants to consider non-smooth Riemannian metrics. See for instance (Gromov 1999) and (Shi and Tam 2002). However, in this article, $g$ is assumed to be smooth unless stated otherwise.
Musical isomorphism
In analogy to how an inner product $\langle \cdot, \cdot \rangle$ on a vector space $V$ induces an isomorphism between $V$ and its dual $V^*$ given by $v \mapsto \langle v, \cdot \rangle$, a Riemannian metric induces an isomorphism of bundles between the tangent bundle and the cotangent bundle. Namely, if $g$ is a Riemannian metric, then
$$X \mapsto g(X, \cdot)$$
is an isomorphism of smooth vector bundles from the tangent bundle $TM$ to the cotangent bundle $T^*M$.
Isometries
An isometry is a function between Riemannian manifolds which preserves all of the structure of Riemannian manifolds. If two Riemannian manifolds have an isometry between them, they are called isometric, and they are considered to be the same manifold for the purpose of Riemannian geometry.
Specifically, if $(M, g)$ and $(N, h)$ are two Riemannian manifolds, a diffeomorphism $f : M \to N$ is called an isometry if $f^*h = g$, that is, if
$$g_p(u, v) = h_{f(p)}\big(df_p(u), df_p(v)\big)$$
for all $p \in M$ and $u, v \in T_pM$. For example, translations and rotations are both isometries from Euclidean space (to be defined soon) to itself.
One says that a smooth map $f : M \to N$, not assumed to be a diffeomorphism, is a local isometry if every $p \in M$ has an open neighborhood $U$ such that $f|_U : U \to f(U)$ is an isometry (and thus a diffeomorphism).
Volume
An oriented $n$-dimensional Riemannian manifold $(M, g)$ has a unique $n$-form $dV_g$ called the Riemannian volume form. The Riemannian volume form is preserved by orientation-preserving isometries. The volume form gives rise to a measure on $M$ which allows measurable functions to be integrated. If $M$ is compact, the volume of $M$ is $\int_M dV_g$.
Examples
Euclidean space
Let $(x^1, \ldots, x^n)$ denote the standard coordinates on $\mathbb{R}^n$. The (canonical) Euclidean metric $g^{\mathrm{can}}$ is given by
$$g^{\mathrm{can}}\left(\sum_i a_i \frac{\partial}{\partial x^i}, \sum_j b_j \frac{\partial}{\partial x^j}\right) = \sum_i a_i b_i$$
or equivalently
$$g^{\mathrm{can}} = dx^1 \otimes dx^1 + \cdots + dx^n \otimes dx^n$$
or equivalently by its coordinate functions
$$g_{ij} = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta, which together form the matrix
$$(g_{ij}) = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}.$$
The Riemannian manifold $(\mathbb{R}^n, g^{\mathrm{can}})$ is called Euclidean space.
Submanifolds
Let $(M, g)$ be a Riemannian manifold and let $i : N \to M$ be an immersed submanifold or an embedded submanifold of $M$. The pullback $i^*g$ of $g$ is a Riemannian metric on $N$, and $(N, i^*g)$ is said to be a Riemannian submanifold of $(M, g)$.
In the case where $N \subseteq M$, the map $i : N \to M$ is given by $i(x) = x$ and the metric $i^*g$ is just the restriction of $g$ to vectors tangent along $N$. In general, the formula for $i^*g$ is
$$(i^*g)_p(v, w) = g_{i(p)}\big(di_p(v), di_p(w)\big),$$
where $di_p(v)$ is the pushforward of $v$ by $i$.
Examples:
The $n$-sphere
$$S^n = \{x \in \mathbb{R}^{n+1} : (x^1)^2 + \cdots + (x^{n+1})^2 = 1\}$$
is a smooth embedded submanifold of Euclidean space $\mathbb{R}^{n+1}$. The Riemannian metric this induces on $S^n$ is called the round metric or standard metric.
Fix real numbers $a_1, \ldots, a_{n+1}$. The ellipsoid
$$\left\{x \in \mathbb{R}^{n+1} : \frac{(x^1)^2}{a_1^2} + \cdots + \frac{(x^{n+1})^2}{a_{n+1}^2} = 1\right\}$$
is a smooth embedded submanifold of Euclidean space $\mathbb{R}^{n+1}$.
The graph of a smooth function $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth embedded submanifold of $\mathbb{R}^{n+1}$ with its standard metric.
If $M$ is not simply connected, there is a covering map $\pi : \widetilde{M} \to M$, where $\widetilde{M}$ is the universal cover of $M$. This is an immersion (since it is locally a diffeomorphism), so $\widetilde{M}$ automatically inherits a Riemannian metric. By the same principle, any smooth covering space of a Riemannian manifold inherits a Riemannian metric.
On the other hand, if $N$ already has a Riemannian metric $\tilde{g}$, then the immersion (or embedding) $i : N \to M$ is called an isometric immersion (or isometric embedding) if $\tilde{g} = i^*g$. Hence isometric immersions and isometric embeddings are Riemannian submanifolds.
Products
Let $(M, g)$ and $(N, h)$ be two Riemannian manifolds, and consider the product manifold $M \times N$. The Riemannian metrics $g$ and $h$ naturally put a Riemannian metric $\widetilde{g}$ on $M \times N$, which can be described in a few ways.
Considering the decomposition $T_{(p,q)}(M \times N) \cong T_pM \oplus T_qN$, one may define
$$\widetilde{g}_{(p,q)}(u \oplus x,\, v \oplus y) = g_p(u, v) + h_q(x, y).$$
If $(U, x)$ is a smooth coordinate chart on $M$ and $(V, y)$ is a smooth coordinate chart on $N$, then $(U \times V, (x, y))$ is a smooth coordinate chart on $M \times N$. Let $g_U$ be the representation of $g$ in the chart $(U, x)$ and let $h_V$ be the representation of $h$ in the chart $(V, y)$. The representation of $\widetilde{g}$ in the coordinates $(x, y)$ is the block-diagonal matrix
$$(\widetilde{g}_{ij}) = \begin{pmatrix} g_U & 0 \\ 0 & h_V \end{pmatrix}.$$
For example, consider the $n$-torus $T^n = S^1 \times \cdots \times S^1$. If each copy of $S^1$ is given the round metric, the product Riemannian manifold $T^n$ is called the flat torus. As another example, the Riemannian product $\mathbb{R} \times \cdots \times \mathbb{R}$, where each copy of $\mathbb{R}$ has the Euclidean metric, is isometric to $\mathbb{R}^n$ with the Euclidean metric.
Positive combinations of metrics
Let $g_1, g_2$ be Riemannian metrics on $M$. If $f_1, f_2$ are any positive smooth functions on $M$, then $f_1 g_1 + f_2 g_2$ is another Riemannian metric on $M$.
Every smooth manifold admits a Riemannian metric
Theorem: Every smooth manifold admits a (non-canonical) Riemannian metric.
This is a fundamental result. Although much of the basic theory of Riemannian metrics can be developed using only that a smooth manifold is a locally Euclidean topological space, for this result it is necessary to use that smooth manifolds are Hausdorff and paracompact. The reason is that the proof makes use of a partition of unity.
Let $M$ be a smooth manifold and $\{(U_\alpha, \varphi_\alpha)\}_{\alpha \in A}$ a locally finite atlas so that $U_\alpha \subseteq M$ are open subsets and $\varphi_\alpha : U_\alpha \to \varphi_\alpha(U_\alpha) \subseteq \mathbb{R}^n$ are diffeomorphisms. Such an atlas exists because the manifold is paracompact.
Let $\{\tau_\alpha\}_{\alpha \in A}$ be a differentiable partition of unity subordinate to the given atlas, i.e. such that $\operatorname{supp}(\tau_\alpha) \subseteq U_\alpha$ for all $\alpha \in A$.
Define a Riemannian metric $g$ on $M$ by
$$g := \sum_{\alpha \in A} \tau_\alpha \cdot \widetilde{g}_\alpha \quad \text{where} \quad \widetilde{g}_\alpha := \varphi_\alpha^* g^{\mathrm{can}}.$$
Here $g^{\mathrm{can}}$ is the Euclidean metric on $\mathbb{R}^n$ and $\varphi_\alpha^* g^{\mathrm{can}}$ is its pullback along $\varphi_\alpha$. While $\widetilde{g}_\alpha$ is only defined on $U_\alpha$, the product $\tau_\alpha \cdot \widetilde{g}_\alpha$ is defined and smooth on $M$ since $\operatorname{supp}(\tau_\alpha) \subseteq U_\alpha$. It takes the value 0 outside of $U_\alpha$. Because the atlas is locally finite, at every point the sum contains only finitely many nonzero terms, so the sum converges. It is straightforward to check that $g$ is a Riemannian metric.
An alternative proof uses the Whitney embedding theorem to embed $M$ into Euclidean space and then pulls back the metric from Euclidean space to $M$. On the other hand, the Nash embedding theorem states that, given any smooth Riemannian manifold $(M, g)$, there is an embedding $F : M \to \mathbb{R}^N$ for some $N$ such that the pullback by $F$ of the standard Riemannian metric on $\mathbb{R}^N$ is $g$. That is, the entire structure of a smooth Riemannian manifold can be encoded by a diffeomorphism to a certain embedded submanifold of some Euclidean space. Therefore, one could argue that nothing can be gained from the consideration of abstract smooth manifolds and their Riemannian metrics. However, there are many natural smooth Riemannian manifolds, such as the set of rotations of three-dimensional space and hyperbolic space, of which any representation as a submanifold of Euclidean space will fail to represent their remarkable symmetries and properties as clearly as their abstract presentations do.
Metric space structure
An admissible curve is a piecewise smooth curve $\gamma : [0, 1] \to M$ whose velocity is nonzero everywhere it is defined. The nonnegative function $t \mapsto \|\gamma'(t)\|_{\gamma(t)}$ is defined on the interval $[0, 1]$ except for at finitely many points. The length $L(\gamma)$ of an admissible curve is defined as
$$L(\gamma) = \int_0^1 \|\gamma'(t)\|_{\gamma(t)}\, dt.$$
The integrand is bounded and continuous except at finitely many points, so it is integrable. For $(M, g)$ a connected Riemannian manifold, define $d_g : M \times M \to [0, \infty)$ by
$$d_g(p, q) = \inf\{L(\gamma) : \gamma \text{ an admissible curve with } \gamma(0) = p,\ \gamma(1) = q\}.$$
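A direct numerical rendering of this length definition, reusing the round-metric example from the coordinates section; the particular curve (a quarter of the equator of the unit 2-sphere) is assumed example data:

```python
import numpy as np

def curve_length(gamma, gamma_dot, metric, n=10_000):
    """Midpoint-rule approximation of L(gamma) = int_0^1 sqrt(g(gamma', gamma')) dt."""
    ts = (np.arange(n) + 0.5) / n
    total = 0.0
    for t in ts:
        g = metric(*gamma(t))       # component matrix of g at gamma(t)
        v = gamma_dot(t)            # coordinate components of gamma'(t)
        total += np.sqrt(v @ g @ v) / n
    return total

metric = lambda theta, phi: np.diag([1.0, np.sin(theta) ** 2])  # unit 2-sphere
gamma = lambda t: (np.pi / 2, t * np.pi / 2)        # quarter of the equator
gamma_dot = lambda t: np.array([0.0, np.pi / 2])    # constant velocity
print(curve_length(gamma, gamma_dot, metric))       # ~ pi/2 ~ 1.5708
```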
Theorem: $(M, d_g)$ is a metric space, and the metric topology on $(M, d_g)$ coincides with the topology on $M$.
In verifying that $(M, d_g)$ satisfies all of the axioms of a metric space, the most difficult part is checking that $p \neq q$ implies $d_g(p, q) > 0$. Verification of the other metric space axioms is omitted.
There must be some precompact open set around $p$ which every curve from $p$ to $q$ must escape. By selecting this open set to be contained in a coordinate chart, one can reduce the claim to the well-known fact that, in Euclidean geometry, the shortest curve between two points is a line. In particular, as seen by the Euclidean geometry of a coordinate chart around $p$, any curve from $p$ to $q$ must first pass through a certain "inner radius." The assumed continuity of the Riemannian metric $g$ only allows this "coordinate chart geometry" to distort the "true geometry" by some bounded factor.
To be precise, let $(U, x)$ be a smooth coordinate chart with $x(p) = 0$ and $q \notin U$. Let $V \ni p$ be an open subset of $U$ with compact closure $\overline{V} \subseteq U$. By continuity of $g$ and compactness of $\overline{V}$, there is a positive number $\lambda$ such that $g_r(X, X) \geq \lambda \|X\|^2$ for any $r \in \overline{V}$ and any $X \in T_rM$, where $\|\cdot\|$ denotes the Euclidean norm induced by the local coordinates. Let $R$ denote $\sup\{r > 0 : B_r(0) \subseteq x(V)\}$.
Now, given any admissible curve $\gamma : [0, 1] \to M$ from $p$ to $q$, there must be some minimal $\delta > 0$ such that $\gamma(\delta) \notin V$; clearly $\gamma(\delta) \in \partial V$.
The length of $\gamma$ is at least as large as the length of the restriction of $\gamma$ to $[0, \delta]$. So
$$L(\gamma) \geq \sqrt{\lambda} \int_0^\delta \big\|(x \circ \gamma)'(t)\big\|\, dt.$$
The integral which appears here represents the Euclidean length of a curve from $0$ to $x(\gamma(\delta)) \in \partial(x(V))$, and so it is greater than or equal to $R$. So we conclude $L(\gamma) \geq \sqrt{\lambda}\, R$, and hence $d_g(p, q) \geq \sqrt{\lambda}\, R > 0$.
The observation about comparison between lengths measured by $g$ and Euclidean lengths measured in a smooth coordinate chart also verifies that the metric space topology of $(M, d_g)$ coincides with the original topological space structure of $M$.
Although the length of a curve is given by an explicit formula, it is generally impossible to write out the distance function $d_g$ by any explicit means. In fact, if $M$ is compact, there always exist points where $d_g : M \times M \to \mathbb{R}$ is non-differentiable, and it can be remarkably difficult to even determine the location or nature of these points, even in seemingly simple cases such as when $(M, g)$ is an ellipsoid.
If one works with Riemannian metrics that are merely continuous but possibly not smooth, the length of an admissible curve and the Riemannian distance function are defined exactly the same, and, as before, $(M, d_g)$ is a metric space and the metric topology on $(M, d_g)$ coincides with the topology on $M$.
Diameter
The diameter of the metric space $(M, d_g)$ is
$$\operatorname{diam}(M, d_g) = \sup\{d_g(p, q) : p, q \in M\}.$$
The Hopf–Rinow theorem shows that if $(M, d_g)$ is complete and has finite diameter, it is compact. Conversely, if $(M, d_g)$ is compact, then the function $d_g : M \times M \to \mathbb{R}$ has a maximum, since it is a continuous function on a compact metric space. This proves the following.
If $(M, d_g)$ is complete, then it is compact if and only if it has finite diameter.
This is not the case without the completeness assumption; for counterexamples one could consider any open bounded subset of a Euclidean space with the standard Riemannian metric. It is also not true that any complete metric space of finite diameter must be compact; it matters that the metric space came from a Riemannian manifold.
Connections, geodesics, and curvature
Connections
An (affine) connection is an additional structure on a Riemannian manifold that defines differentiation of one vector field with respect to another. Connections contain geometric data, and two Riemannian manifolds with different connections have different geometry.
Let $\mathfrak{X}(M)$ denote the space of vector fields on $M$. An (affine) connection
$$\nabla : \mathfrak{X}(M) \times \mathfrak{X}(M) \to \mathfrak{X}(M)$$
on $M$ is a bilinear map $(X, Y) \mapsto \nabla_X Y$ such that
For every function $f \in C^\infty(M)$, $\nabla_{fX} Y = f\, \nabla_X Y$,
The product rule $\nabla_X (fY) = X(f)\, Y + f\, \nabla_X Y$ holds.
The expression $\nabla_X Y$ is called the covariant derivative of $Y$ with respect to $X$.
Levi-Civita connection
Two Riemannian manifolds with different connections have different geometry. Thankfully, there is a natural connection associated to a Riemannian manifold called the Levi-Civita connection.
A connection $\nabla$ is said to preserve the metric if
$$X\big(g(Y, Z)\big) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z).$$
A connection $\nabla$ is torsion-free if
$$\nabla_X Y - \nabla_Y X = [X, Y],$$
where $[\cdot, \cdot]$ is the Lie bracket.
A Levi-Civita connection is a torsion-free connection that preserves the metric. Once a Riemannian metric is fixed, there exists a unique Levi-Civita connection. Note that the definition of preserving the metric uses the regularity of $g$.
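In coordinates, the Levi-Civita connection is determined by the Christoffel symbols $\Gamma^k_{ij} = \tfrac{1}{2} g^{kl}\,(\partial_i g_{lj} + \partial_j g_{li} - \partial_l g_{ij})$, a standard formula not derived in the text above. A small symbolic sketch, assuming the round metric on the unit 2-sphere as the example:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])  # round metric on the 2-sphere
g_inv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} of the Levi-Civita connection for the metric g above."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[k, l]
        * (sp.diff(g[l, j], coords[i]) + sp.diff(g[l, i], coords[j])
           - sp.diff(g[i, j], coords[l]))
        for l in range(2)))

print(christoffel(0, 1, 1))  # Gamma^theta_{phi phi} = -sin(theta)cos(theta), up to trig rewriting
print(christoffel(1, 0, 1))  # Gamma^phi_{theta phi} = cos(theta)/sin(theta)
```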
Covariant derivative along a curve
If $\gamma : [0, 1] \to M$ is a smooth curve, a smooth vector field along $\gamma$ is a smooth map $X : [0, 1] \to TM$ such that $X(t) \in T_{\gamma(t)}M$ for all $t \in [0, 1]$. The set of smooth vector fields along $\gamma$ is a vector space under pointwise vector addition and scalar multiplication. One can also pointwise multiply a smooth vector field along $\gamma$ by a smooth function $f : [0, 1] \to \mathbb{R}$:
$$(fX)(t) = f(t)\, X(t)$$ for $t \in [0, 1]$.
Let $X$ be a smooth vector field along $\gamma$. If $\widetilde{X}$ is a smooth vector field on a neighborhood of the image of $\gamma$ such that $X(t) = \widetilde{X}_{\gamma(t)}$, then $\widetilde{X}$ is called an extension of $X$.
Given a fixed connection $\nabla$ on $M$ and a smooth curve $\gamma$, there is a unique operator $D_t$, called the covariant derivative along $\gamma$, such that:
If $\widetilde{X}$ is an extension of $X$, then $D_t X(t) = \nabla_{\gamma'(t)} \widetilde{X}$.
Geodesics
Geodesics are curves with no intrinsic acceleration. Equivalently, geodesics are curves that locally take the shortest path between two points. They are the generalization of straight lines in Euclidean space to arbitrary Riemannian manifolds. An ant living in a Riemannian manifold walking straight ahead without making any effort to accelerate or turn would trace out a geodesic.
Fix a connection $\nabla$ on $M$. Let $\gamma : [0, 1] \to M$ be a smooth curve. The acceleration of $\gamma$ is the vector field $D_t \gamma'$ along $\gamma$. If $D_t \gamma' = 0$ for all $t$, $\gamma$ is called a geodesic.
For every $p \in M$ and $v \in T_pM$, there exists a geodesic $\gamma : I \to M$ defined on some open interval $I$ containing 0 such that $\gamma(0) = p$ and $\gamma'(0) = v$. Any two such geodesics agree on their common domain. Taking the union over all open intervals containing 0 on which a geodesic satisfying $\gamma(0) = p$ and $\gamma'(0) = v$ exists, one obtains a geodesic called a maximal geodesic, of which every geodesic satisfying $\gamma(0) = p$ and $\gamma'(0) = v$ is a restriction.
Every curve $\gamma$ that has the shortest length of any admissible curve with the same endpoints as $\gamma$ is a geodesic (in a unit-speed reparameterization).
Examples
The nonconstant maximal geodesics of the Euclidean plane are exactly the straight lines. This agrees with the fact from Euclidean geometry that the shortest path between two points is a straight line segment.
The nonconstant maximal geodesics of the sphere $S^2$ with the round metric are exactly the great circles. Since the Earth is approximately a sphere, this means that the shortest path a plane can fly between two locations on Earth is a segment of a great circle.
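Concretely, the geodesic distance between two points on a round sphere of radius $R$ is $R$ times the central angle between them. A minimal sketch treating the Earth as a perfect sphere, with a mean radius of roughly 6371 km assumed as part of the model:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle (geodesic) distance between two points given in degrees,
    using the spherical law of cosines for the central angle."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_angle = (math.sin(p1) * math.sin(p2)
                 + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    central = math.acos(max(-1.0, min(1.0, cos_angle)))  # clamp rounding error
    return radius_km * central

# Approximate London -> New York, about 5,570 km along the great circle.
print(great_circle_km(51.5, -0.1, 40.7, -74.0))
```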
Hopf–Rinow theorem
The Riemannian manifold $M$ with its Levi-Civita connection is geodesically complete if the domain of every maximal geodesic is $(-\infty, \infty)$. The plane $\mathbb{R}^2$ is geodesically complete. On the other hand, the punctured plane $\mathbb{R}^2 \smallsetminus \{(0, 0)\}$ with the restriction of the Riemannian metric from $\mathbb{R}^2$ is not geodesically complete, as the maximal geodesic with initial conditions $p = (1, 1)$, $v = (1, 1)$ runs into the puncture at $t = -1$ and so does not have domain $\mathbb{R}$.
The Hopf–Rinow theorem characterizes geodesically complete manifolds.
Theorem: Let $(M, g)$ be a connected Riemannian manifold. The following are equivalent:
The metric space $(M, d_g)$ is complete (every $d_g$-Cauchy sequence converges),
All closed and bounded subsets of $M$ are compact,
$(M, g)$ is geodesically complete.
Parallel transport
In Euclidean space, all tangent spaces are canonically identified with each other via translation, so it is easy to move vectors from one tangent space to another. Parallel transport is a way of moving vectors from one tangent space to another along a curve in the setting of a general Riemannian manifold. Given a fixed connection, there is a unique way to do parallel transport.
Specifically, call a smooth vector field $V$ along a smooth curve $\gamma$ parallel along $\gamma$ if $D_t V = 0$ identically. Fix a curve $\gamma : [0, 1] \to M$ with $\gamma(0) = p$ and $\gamma(1) = q$. To parallel transport a vector $v \in T_pM$ to a vector in $T_qM$ along $\gamma$, first extend $v$ to a vector field parallel along $\gamma$, and then take the value of this vector field at $q$.
The images below show parallel transport induced by the Levi-Civita connection associated to two different Riemannian metrics on the punctured plane $\mathbb{R}^2 \smallsetminus \{(0, 0)\}$. The curve the parallel transport is done along is the unit circle. In polar coordinates, the metric on the left is the standard Euclidean metric $dr^2 + r^2\, d\theta^2$, while the metric on the right is $dr^2 + d\theta^2$. This second metric has a singularity at the origin, so it does not extend past the puncture, but the first metric extends to the entire plane.
Warning: This is parallel transport on the punctured plane along the unit circle, not parallel transport on the unit circle. Indeed, in the first image, the vectors fall outside of the tangent space to the unit circle.
Riemann curvature tensor
The Riemann curvature tensor measures precisely the extent to which parallel transporting vectors around a small rectangle is not the identity map. The Riemann curvature tensor is 0 at every point if and only if the manifold is locally isometric to Euclidean space.
Fix a connection $\nabla$ on $M$. The Riemann curvature tensor is the map $R : \mathfrak{X}(M) \times \mathfrak{X}(M) \times \mathfrak{X}(M) \to \mathfrak{X}(M)$ defined by
$$R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X, Y]} Z,$$
where $[X, Y]$ is the Lie bracket of vector fields. The Riemann curvature tensor is a $(1, 3)$-tensor field.
Ricci curvature tensor
Fix a connection $\nabla$ on $M$. The Ricci curvature tensor is
$$\operatorname{Ric}(X, Y) = \operatorname{tr}\big(Z \mapsto R(Z, X)Y\big),$$
where $\operatorname{tr}$ is the trace. The Ricci curvature tensor is a covariant 2-tensor field.
Einstein manifolds
The Ricci curvature tensor plays a defining role in the theory of Einstein manifolds, which has applications to the study of gravity. A (pseudo-)Riemannian metric $g$ is called an Einstein metric if Einstein's equation
$$\operatorname{Ric} = \lambda g \quad \text{for some constant } \lambda$$
holds, and a (pseudo-)Riemannian manifold whose metric is Einstein is called an Einstein manifold. Examples of Einstein manifolds include Euclidean space, the $n$-sphere, hyperbolic space, and complex projective space with the Fubini-Study metric.
Scalar curvature
The scalar curvature of a Riemannian manifold is the trace of its Ricci curvature tensor with respect to the metric, $S = \operatorname{tr}_g \operatorname{Ric}$; it assigns a single real number to each point of the manifold.
Constant curvature and space forms
A Riemannian manifold is said to have constant curvature $\kappa$ if every sectional curvature equals the number $\kappa$. This is equivalent to the condition that, relative to any coordinate chart, the Riemann curvature tensor can be expressed in terms of the metric tensor as
$$R_{ijkl} = \kappa\,(g_{il} g_{jk} - g_{ik} g_{jl}).$$
This implies that the Ricci curvature is given by $R_{jk} = (n - 1)\kappa\, g_{jk}$ and the scalar curvature is $n(n - 1)\kappa$, where $n$ is the dimension of the manifold. In particular, every Riemannian manifold of constant curvature is an Einstein manifold, thereby having constant scalar curvature. As found by Bernhard Riemann in his 1854 lecture introducing Riemannian geometry, the locally-defined Riemannian metric
$$g = \frac{(dx^1)^2 + \cdots + (dx^n)^2}{\left(1 + \frac{\kappa}{4}\big((x^1)^2 + \cdots + (x^n)^2\big)\right)^2}$$
has constant curvature $\kappa$. Any two Riemannian manifolds of the same constant curvature are locally isometric, and so it follows that any Riemannian manifold of constant curvature can be covered by coordinate charts relative to which the metric has the above form.
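As a sanity check of these formulas on the most familiar example, the sphere of radius $r$ (a standard computation, stated here for illustration rather than derived in the text):
$$K = \frac{1}{r^2}, \qquad \operatorname{Ric} = \frac{n - 1}{r^2}\, g, \qquad S = \frac{n(n - 1)}{r^2},$$
so the unit sphere is an Einstein manifold with $\lambda = n - 1$, and it is the model space of constant curvature $1$ appearing in the classification below.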
A Riemannian space form is a Riemannian manifold with constant curvature which is additionally connected and geodesically complete. A Riemannian space form is said to be a spherical space form if the curvature is positive, a Euclidean space form if the curvature is zero, and a hyperbolic space form or hyperbolic manifold if the curvature is negative. In any dimension, the sphere with its standard Riemannian metric, Euclidean space, and hyperbolic space are Riemannian space forms of constant curvature $1$, $0$, and $-1$ respectively. Furthermore, the Killing–Hopf theorem says that any simply-connected spherical space form is homothetic to the sphere, any simply-connected Euclidean space form is homothetic to Euclidean space, and any simply-connected hyperbolic space form is homothetic to hyperbolic space.
Using the covering manifold construction, any Riemannian space form is isometric to the quotient manifold of a simply-connected Riemannian space form, modulo a certain group action of isometries. For example, the isometry group of the $n$-sphere is the orthogonal group $O(n + 1)$. Given any finite subgroup $G$ thereof in which only the identity matrix possesses $1$ as an eigenvalue, the natural group action of the orthogonal group on the $n$-sphere restricts to a group action of $G$, with the quotient manifold $S^n / G$ inheriting a geodesically complete Riemannian metric of constant curvature $1$. Up to homothety, every spherical space form arises in this way; this largely reduces the study of spherical space forms to problems in group theory. For instance, this can be used to show directly that every even-dimensional spherical space form is homothetic to the standard metric on either the sphere or real projective space. There are many more odd-dimensional spherical space forms, although there are known algorithms for their classification. The list of three-dimensional spherical space forms is infinite but explicitly known, and includes the lens spaces and the Poincaré dodecahedral space.
The case of Euclidean and hyperbolic space forms can likewise be reduced to group theory, based on study of the isometry group of Euclidean space and hyperbolic space. For example, the class of two-dimensional Euclidean space forms includes Riemannian metrics on the Klein bottle, the Möbius strip, the torus, and the cylinder $S^1 \times \mathbb{R}$, along with the Euclidean plane. Unlike the case of two-dimensional spherical space forms, in some cases two space form structures on the same manifold are not homothetic. The case of two-dimensional hyperbolic space forms is even more complicated, having to do with Teichmüller space. In three dimensions, the Euclidean space forms are known, while the geometry of hyperbolic space forms in three and higher dimensions remains an area of active research known as hyperbolic geometry.
Riemannian metrics on Lie groups
Left-invariant metrics on Lie groups
Let $G$ be a Lie group, such as the group of rotations in three-dimensional space. Using the group structure, any inner product on the tangent space at the identity (or any other particular tangent space) can be transported to all other tangent spaces to define a Riemannian metric. Formally, given an inner product $g_e$ on the tangent space at the identity, the inner product on the tangent space at an arbitrary point $p$ is defined by
$$g_p(u, v) = g_e\big(dL_{p^{-1}}(u),\, dL_{p^{-1}}(v)\big),$$
where for arbitrary $q \in G$, $L_q$ is the left multiplication map $G \to G$ sending a point $x$ to $qx$. Riemannian metrics constructed this way are left-invariant; right-invariant Riemannian metrics could be constructed likewise using the right multiplication map instead.
The Levi-Civita connection and curvature of a general left-invariant Riemannian metric can be computed explicitly in terms of $g_e$, the adjoint representation of $G$, and the Lie algebra associated to $G$. These formulas simplify considerably in the special case of a Riemannian metric which is bi-invariant (that is, simultaneously left- and right-invariant). All left-invariant metrics have constant scalar curvature.
Left- and bi-invariant metrics on Lie groups are an important source of examples of Riemannian manifolds. Berger spheres, constructed as left-invariant metrics on the special unitary group SU(2), are among the simplest examples of the collapsing phenomena, in which a simply-connected Riemannian manifold can have small volume without having large curvature. They also give an example of a Riemannian metric which has constant scalar curvature but which is not Einstein, or even of parallel Ricci curvature. Hyperbolic space can be given a Lie group structure relative to which the metric is left-invariant. Any bi-invariant Riemannian metric on a Lie group has nonnegative sectional curvature, giving a variety of such metrics: a Lie group can be given a bi-invariant Riemannian metric if and only if it is the product of a compact Lie group with an abelian Lie group.
Homogeneous spaces
A Riemannian manifold is said to be homogeneous if for every pair of points and in , there is some isometry of the Riemannian manifold sending to . This can be rephrased in the language of group actions as the requirement that the natural action of the isometry group is transitive. Every homogeneous Riemannian manifold is geodesically complete and has constant scalar curvature.
Up to isometry, all homogeneous Riemannian manifolds arise by the following construction. Given a Lie group $G$ with compact subgroup $K$ which does not contain any nontrivial normal subgroup of $G$, fix any complemented subspace $W$ of the Lie algebra of $K$ within the Lie algebra of $G$. If this subspace is invariant under the linear map $\operatorname{Ad}(k) : W \to W$ for any element $k$ of $K$, then $G$-invariant Riemannian metrics on the coset space $G/K$ are in one-to-one correspondence with those inner products on $W$ which are invariant under $\operatorname{Ad}(k)$ for every element $k$ of $K$. Each such Riemannian metric is homogeneous, with $G$ naturally viewed as a subgroup of the full isometry group.
The above example of Lie groups with left-invariant Riemannian metrics arises as a very special case of this construction, namely when $K$ is the trivial subgroup containing only the identity element. The calculations of the Levi-Civita connection and the curvature referenced there can be generalized to this context, where now the computations are formulated in terms of the inner product on $W$, the Lie algebra of $G$, and the direct sum decomposition of the Lie algebra of $G$ into the Lie algebra of $K$ and $W$. This reduces the study of the curvature of homogeneous Riemannian manifolds largely to algebraic problems. This reduction, together with the flexibility of the above construction, makes the class of homogeneous Riemannian manifolds very useful for constructing examples.
Symmetric spaces
A connected Riemannian manifold $M$ is said to be symmetric if for every point $p$ of $M$ there exists some isometry of the manifold with $p$ as a fixed point and for which the negation of the differential at $p$ is the identity map. Every Riemannian symmetric space is homogeneous, and consequently is geodesically complete and has constant scalar curvature. However, Riemannian symmetric spaces also have a much stronger curvature property not possessed by most homogeneous Riemannian manifolds, namely that the Riemann curvature tensor and Ricci curvature are parallel. Riemannian manifolds with this curvature property, which could loosely be phrased as "constant Riemann curvature tensor" (not to be confused with constant curvature), are said to be locally symmetric. This property nearly characterizes symmetric spaces; Élie Cartan proved in the 1920s that a locally symmetric Riemannian manifold which is geodesically complete and simply-connected must in fact be symmetric.
Many of the fundamental examples of Riemannian manifolds are symmetric. The most basic include the sphere and real projective spaces with their standard metrics, along with hyperbolic space. The complex projective space, quaternionic projective space, and Cayley plane are analogues of the real projective space which are also symmetric, as are complex hyperbolic space, quaternionic hyperbolic space, and Cayley hyperbolic space, which are instead analogues of hyperbolic space. Grassmannian manifolds also carry natural Riemannian metrics making them into symmetric spaces. Among the Lie groups with left-invariant Riemannian metrics, those which are bi-invariant are symmetric.
Based on their algebraic formulation as special kinds of homogeneous spaces, Cartan achieved an explicit classification of symmetric spaces which are irreducible, referring to those which cannot be locally decomposed as product spaces. Every such space is an example of an Einstein manifold; among them only the one-dimensional manifolds have zero scalar curvature. These spaces are important from the perspective of Riemannian holonomy. As found in the 1950s by Marcel Berger, any Riemannian manifold which is simply-connected and irreducible is either a symmetric space or has Riemannian holonomy belonging to a list of only seven possibilities. Six of the seven exceptions to symmetric spaces in Berger's classification fall into the fields of Kähler geometry, quaternion-Kähler geometry, G2 geometry, and Spin(7) geometry, each of which study Riemannian manifolds equipped with certain extra structures and symmetries. The seventh exception is the study of 'generic' Riemannian manifolds with no particular symmetry, as reflected by the maximal possible holonomy group.
Infinite-dimensional manifolds
The statements and theorems above are for finite-dimensional manifolds—manifolds whose charts map to open subsets of $\mathbb{R}^n$. These can be extended, to a certain degree, to infinite-dimensional manifolds; that is, manifolds that are modeled after a topological vector space, for example Fréchet, Banach, and Hilbert manifolds.
Definitions
Riemannian metrics are defined in a way similar to the finite-dimensional case. However, there is a distinction between two types of Riemannian metrics:
A weak Riemannian metric on $M$ is a smooth function $g : TM \times_M TM \to \mathbb{R}$ such that for any $x \in M$ the restriction $g_x : T_xM \times T_xM \to \mathbb{R}$ is an inner product on $T_xM$.
A strong Riemannian metric on $M$ is a weak Riemannian metric such that $g_x$ induces the topology on $T_xM$. If $g$ is a strong Riemannian metric, then $M$ must be a Hilbert manifold.
Examples
If $(H, \langle \cdot, \cdot \rangle)$ is a Hilbert space, then for any $x \in H$ one can identify $T_xH$ with $H$. The metric $g_x(u, v) = \langle u, v \rangle$ for all $x, u, v \in H$ is a strong Riemannian metric.
Let $(M, g)$ be a compact Riemannian manifold and denote by $\operatorname{Diff}(M)$ its diffeomorphism group. The latter is a smooth manifold and in fact, a Lie group. Its tangent space at the identity is the set of smooth vector fields on $M$. Let $\mu$ be a volume form on $M$. The $L^2$ weak Riemannian metric on $\operatorname{Diff}(M)$, denoted $G$, is defined as follows. Let $f \in \operatorname{Diff}(M)$ and $u, v \in T_f \operatorname{Diff}(M)$. Then
$$G_f(u, v) = \int_M g_{f(x)}\big(u(x), v(x)\big)\, d\mu(x).$$
Metric space structure
Length of curves and the Riemannian distance function are defined in a way similar to the finite-dimensional case. The distance function $d_g : M \times M \to [0, \infty)$, called the geodesic distance, is always a pseudometric (a metric that does not separate points), but it may not be a metric. In the finite-dimensional case, the proof that the Riemannian distance function separates points uses the existence of a precompact open set around any point. In the infinite-dimensional case, open sets are no longer precompact, so the proof fails.
If $g$ is a strong Riemannian metric on $M$, then $d_g$ separates points (hence is a metric) and induces the original topology.
If $g$ is a weak Riemannian metric, $d_g$ may fail to separate points. In fact, it may even be identically 0. For example, if $(M, g)$ is a compact Riemannian manifold, then the $L^2$ weak Riemannian metric on $\operatorname{Diff}(M)$ induces vanishing geodesic distance.
Hopf–Rinow theorem
In the case of strong Riemannian metrics, one part of the finite-dimensional Hopf–Rinow theorem still holds.
Theorem: Let $(M, g)$ be a strong Riemannian manifold. Then metric completeness (in the metric $d_g$) implies geodesic completeness.
However, a geodesically complete strong Riemannian manifold might not be metrically complete and it might have closed and bounded subsets that are not compact. Further, a strong Riemannian manifold for which all closed and bounded subsets are compact might not be geodesically complete.
If $g$ is a weak Riemannian metric, then no notion of completeness implies the other in general.
Cucurbita (https://en.wikipedia.org/wiki/Cucurbita)

Cucurbita is a genus of herbaceous fruit-bearing plants in the gourd family, Cucurbitaceae (also known as cucurbits), native to the Andes and Mesoamerica. Five edible species are grown and consumed for their flesh and seeds. They are variously known as squash, pumpkin, or gourd, depending on species, variety, and local parlance. Other kinds of gourd, also called bottle-gourds, are native to Africa and belong to the genus Lagenaria, which is in the same family and subfamily as Cucurbita but in a different tribe; their young fruits are eaten much like those of the Cucurbita species.
Most Cucurbita species are herbaceous vines that grow several meters in length and have tendrils, but non-vining "bush" cultivars of C. pepo and C. maxima have also been developed. The yellow or orange flowers on a Cucurbita plant are of two types: female and male. The female flowers produce the fruit and the male flowers produce pollen. Many North and Central American species are visited by specialist bee pollinators, but other insects with more general feeding habits, such as honey bees, also visit.
There is debate about the taxonomy of the genus and the number of accepted species varies from 13 to 30. The five domesticated species are Cucurbita argyrosperma, C. ficifolia, C. maxima, C. moschata, and C. pepo, all of which can be treated as winter squash because the full-grown fruits can be stored for months. However, C. pepo includes some cultivars that are better used only as summer squash.
The fruits of the genus Cucurbita are good sources of nutrients, such as vitamin A and vitamin C, among other nutrients according to species. The fruits have many culinary uses including pumpkin pie, biscuits, bread, desserts, puddings, beverages, and soups; they are now cultivated worldwide. Although botanical fruits, Cucurbita gourds such as squash are typically cooked and eaten as vegetables. Pumpkins see more varied use, and are eaten both as vegetables and as desserts such as pumpkin pie.
Description
Cucurbita species fall into two main groups. The first group consists of annual or short-lived perennial vines which are mesophytic, meaning they require a more or less continuous water supply. The second group are perennials growing in arid zones which are xerophytic, meaning they tolerate dry conditions. Cultivated Cucurbita species were derived from the first group. As it grows in height or length, the plant stem produces tendrils to help it climb adjacent plants and structures or extend along the ground. Most species do not readily root from the nodes; a notable exception is C. ficifolia, and the four other cultivated mesophytes do this to a lesser extent. The vine of the perennial Cucurbita can become semiwoody if left to grow. There is wide variation in size, shape, and color among Cucurbita fruits, even within a single species; C. ficifolia is an exception, being highly uniform in appearance. The morphological variation in the species C. pepo and C. maxima is so vast that their various subspecies and cultivars have been misidentified as entirely separate species.
The typical cultivated Cucurbita species has five-lobed or palmately divided leaves with long petioles, with the leaves alternately arranged on the stem. The stems in some species are angular. All of the above-ground parts may be hairy with various types of trichomes, which are often hardened and sharp. Spring-like tendrils grow from each node and are branching in some species. C. argyrosperma has ovate-cordate (egg-shaped to heart-shaped) leaves. The shape of C. pepo leaves varies widely. C. moschata plants can have light or dense pubescence. C. ficifolia leaves are slightly angular and have light pubescence. The leaves of all four of these species may or may not have white spots.
The species are monoecious, with unisexual male (staminate) and female (pistillate) flowers on a single plant and these grow singly, appearing from the leaf axils. Flowers have five fused yellow to orange petals (the corolla) and a green bell-shaped calyx. Male flowers in Cucurbitaceae generally have five stamens, but in Cucurbita there are only three, and their anthers are joined so that there appears to be one. Female flowers have thick pedicels, and an inferior ovary with 3–5 stigmas that each have two lobes. The female flowers of C. argyrosperma and C. ficifolia have larger corollas than the male flowers. Female flowers of C. pepo have a small calyx, but the calyx of C. moschata male flowers is comparatively short.
Cucurbita fruits are large and fleshy. Botanists classify the Cucurbita fruit as a pepo, which is a special type of berry derived from an inferior ovary, with a thick outer wall or rind with hypanthium tissue forming an exocarp around the ovary, and a fleshy interior composed of mesocarp and endocarp. The term "pepo" is used primarily for Cucurbitaceae fruits, where this fruit type is common, but the fruits of Passiflora and Carica are sometimes also pepos. The seeds, which are attached to the ovary wall (parietal placentation) and not to the center, are large and fairly flat with a large embryo that consists almost entirely of two cotyledons. Fruit size varies considerably: wild fruit specimens can be as small as and some domesticated specimens can weigh well over . The current world record was set in 2014 by Beni Meier of Switzerland with a pumpkin.
Reproductive biology
All species of Cucurbita have 20 pairs of chromosomes.
Many North and Central American species are visited by specialist pollinators in the apid tribe Eucerini, especially the genera Peponapis and Xenoglossa, and these squash bees can be crucial to the flowers producing fruit after pollination.
When more pollen is applied to the stigma, more seeds are produced in the fruits, and the fruits are larger with a greater likelihood of maturation, an effect called xenia. Competitively grown specimens are therefore often hand-pollinated to maximize the number of seeds in the fruit. Seedlessness is known to occur in certain cultivars of C. pepo.
Critical factors in flowering and fruit set are physiological, having to do with the age of the plant and whether it already has developing fruit. The plant hormones ethylene and auxin are key in fruit set and development. Ethylene promotes the production of female flowers. When a plant already has a fruit developing, subsequent female flowers on the plant are less likely to mature, a phenomenon called "first-fruit dominance", and male flowers are more frequent, an effect that appears due to reduced natural ethylene production within the plant stem. Ethephon, a plant growth regulator product that is converted to ethylene after metabolism by the plant, can be used to increase fruit and seed production. Although Cucurbita species can generally produce healthy fruit after pollination from the same plant, inbreeding depression can significantly reduce seed number and fruit size.
The plant hormone gibberellin, produced in the stamens, is essential for the development of all parts of the male flowers. The development of female flowers is not yet understood. Gibberellin is also involved in other developmental processes of plants, such as seed and stem growth.
Germination and seedling growth
Seeds with maximum germination potential develop (in C. moschata) by 45 days after anthesis, and seed weight reaches its maximum 70 days after anthesis. Some varieties of C. pepo germinate best with eight hours of sunlight daily and a planting depth of . Seeds planted deeper than are not likely to germinate. In C. foetidissima, a weedy species, plants younger than 19 days old are not able to sprout from the roots after the shoots are removed. In a seed batch with a 90 percent germination rate, over 90 percent of the plants had sprouted within 29 days of planting.
Experiments have shown that when more pollen is applied to the stigma, not only does the fruit contain more seeds and grow larger (the xenia effect mentioned above), but germination of the seeds is also faster and more likely, and the seedlings are larger. Various combinations of mineral nutrients and light have a significant effect during the various stages of plant growth, and these effects vary significantly between the different species of Cucurbita. A type of stored phosphorus called phytate forms in seed tissues as spherical crystalline inclusions in protein bodies called globoids. Along with other nutrients, phytate is used completely during seedling growth. Heavy metal contamination, including cadmium, has a significant negative impact on plant growth. Cucurbita plants grown in the spring tend to grow larger than those grown in the autumn.
Taxonomy
Cucurbita was formally described in a way that meets the requirements of modern botanical nomenclature by Linnaeus in his Genera Plantarum, the fifth edition of 1754, in conjunction with the 1753 first edition of Species Plantarum. Cucurbita pepo is the type species of the genus. Linnaeus initially included the species C. pepo, C. verrucosa, and C. melopepo (the latter two now included in C. pepo), as well as C. citrullus (watermelon, now Citrullus lanatus) and C. lagenaria (now Lagenaria siceraria); the last two are not Cucurbita but are in the family Cucurbitaceae.
The Cucurbita digitata, C. foetidissima, C. galeotti, and C. pedatifolia species groups are xerophytes, arid zone perennials with storage roots; the remainder, including the five domesticated species, are all mesophytic annuals or short-lived perennials with no storage roots. The five domesticated species are mostly isolated from each other by sterility barriers and have different physiological characteristics. Some cross pollinations can occur: C. pepo with C. argyrosperma and C. moschata; and C. maxima with C. moschata. Cross pollination does occur readily within the family Cucurbitaceae. The buffalo gourd (C. foetidissima) has been used as an intermediary, as it can be crossed with all the common Cucurbita.
Various taxonomic treatments have been proposed for Cucurbita, ranging from 13 to 30 species. In 1990, Cucurbita expert Michael Nee classified them into the following oft-cited 13 species groups (27 species total), listed by group and alphabetically, with geographic origin:
C. argyrosperma (synonym C. mixta) – cushaw pumpkin; origin: Mexico
C. kellyana, origin: Pacific coast of western Mexico
C. palmeri, origin: Pacific coast of northwestern Mexico
C. sororia, origin: Pacific coast Mexico to Nicaragua, northeastern Mexico
C. digitata – fingerleaf gourd; origin: southwestern United States (USA), northwestern Mexico
C. californica
C. cordata
C. cylindrata
C. palmata
C. ecuadorensis, origin: Ecuador's Pacific coast
C. ficifolia – figleaf gourd, chilacayote, alcayota; origin: Mexico, Panama, northern Chile and Argentina
C. foetidissima – stinking gourd, buffalo gourd; origin: Mexico
C. scabridifolia, likely a natural hybrid of C. foetidissima and C. pedatifolia
C. galeottii, little known; origin: Oaxaca, Mexico
C. lundelliana, origin: Mexico, Guatemala, Belize
C. maxima – winter squash, pumpkin; origin: Argentina, Bolivia, Ecuador
C. andreana, origin – Argentina
C. moschata – butternut squash, 'Dickinson' pumpkin, golden cushaw; origin: Bolivia, Colombia, Ecuador, Mexico, Panama, Puerto Rico, Venezuela
C. okeechobeensis, origin: Florida
C. martinezii, origin: Mexican Gulf Coast and foothills
C. pedatifolia, origin: Querétaro, Mexico
C. moorei
C. pepo – field pumpkin, summer squash, zucchini, vegetable marrow, courgette, acorn squash; origin: Mexico, US
C. fraterna, origin: Tamaulipas and Nuevo León, Mexico
C. texana, origin: Texas, US
C. radicans – calabacilla, calabaza de coyote; origin: Central Mexico
C. gracilior
The taxonomy by Nee closely matches the species groupings reported in a pair of studies by a botanical team led by Rhodes and Bemis in 1968 and 1970 based on statistical groupings of several phenotypic traits of 21 species. Seeds for studying additional species members were not available. Sixteen of the 21 species were grouped into five clusters with the remaining five being classified separately:
C. digitata, C. palmata, C. californica, C. cylindrata, C. cordata
C. martinezii, C. okeechobeensis, C. lundelliana
C. sororia, C. gracilior, C. palmeri; C. argyrosperma (reported as C. mixta) was considered close to the three previous species
C. maxima, C. andreana
C. pepo, C. texana
C. moschata, C. ficifolia, C. pedatifolia, C. foetidissima, and C. ecuadorensis were placed in their own separate species groups as they were not considered significantly close to any of the other species studied.
Phylogeny
The full phylogeny of this genus is unknown, and research was ongoing in 2014. A cladogram of Cucurbita phylogeny has been constructed from a 2002 study of mitochondrial DNA by Sanjur and colleagues.
Distribution and habitat
The ancestral species of the genus Cucurbita were present in the Americas before the arrival of humans, and are native to the Americas. The likely center of origin is southern Mexico, spreading south through what is now known as Mesoamerica, into South America, and north to what is now the southwestern United States. Evolutionarily speaking, the genus is relatively recent in origin, dating back only to the Holocene, whereas the family Cucurbitaceae, represented by Bryonia-like seeds, dates to the Paleocene. Recent genomic studies support the idea that the Cucurbita genus underwent a whole-genome duplication event, increasing the number of chromosomes and accelerating the rate at which its genome evolves relative to other cucurbits. No species within the genus is entirely genetically isolated. C. moschata can intercross with all Cucurbita species, though the hybrid offspring may not be fertile unless they become polyploid.
Evidence of domestication of Cucurbita goes back over 8,000 years from the southernmost parts of Canada down to Argentina and Chile. Centers of domestication stretch from the Mississippi River watershed and Texas down through Mexico and Central America to northern and western South America. Of the 27 species that Nee delineates, five are domesticated. Four of these, C. argyrosperma, C. ficifolia, C. moschata, and C. pepo, originated and were domesticated in Mesoamerica; the fifth, C. maxima, originated and was domesticated in South America.
Within C. pepo, the pumpkins, the scallops, and possibly the crooknecks are ancient and were domesticated at different times and places. The domesticated forms of C. pepo have larger fruits than non-domesticated forms and seeds that are larger but fewer in number. In a 1989 study on the origins and development of C. pepo, botanist Harry Paris suggested that the original wild specimen had a small round fruit and that the modern pumpkin is its direct descendant. He suggested that the crookneck, ornamental gourd, and scallop are early variants and that the acorn squash is a cross between the scallop and the pumpkin.
C. argyrosperma is not as widespread as the other species. The wild form C. a. subsp. sororia is found from Mexico to Nicaragua, and cultivated forms are used in a somewhat wider area stretching from Panama to the southeastern United States. It was probably bred for its seeds, which are large and high in oil and protein, but its flesh is of poorer quality than that of C. moschata and C. pepo. It is grown in a wide altitudinal range: from sea level to as high as in dry areas, usually with the use of irrigation, or in areas with a defined rainy season, where seeds are sown in May and June.
C. ficifolia and C. moschata were originally thought to be Asiatic in origin, but this has been disproven. The origin of C. ficifolia is Latin America, most likely southern Mexico, Central America, or the Andes. It grows at elevations ranging from in areas with heavy rainfall. It does not hybridize well with other cultivated species as it has significantly different enzymes and chromosomes.
C. maxima originated in South America over 4,000 years ago, probably in Argentina and Uruguay. The plants are sensitive to frost, and they prefer both bright sunlight and soil with a pH of 6.0 to 7.0. C. maxima did not start to spread into North America until after the arrival of Columbus. Varieties were in use by native peoples of the United States by the 16th century. Types of C. maxima include triloba, zapallito, zipinka, Banana, Delicious, Hubbard, Marrow (C. maxima Marrow), Show, and Turban.
C. moschata is native to Latin America, but the precise location of origin is uncertain. It has been present in Mexico, Belize, Guatemala, and Peru for 4,000–6,000 years and has spread to Bolivia, Ecuador, Panama, Puerto Rico, and Venezuela. This species is closely related to C. argyrosperma. A variety known as the Seminole Pumpkin has been cultivated in Florida since before the arrival of Columbus. Its leaves are wide. It generally grows at low elevations in hot climates with heavy rainfall, but some varieties have been found above . Groups of C. moschata include Cheese, Crookneck (C. moschata), and Bell.
C. pepo is one of the oldest, if not the oldest, domesticated species, with the oldest known locations being Oaxaca, Mexico, 8,000–10,000 years ago, and Ocampo, Tamaulipas, Mexico, about 7,000 years ago. It is known to have appeared in Missouri, United States, at least 4,000 years ago. Debates about the origin of C. pepo have been ongoing since at least 1857. There have traditionally been two opposing theories about its origin: 1) that it is a direct descendant of C. texana and 2) that C. texana is merely feral C. pepo. A more recent theory by botanist Thomas Andres in 1987 is that descendants of C. fraterna hybridized with C. texana, resulting in two distinct domestication events in two different areas: one in Mexico and one in the eastern United States, with C. fraterna and C. texana, respectively, as the ancestral species. C. pepo may have appeared in the Old World before moving from Mexico into South America. It is found from sea level to slightly above . Leaves have 3–5 lobes and are wide. All the subspecies, varieties, and cultivars are interfertile. In 1986 Paris proposed a revised taxonomy of the edible cultivated C. pepo based primarily on the shape of the fruit, with eight groups. All but a few C. pepo cultivars can be included in these groups. There is one non-edible cultivated variety: C. pepo var. ovifera.
Ecology
Cucurbita species are used as food plants by the larvae of some Lepidoptera species, including the cabbage moth (Mamestra brassicae), Hypercompe indecisa, and the turnip moth (Agrotis segetum). Cucurbita can be susceptible to the pest Bemisia argentifolii (silverleaf whitefly) as well as aphids (Aphididae), cucumber beetles (Acalymma vittatum and Diabrotica undecimpunctata howardi), the squash bug (Anasa tristis), the squash vine borer (Melittia cucurbitae), and the two-spotted spider mite (Tetranychus urticae). The squash bug causes major damage to plants because of its very toxic saliva.
The red pumpkin beetle (Aulacophora foveicollis) is a serious pest of cucurbits, especially the pumpkin, which it can defoliate.
Cucurbits are susceptible to diseases such as bacterial wilt (Erwinia tracheiphila), anthracnose (Colletotrichum spp.), fusarium wilt (Fusarium spp.), phytophthora blight (Phytophthora spp. water molds), and powdery mildew (Erysiphe spp.). Defensive responses to viral, fungal, and bacterial leaf pathogens do not involve cucurbitacin.
Species in the genus Cucurbita are susceptible to some types of mosaic virus, including cucumber mosaic virus (CMV), papaya ringspot virus-cucurbit strain (PRSV), squash mosaic virus (SqMV), tobacco ringspot virus (TRSV), watermelon mosaic virus (WMV), and zucchini yellow mosaic virus (ZYMV). PRSV is the only one of these viruses that does not affect all cucurbits. SqMV and CMV are the most common viruses among cucurbits. Symptoms of these viruses are highly similar, so laboratory investigation is often needed to determine which virus is affecting a plant.
Cultivation
History
The genus was part of the culture of almost every native peoples group from southern South America to southern Canada. Modern-day cultivated Cucurbita are not found in the wild. Genetic studies of the mitochondrial gene nad1 show there were at least six independent domestication events of Cucurbita separating domestic species from their wild ancestors. Species native to North America include C. digitata (calabazilla), C. foetidissima (buffalo gourd), C. palmata (coyote melon), and C. pepo. Some species, such as C. digitata and C. ficifolia, are referred to as gourds. Gourds, also called bottle-gourds, which are used as utensils or vessels, belong to the genus Lagenaria and are native to Africa. Lagenaria are in the same family and subfamily as Cucurbita but in a different tribe.
The earliest known evidence of the domestication of Cucurbita dates back at least 8,000 years, predating the domestication of other crops such as maize and beans in the region by about 4,000 years. This evidence was found in the Guilá Naquitz cave in Oaxaca, Mexico, during a series of excavations in the 1960s and 1970s, possibly beginning in 1959. Solid evidence of domesticated C. pepo was found in the Guilá Naquitz cave in the form of increasing rind thickness and larger peduncles in the newer stratification layers of the cave. By c. 8,000 years BP the C. pepo peduncles found are consistently more than 10 mm thick, while wild Cucurbita peduncles are always below this 10 mm barrier. Changes in fruit shape and color indicate that intentional breeding of C. pepo had occurred by no later than 8,000 years BP. During the same time frame, average rind thickness increased from . Recent genomic studies suggest that Cucurbita argyrosperma was domesticated in Mexico, in the region that is currently known as the state of Jalisco.
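To make the archaeological criterion above concrete, here is a toy sketch; the function name and return labels are invented for illustration, and only the 10 mm cutoff comes from the text:

```python
def classify_peduncle(thickness_mm: float) -> str:
    """Apply the 10 mm peduncle-thickness criterion described above.

    Wild Cucurbita peduncles stay below 10 mm, so consistently thicker
    peduncles in newer cave strata are read as evidence of domestication.
    """
    return "consistent with domestication" if thickness_mm > 10.0 else "within wild range"

# A peduncle measuring 12 mm would be flagged as domesticated-type:
print(classify_peduncle(12.0))  # consistent with domestication
```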
Squash was domesticated first, followed by maize and then beans, becoming part of the Three Sisters agricultural system of companion planting. The English word "squash" derives from askutasquash (a green thing eaten raw), a word from the Narragansett language, which was documented by Roger Williams, the founder of Rhode Island, in his 1643 publication A Key Into the Language of America. Similar words for squash exist in related languages of the Algonquian family.
Production
In 2021, world production of squashes (including gourds and pumpkins) was 23.4 million tonnes, led by China with 32% of the total (table). Ukraine, Russia, and the United States were secondary producers.
Toxicity
Cucurbitin is an amino acid and a carboxypyrrolidine that is found in raw Cucurbita seeds. It retards the development of parasitic flukes when administered to infected host mice, although the effect is seen only if administration begins immediately after infection.
Cucurmosin is a ribosome inactivating protein found in the flesh and seed of Cucurbita, notably Cucurbita moschata.
Cucurbitacin is a plant steroid present in wild Cucurbita and in each member of the family Cucurbitaceae. Poisonous to mammals, it is found in quantities sufficient to discourage herbivores. It makes wild Cucurbita and most ornamental gourds, with the exception of an occasional C. fraterna and C. sororia, bitter to taste. Ingesting too much cucurbitacin can cause stomach cramps, diarrhea and even collapse. This bitterness is especially prevalent in wild Cucurbita; in parts of Mexico, the flesh of the fruits is rubbed on a woman's breast to wean children. While the process of domestication has largely removed the bitterness from cultivated varieties, there are occasional reports of cucurbitacin causing illness in humans. Cucurbitacin is also used as a lure in insect traps.
Uses
Nutrition
As an example of Cucurbita, raw summer squash is 94% water, 3% carbohydrates, and 1% protein, with negligible fat content (table). In a 100-gram reference serving, raw squash supplies of food energy and is rich in vitamin C (20% of the Daily Value, DV), moderate in vitamin B6 and riboflavin (12–17% DV), but otherwise devoid of appreciable nutrient content (table), although the nutrient content of different Cucurbita species may vary somewhat.
Pumpkin seeds contain vitamin E, crude protein, B vitamins and several dietary minerals (see nutrition table at pepita). Also present in pumpkin seeds are unsaturated and saturated oils, palmitic, oleic and linoleic fatty acids, as well as carotenoids.
Culinary
The family Cucurbitaceae has many species used as human food. Cucurbita species are some of the most important of those, with various species being prepared and eaten in many ways. Although the stems and skins tend to be more bitter than the flesh, the fruits and seeds of cultivated varieties are usually quite edible and need little or no preparation. Cross-pollination with toxic types can cause bitterness in plants of the next generation, and these should not be eaten. The flowers and young leaves and shoot tips can also be consumed. The seeds and fruits of most varieties can be stored for long periods of time, particularly the sweet-tasting winter varieties with their thick, inedible skins. Summer squash have a thin, edible skin. The seeds of both types can be roasted, eaten raw, made into pumpkin seed oil, ground into a flour or meal, or otherwise prepared. Squashes are primarily grown for the fresh food market.
Long before European contact, Cucurbita had been a major food source for the native peoples of the Americas. The species became an important food for European settlers, including the Pilgrims, who even featured it at the first Thanksgiving. Commercially produced pumpkin commonly used in pumpkin pie is most often varieties of C. moschata; Libby's, by far the largest producer of processed pumpkin, uses a proprietary strain of the Dickinson pumpkin variety of C. moschata for its canned pumpkin. Other foods that can be made using members of this genus include biscuits, bread, cheesecake, desserts, donuts, granola, ice cream, lasagna dishes, pancakes, pudding, pumpkin butter, salads, soups, and stuffing. Squash soup is a dish in African cuisine. The xerophytic species are proving useful in the search for nutritious foods that grow well in arid regions. C. ficifolia is used to make soft and mildly alcoholic drinks.
In India, squashes (ghiya) are cooked with seafood such as prawns. In France, marrows (courges) are traditionally served as a gratin, sieved and cooked with butter, milk, and egg, and flavored with salt, pepper, and nutmeg, and as soups. In Italy, zucchini and larger squashes are served in a variety of regional dishes, such as cocuzze alla puviredda cooked with olive oil, salt and herbs from Apulia; as torta di zucca from Liguria, or torta di zucca e riso from Emilia-Romagna, the squashes being made into a pie filling with butter, ricotta, parmesan, egg, and milk; and as a sauce for pasta in dishes like spaghetti alle zucchine from Sicily. In Japan, squashes such as small C. moschata pumpkins (kabocha) are eaten boiled with sesame sauce, fried as a tempura dish, or made into balls with sweet potato and Japanese mountain yam.
In culture
Art, music, and literature
Along with maize and beans, squash has been depicted in the art work of the native peoples of the Americas for at least 2,000 years. For example, cucurbits are often represented in Moche ceramics.
Though native to the western hemisphere, Cucurbita began to spread to other parts of the world after Christopher Columbus's arrival in the New World in 1492. Until recently, the earliest known depictions of this genus in Europe were of Cucurbita pepo in De Historia Stirpium Commentarii Insignes in 1542 by the German botanist Leonhart Fuchs, but in 1992, two paintings, one of C. pepo and one of C. maxima, painted between 1515 and 1518, were identified in festoons at Villa Farnesina in Rome. Also, in 2001 depictions of this genus were identified in Grandes Heures of Anne of Brittany (Les Grandes Heures d'Anne de Bretagne), a French devotional book, an illuminated manuscript created between 1503 and 1508. This book contains an illustration known as Quegourdes de turquie, which was identified by cucurbit specialists as C. pepo subsp. texana in 2006.
In 1952, Stanley Smith Master, using the pen name Edrich Siebert, wrote "The Marrow Song (Oh what a beauty!)" to a tune in time. It became a popular hit in Australia in 1973, and was revived by the Wurzels in Britain on their 2003 album Cutler of the West. John Greenleaf Whittier wrote a poem entitled The Pumpkin in 1850. "The Great Pumpkin" is a fictional holiday figure in the comic strip Peanuts by Charles M. Schulz.
Cleansing and personal care uses
C. foetidissima contains a saponin that can be obtained from the fruit and root. This can be used as a soap, shampoo, and bleach. Prolonged contact can cause skin irritation. Pumpkin is also used in cosmetics.
Folk remedies
Cucurbita have been used in various cultures as folk remedies. Pumpkins have been used by Native Americans to treat intestinal worms and urinary ailments. This Native American remedy was adopted by American doctors in the early nineteenth century as an anthelmintic for the expulsion of worms. In southeastern Europe, seeds of C. pepo were used to treat irritable bladder and benign prostatic hyperplasia. In Germany, pumpkin seed is approved for use by the Commission E, which assesses folk and herbal medicine, for irritated bladder conditions and micturition problems of prostatic hyperplasia stages 1 and 2, although the monograph published in 1985 noted a lack of pharmacological studies that could substantiate empirically found clinical activity. The FDA in the United States, on the other hand, banned the sale of all such non-prescription drugs for the treatment of prostate enlargement in 1990.
In China, C. moschata seeds were also used in traditional Chinese medicine for the treatment of the parasitic disease schistosomiasis and for the expulsion of tape worms.
In Mexico, herbalists use C. ficifolia in the belief that it reduces blood sugar levels.
Festivals
Cucurbita fruits, including pumpkins and marrows, are celebrated in festivals in countries such as Argentina, Austria, Bolivia, Britain, Canada, Croatia, France, Germany, India, Italy, Japan, Peru, Portugal, Spain, Switzerland, and the United States. Argentina holds an annual nationwide pumpkin festival, the Fiesta Nacional del Zapallo ("National Squash Festival"), in Ceres, Santa Fe, on the last day of which a Reina Nacional del Zapallo ("National Squash Queen") is chosen. In Portugal the Festival da Abóbora de Lourinhã e Atalaia ("Squashes and Pumpkins Festival in Lourinhã and Atalaia") is held in Lourinhã city, called the Capital Nacional da Abóbora (the "National Capital of Squashes and Pumpkins"). Ludwigsburg, Germany annually hosts the world's largest pumpkin festival. In Britain a giant marrow (zucchini) weighing was displayed at the Harrogate Autumn Flower Show in 2012. In the US, pumpkin chucking is practiced competitively, with machines such as trebuchets and air cannons designed to throw intact pumpkins as far as possible. The Keene Pumpkin Fest is held annually in New Hampshire; on October 19, 2013, it set the world record for the most jack-o-lanterns lit in one place, at 30,581.
Hallowe'en is widely celebrated with jack-o-lanterns made of large orange pumpkins carved with ghoulish faces and illuminated from inside with candles. The pumpkins used for jack-o-lanterns are C. pepo, not to be confused with the ones typically used for pumpkin pie in the United States, which are C. moschata. Kew Gardens marked Hallowe’en in 2013 with a display of pumpkins, including a towering pyramid made of many varieties of squash, in the Waterlily House during its "IncrEdibles" festival.
| Biology and health sciences | Cucurbitales | null |
144792 | https://en.wikipedia.org/wiki/Russian%20Blue | Russian Blue | The Russian Blue cat, commonly referred to as just Russian Blue, is a cat breed with colors that vary from a light shimmering silver to a darker slate grey. The short, dense coat, which stands out from the body, has been the breed's hallmark for more than a century.
Origin
The Russian Blue is a naturally occurring breed that may have originated in the port of Arkhangelsk in Russia. They are also sometimes called Archangel Blues. It is believed that sailors took them from the Archangel Isles to Great Britain and Northern Europe in the 1860s. The first reference to an Archangel Cat appears in British print in 1862. The first recorded appearance of one in a show was in 1872 at The Crystal Palace in England, as the Archangel Cat. However, Harrison Weir, writing in 1895, reported that the early show cats under the Russian Blue name were British-bred grey tabbies, while separate grey cats arriving in Britain from Archangel in the 1800s had features consistent with the modern breed. The Russian Blue competed in a class including all other blue cats until 1912, when it was given its own class. The breed was developed mainly in England and Scandinavia until after World War II.
Right after the war, a shortage of Russian Blues led to crossbreeding with the Siamese. Although Russian Blues were in the United States before the war, it was not until the post-war period that American breeders created the modern Russian Blue seen in the United States today. American breeders combined the bloodlines of both the Scandinavian and British Russian Blues. The Siamese traits have now largely been bred out. The short hair and slate-grey/blue color is often seen in mixed-breed cats, which can cause problems for breeders and exhibitors when a cat is mislabeled as a Russian Blue.
Russian Blues are plush short-haired, shimmering pale blue-gray cats with emerald green eyes or yellow eyes. Guard hairs are distinctly silver-tipped, giving the cat a silvery sheen or lustrous appearance. They have been used on a limited basis to create other breeds such as the Havana Brown and to alter existing breeds such as the Nebelung. They are being used in Italy, in a FIFe breeding program called RUS4OSH, to make Oriental Shorthairs healthier and more robust.
Russian Whites and Russian Blacks were created from crosses with domestic white cats which were allegedly imported from Russia. The first line was developed by Frances McLeod (Arctic) in the United Kingdom during the 1960s, and the second line was produced by Dick and Mavis Jones (Myemgay) in Australia in the 1970s. By the late 1970s, the Russian White and Russian Black colors were accepted by cat fanciers in Australia as well as in South Africa, and now also in the United Kingdom as Russian cats (in different classes). However, the Cat Fanciers' Association and FIFe do not recognize any variation of the Russian Blue.
Physical characteristics
The Russian Blue has bright green eyes, pinkish lavender or mauve paws, two layers of short thick fur, and a blue-grey-black coat. The color is a bluish-gray that is the dilute expression of the black gene. Because the dilution allele is recessive ("d") and each parent carries two copies ("dd"), two non-Color-Point Carrier (non-CPC) Russian Blues will always produce a blue cat. Due to the breeding with Siamese after World War II, however, color-point alleles still circulate in some lines. If two carriers are bred together, they will produce a litter of mixed colors: solid blue, or white with blue points like a Siamese. People call these CPC cats "color-point", "white" or "pointed" Russians. In most registries, one cannot register, breed or show a color-point Russian. These color-point (blue-point) cats are called Color-Point Russian Blue (Blue Point Russian Blue) or, more informally, Pika Blu (or pika blue) cats, and have the same general characteristics as Russian Blue cats.
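As a rough sketch of the single-locus logic above, the cross outcomes can be enumerated with a Punnett square. In the code below the allele symbols (dominant C for full color, recessive cs for color-point) and the helper name punnett are illustrative conventions, not taken from the article:

```python
from itertools import product
from collections import Counter

def punnett(parent1, parent2):
    """Enumerate single-locus offspring genotypes for two parents.

    Each parent is a pair of alleles, e.g. ("C", "cs"). Genotypes are
    sorted so that ("C", "cs") and ("cs", "C") count as the same.
    """
    return Counter(tuple(sorted(pair)) for pair in product(parent1, parent2))

# Two color-point carriers (CPC): on average 1 in 4 kittens is cs/cs,
# i.e. white with blue points rather than solid blue.
print(punnett(("C", "cs"), ("C", "cs")))
# Counter({('C', 'cs'): 2, ('C', 'C'): 1, ('cs', 'cs'): 1})

# Two non-CPC Russian Blues are C/C at this locus (and dd at the
# dilution locus), so every kitten is solid blue:
print(punnett(("C", "C"), ("C", "C")))  # Counter({('C', 'C'): 4})
```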
The coat is known as a "double coat", with the undercoat being soft, downy and equal in length to the guard hairs, which are an even blue with silver tips. However, the tail may have a few very dull, almost unnoticeable stripes. The coat is described as thick, plush and soft to the touch and can be described as being softer than the softest silk. The silver tips give the coat a shimmering appearance. Its eyes are almost always a dark and vivid green. Any white patches of fur or yellow eyes in adulthood are seen as flaws in show cats. Russian Blues should not be confused with British Blues (which are not a distinct breed, but rather a British Shorthair with a blue coat as the British Shorthair breed itself comes in a wide variety of colors and patterns), nor the Chartreux or Korat which are two other naturally occurring breeds of blue cats, although they have similar traits.
They are generally considered to be a quiet breed but there are always exceptions. They are normally reserved around strangers, unless they are brought up in an active household. Many Russian Blues have been trained to do tricks. They can also be fierce hunters, often catching rodents, birds, rabbits, small mammals, or reptiles. As loving and easy going as Russian Blues are, they do not like change, and prefer predictable, routine schedules.
Russian Blue kittens are energetic and require adequate playmates or toys as they can become mischievous if bored. They have exceptional athleticism and rival even Abyssinians for their ability to leap and climb. Slow to mature, Russian Blues retain many of their adolescent traits both good and otherwise until they are 3–4 years old and even much older Blues can be easily enticed into play by their owners. Russian Blues are also highly intelligent. They have an excellent memory and will learn the hiding place of favorite toys and lead their owners to them when they want a game. They also have a keen ability to remember favorite visitors and will race to greet familiar faces even if quite some time has passed between visits.
Growth and maturity
They are small to moderate-sized cats with an average weight of when fully grown. Males will typically be larger than females. Their gestation period is approximately 64 days.
Allergies
Anecdotal evidence suggests that the Russian Blue may be better tolerated by individuals with mild to moderate allergies. There is speculation that the Russian Blue produces less of the glycoprotein Fel d 1, one source of cat allergies, and that its thicker coat may trap more of the allergens closer to the cat's skin. This does not mean that allergic reactions are eliminated; they can still occur, but they are reported to be less severe and shorter-lived than with other breeds. For this reason, Russian Blues are popular among people with cat allergies.
In popular culture
Arlene is portrayed by a Russian Blue in Garfield: The Movie.
Felicity, a character in the novel and film Felidae, was a Russian Blue.
A Russian Blue kitten is a trained assassin in the Cats & Dogs film. According to audio commentary on the DVD, several kittens were used due to the kittens growing faster than the filming schedule. Catherine from its sequel Cats & Dogs: The Revenge of Kitty Galore is also a Russian Blue.
Eben and Snooch are Russian Blues in the comic Two Lumps.
The Nyan Cat meme was inspired by creator Chris Torres' Russian Blue Marty. Marty died in 2012 from feline infectious peritonitis.
Tom Cat of the Hanna-Barbera cartoon-produced for MGM Tom and Jerry is said to have been inspired by a Russian Blue.
In A Gentleman in Moscow by Amor Towles, the Metropol Hotel's lobby cat is a Russian Blue.
Smokey, the main antagonist in the film Stuart Little, is a Russian Blue.
Kilmousky, in the Midsomer Murders episode "Written in Blood", gives DCI Tom Barnaby an allergic reaction.
| Biology and health sciences | Cats | Animals |
144837 | https://en.wikipedia.org/wiki/Chronic%20pain | Chronic pain | Chronic pain or chronic pain syndrome is a type of pain that is also described in terms such as gradual burning pain, electrical pain, throbbing pain, and nauseating pain. This type of pain is sometimes confused with acute pain and can last from three months to several years; various diagnostic manuals such as the DSM-5 and ICD-11 have proposed several definitions of chronic pain, but a widely accepted definition is "pain that lasts longer than the expected period of recovery."
The pain mechanism exists to prevent possible damage to the body, but chronic pain is pain without biological value (it has no protective effect). Chronic pain has several divisions; cancer, post-traumatic or post-surgical, musculoskeletal, and visceral pain are the most important of these. Various factors cause the formation of chronic pain, which can be neurogenic (gene-dependent), nociceptive, neuropathic, psychological, or unknown. Some conditions, such as diabetes (high blood sugar), shingles (a viral disease), phantom limb pain, hypertension, and stroke, also play a role in the formation of chronic pain. The most common types of chronic pain are back pain, severe headache, migraine, and facial pain.
Chronic pain can cause very severe psychological and physical effects that sometimes continue until the end of life. Loss of grey matter (damage to brain neurons), insomnia and sleep deprivation, metabolic problems, chronic stress, obesity, and heart attack are examples of physical effects; depression, cognitive disorders, perceived injustice (PI), and neuroticism are examples of psychological effects.
A wide range of treatments are used for chronic pain; drug therapy (opioid and non-opioid drugs), cognitive behavioral therapy, and physical therapy are the most significant of them. Medicines are usually associated with side effects and are prescribed when the effects of pain become severe: medicines such as aspirin and ibuprofen are used for milder pain, and morphine and codeine for severe pain. Other treatment methods, such as behavioral therapy and physiotherapy, are often used as a supplement to drugs because of their limited effectiveness on their own. None of these methods currently offers a definitive cure, and research continues into a wide variety of new management and therapeutic interventions, such as nerve block and radiation therapy.
Chronic pain is considered a disease in its own right; it affects more people worldwide than diabetes, cancer, and heart disease. Epidemiological studies conducted in different countries have reported wide differences in the prevalence of chronic pain, from 8% to 55.2%; for example, studies estimate the incidence in Iran and Canada at between 10% and 20%, and in the United States at between 30% and 40%. The results show that an average of 8% to 11.2% of people in different countries have severe chronic pain, and its prevalence is higher in industrialized countries than elsewhere. According to estimates of the American Medical Association, the costs related to chronic pain in the United States are about 560 to 635 billion dollars.
Classification
The International Association for the Study of Pain (IASP) defines chronic pain as pain without biological value that sometimes continues even after the healing of the affected area; pain that cannot be classified as acute, that lasts longer than expected to heal, or, typically, that has been experienced on most days or daily for the past six months, is considered chronic pain. According to the DSM-5, a complication is "chronic" when the resulting complication (pain, disorder, or illness) lasts for more than six months (this classification has no prerequisites such as physical or mental injury). The classification of chronic pain is not limited to pain arising in the presence of real tissue damage (secondary pain resulting from a primary event); the term "nociplastic pain", or primary pain, refers to pain that occurs in the absence of a health-threatening factor, such as disease or damage to the body's somatosensory system, and results from persistent nerve stimulation.
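As a toy illustration of the duration-based rule of thumb above, the following sketch checks a "most days over the past six months" criterion; the function name, the 183-day window, and the set-of-dates representation are all assumptions made for the example, not part of any diagnostic standard:

```python
from datetime import date, timedelta

def meets_duration_criterion(pain_days: set, today: date) -> bool:
    """Return True if pain occurred on more than half of the last
    183 days (a rough stand-in for 'most days over six months')."""
    window_days = [today - timedelta(days=i) for i in range(183)]
    painful = sum(1 for d in window_days if d in pain_days)
    return painful > len(window_days) / 2

# Example: pain recorded every other day for the past year.
today = date(2024, 1, 1)
diary = {today - timedelta(days=i) for i in range(0, 365, 2)}
print(meets_duration_criterion(diary, today))  # True: 92 of 183 days
```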
The International Statistical Classification of Diseases, in its 11th edition (ICD-11), proposed a seven-category classification for chronic pain:
Primary chronic pain: Defined by 3 months of continuous pain in one or more areas of the body, the origin of which is not understood.
Chronic cancer pain: pain in one of the body's organs (internal organs, bone, or skeletal muscle) caused by cancer damage.
Chronic post-traumatic or post-surgical pain: pain that persists 3 months after an injury or surgery, without regard to infectious conditions or the severity of tissue damage; the person's past pain history is not considered in this classification.
Chronic neuropathic pain: pain caused by damage to the somatosensory nervous system.
Chronic headache and orofacial pain: pain that originates in the head or face and occurs on 50% or more of days over a 3-month period.
Chronic visceral pain: pain originating in an internal organ.
Chronic musculoskeletal pain: pain originating in the bones, muscles, joints or connective tissue.
Also, the World Health Organization (WHO) states that optional criteria or codes can be used in the classification of chronic pain for each of the seven categories of chronic pain (for example, "diabetic neuropathic" pain).
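To show how this seven-way classification might be represented as a data structure, here is a minimal sketch; the class and member names are invented for illustration and are not an official ICD-11 coding:

```python
from enum import Enum

class ChronicPainCategory(Enum):
    """The seven ICD-11 chronic pain categories, paraphrased from the text."""
    PRIMARY = "chronic primary pain"
    CANCER = "chronic cancer pain"
    POST_TRAUMATIC_OR_SURGICAL = "chronic post-traumatic or post-surgical pain"
    NEUROPATHIC = "chronic neuropathic pain"
    HEADACHE_OROFACIAL = "chronic headache and orofacial pain"
    VISCERAL = "chronic visceral pain"
    MUSCULOSKELETAL = "chronic musculoskeletal pain"

def label(category: ChronicPainCategory, qualifier: str = "") -> str:
    """Attach an optional qualifier, mirroring the WHO's optional codes
    (e.g. 'diabetic' + neuropathic -> 'chronic neuropathic pain (diabetic)')."""
    return category.value + (f" ({qualifier})" if qualifier else "")

print(label(ChronicPainCategory.NEUROPATHIC, "diabetic"))
```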
Another classification divides chronic pain into "nociceptive" (caused by inflamed or damaged tissue activating specialized pain sensors called nociceptors) and "neuropathic" (caused by damage to or malfunction of the nervous system). Nociceptive pain is itself divided into "superficial" and "deep"; deep pain is further divided into "deep somatic" and "deep visceral" pain. Neuropathic pain is divided into "peripheral" (originating in the peripheral nervous system) and "central" (originating in the brain or spinal cord). Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles".
"Superficial pain" is the result of the activation of pain receptors in the skin or superficial tissues; "deep somatic pain" is caused by stimulation of pain receptors in ligaments, tendons, bones, blood vessels, fascia, and muscles. (this type of pain is constant but weak) and "deep visceral pain" is pain that originates from one of the body's organs. Deep pain is often very difficult to localize and occurs in multiple areas of the body when injured or inflamed. In the "deep visceral" type, the feeling of pain exists in a place far from the injury, for this reason it is also called vague pain.
Etiology
Chronic pain has many pathophysiological and environmental causes and can occur in cases such as neuropathy of the central nervous system, after cerebral hemorrhage, tissue damage such as extensive burns, inflammation, autoimmune disorders such as rheumatoid arthritis, psychological stress such as headache, migraine, or abdominal pain (of emotional, psychological, or behavioral origin), and mechanical pain caused by tissue wear and tear, such as arthritis. In some cases, chronic pain can be caused by genetic factors which interfere with neuronal differentiation, leading to a permanently lowered threshold for pain.
The pathophysiological etiology of chronic pain remains unclear. Many theories of chronic pain fail to clearly explain why the same pathological conditions do not invariably result in chronic pain. Patients' anatomical predisposition to proximal neural compression (in particular of peripheral nerves) may be the answer to this conundrum. Proximal neural lesion at the level of the dorsal root ganglion (DRG) may drive a vicious cycle of chronic pain by causing postural protection of the painful site and consequent neural compression in the same spinal region. Difficulties in diagnosing proximal neural lesion may account for the theoretical perplexity of chronic pain.
Pathophysiology
Continuous activation and transmission of pain messages leads the body to act to relieve pain (a mechanism to prevent damage to the body); this causes the release of prostaglandins and increases the sensitivity of the affected part to stimulation, and prostaglandin secretion in turn contributes to unbearable, chronic pain. Under persistent activation, the transmission of pain signals to the dorsal horn may produce a pain wind-up phenomenon. This triggers changes that lower the threshold for pain signals to be transmitted. In addition, it may cause non-nociceptive nerve fibers to respond to, generate, and transmit pain signals. Researchers believe that the nerve fibers that cause this type of pain are group C nerve fibers; these fibers are not myelinated (and therefore conduct slowly) and cause long-lasting pain.
These changes in neural structure can be explained by neuroplasticity. When there is chronic pain, the somatotopic arrangement of the body (the nervous system's map of the body across its nerve cells) is abnormally changed due to continuous stimulation and can cause allodynia or hyperalgesia. In chronic pain, this process is difficult to reverse or stop once established. EEG studies of people with chronic pain have shown that brain activity and synaptic plasticity change as a result of pain; specifically, the relative activity of beta waves increases while that of alpha and theta waves decreases.
Inefficient management of dopamine secretion in the brain can act as a common mechanism between chronic pain, insomnia and major depressive disorder and cause its unpleasant side effects. Astrocytes, microglia and satellite glial cells also lose their effective function in chronic pain. Increasing the activity of microglia, changing microglia networks, and increasing the production of chemokines and cytokines by microglia may exacerbate chronic pain. It has also been observed that astrocytes lose their ability to regulate the excitability of neurons and increase the spontaneous activity of neurons in pain circuits.
Management
Pain management is a branch of medicine that uses an interdisciplinary approach. The combined knowledge of various medical professions and allied health professions is used to ease pain and improve the quality of life of those living with pain. The typical pain management team includes medical practitioners (particularly anesthesiologists), rehabilitation psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners. Acute pain usually resolves with the efforts of one practitioner; however, the management of chronic pain frequently requires the coordinated efforts of a treatment team. Complete, longterm remission of many types of chronic pain is rare.
Chronic pain may originate in the body, or in the brain or spinal cord. It is often difficult to treat. Epidemiological studies have found that 8–11.2% of people in various countries have chronic widespread pain. Various non-opioid medicines are initially recommended to treat chronic pain, depending on whether the pain is due to tissue damage or is neuropathic. Psychological treatments including cognitive behavioral therapy and acceptance and commitment therapy may be effective for improving quality of life in those with chronic pain. Some people with chronic pain may benefit from opioid treatment while others can be harmed by it. People with non-cancer pain who have not been helped by non-opioid medicines might be recommended to try opioids if there is no history of substance use disorder and no current mental illness.
Nonopioids
Initially recommended efforts are non-opioid based therapies. Non-opioid treatment of chronic pain with pharmaceutical medicines might include acetaminophen (paracetamol) or NSAIDs.
Various other nonopioid medicines can be used, depending on whether the pain is a result of tissue damage or is neuropathic (pain caused by a damaged or dysfunctional nervous system). There is limited evidence that cancer pain or chronic pain from tissue damage as a result of a condition (e.g. rheumatoid arthritis) is best treated with opioids. For neuropathic pain, other drugs may be more effective than opioids, such as tricyclic antidepressants, serotonin-norepinephrine reuptake inhibitors, and anticonvulsants. Some atypical antipsychotics, such as olanzapine, may also be effective, but the evidence to support this is in very early stages. In women with chronic pain, hormonal medications such as oral contraceptive pills ("the pill") might be helpful. When there is no evidence of a single best fit, doctors may need to look for a treatment that works for the individual person. It is difficult for doctors to predict who will use opioids just for pain management and who will go on to develop an addiction. It is also challenging for doctors to know which patients ask for opioids because they are living with an opioid addiction. Withholding, interrupting or withdrawing opioid treatment in people who benefit from it can cause harm.
Interventional pain management may be appropriate, including techniques such as trigger point injections, neurolytic blocks, and radiotherapy. While there is no high quality evidence to support ultrasound, it has been found to have a small effect on improving function in non-specific chronic low back pain.
Psychological treatments, including cognitive behavioral therapy and acceptance and commitment therapy can be helpful for improving quality of life and reducing pain interference. Brief mindfulness-based treatment approaches have been used, but they are not yet recommended as a first-line treatment. The effectiveness of mindfulness-based pain management (MBPM) has been supported by a range of studies.
Among older adults psychological interventions can help reduce pain and improve self-efficacy for pain management. Psychological treatments have also been shown to be effective in children and teens with chronic headache or mixed chronic pain conditions.
While exercise has been offered as a method to lessen chronic pain and there is some evidence of benefit, this evidence is tentative. For people living with chronic pain, exercise results in few side effects.
Opioids
In those who have not benefited from other measures and have no history of either mental illness or substance use disorder, treatment with opioids may be tried. If significant benefit does not occur, it is recommended that they be stopped. In those on opioids, stopping or decreasing their use may improve outcomes, including pain.
Some people with chronic pain benefit from opioid treatment and others do not; some are harmed by the treatment. Possible harms include reduced sex hormone production, hypogonadism, infertility, impaired immune system, falls and fractures in older adults, neonatal abstinence syndrome, heart problems, sleep-disordered breathing, physical dependence, addiction, abuse, and overdose.
Alternative medicine
Alternative medicine refers to health practices or products that are used to treat pain or illness that are not necessarily considered a part of conventional medicine. When dealing with chronic pain, these practices generally fall into the following four categories: biological, mind-body, manipulative body, and energy medicine.
Implementing dietary changes, which is considered a biological-based alternative medicine practice, has been shown to help improve symptoms of chronic pain over time. Adding supplements to one's diet is a common dietary change when trying to relieve chronic pain, with some of the most studied supplements being: acetyl-L-carnitine, alpha-lipoic acid, and vitamin E. Vitamin E is perhaps the most studied out of the three, with strong evidence that it helps lower neurotoxicity in those with cancer, multiple sclerosis, and cardiovascular diseases.
Hypnosis, including self-hypnosis, has tentative evidence. Hypnosis, specifically, can offer pain relief for most people and may be a safe alternative to pharmaceutical medication. Evidence does not support hypnosis for chronic pain due to a spinal cord injury.
Preliminary studies have found medical marijuana to be beneficial in treating neuropathic pain, but not other kinds of long-term pain. The evidence for its efficacy in treating neuropathic pain or pain associated with rheumatic diseases is not strong for any benefit, and further research is needed. For chronic non-cancer pain, a recent study concluded that it is unlikely that cannabinoids are highly effective. However, more rigorous research into cannabis or cannabis-based medicines is needed.
Tai chi has been shown to improve pain, stiffness, and quality of life in chronic conditions such as osteoarthritis, low back pain, and osteoporosis. Acupuncture has also been found to be an effective and safe treatment in reducing pain and improving quality of life in chronic pain including chronic pelvic pain syndrome.
Transcranial magnetic stimulation for reduction of chronic pain is not supported by high quality evidence, and the demonstrated effects are small and short-term.
Spa therapy could potentially improve pain in patients with chronic lower back pain, but more studies are needed to provide stronger evidence of this.
While some studies have investigated the efficacy of St John's Wort or nutmeg for treating neuropathic (nerve) pain, their findings have raised serious concerns about the accuracy of their results.
Kinesio tape has not been shown to be effective in managing chronic non-specific low-back pain.
Myofascial release has been used in some cases of fibromyalgia, chronic low back pain, and tennis elbow, but there is not enough evidence to support it as a method of treatment.
Epidemiology
Chronic pain varies in different countries affecting anywhere from 8% to 55% of the population. It affects women at a higher rate than men, and chronic pain uses a large amount of healthcare resources around the globe.
A large-scale telephone survey of 15 European countries and Israel found that 19% of respondents over 18 years of age had suffered pain for more than 6 months, including the last month, and more than twice in the last week, with pain intensity of 5 or more for the last episode, on a scale of 1 (no pain) to 10 (worst imaginable). 4839 of these respondents with chronic pain were interviewed in-depth. Sixty-six percent scored their pain intensity at moderate (5–7), and 34% at severe (8–10); 46% had constant pain, 56% intermittent; 49% had suffered pain for 2–15 years; and 21% had been diagnosed with depression due to the pain. Sixty-one percent were unable or less able to work outside the home, 19% had lost a job, and 13% had changed jobs due to their pain. Forty percent had inadequate pain management and less than 2% were seeing a pain management specialist.
In the United States, chronic pain has been estimated to occur in approximately 35% of the population, with approximately 50 million Americans experiencing partial or total disability as a consequence. According to the Institute of Medicine, there are about 116 million Americans living with chronic pain, which suggests that approximately half of American adults have some chronic pain condition. The Mayday Fund estimate of 70 million Americans with chronic pain is slightly more conservative. In an internet study, the prevalence of chronic pain in the United States was calculated to be 30.7% of the population: 34.3% for women and 26.7% for men.
In Canada it is estimated that approximately 1 in 5 Canadians live with chronic pain, and half of those people have lived with it for 10 years or longer. Chronic pain also occurs more frequently, and is more severe, among women and in Canada's Indigenous communities.
Outcomes
Sleep disturbance and insomnia, due to medication and illness symptoms, are often experienced by those with chronic pain. These conditions can be difficult to treat because of the high potential for medication interactions, especially when the conditions are treated by different doctors.
Severe chronic pain is associated with increased risk of death over a ten-year period, particularly from heart disease and respiratory disease. Several mechanisms have been proposed for this increase, such as an abnormal stress response in the body's endocrine system. Additionally, chronic stress seems to affect risks to heart and lung (cardiovascular) health by increasing how quickly plaque can build up on artery walls (arteriosclerosis). However, further research is needed to clarify the relationship between severe chronic pain, stress and cardiovascular health.
People with chronic pain tend to have higher rates of depression and although the exact connection between the comorbidities is unclear, a 2017 study on neuroplasticity found that "injury sensory pathways of body pains have been shown to share the same brain regions involved in mood management." Chronic pain can contribute to decreased physical activity due to fear of making the pain worse. Pain intensity, pain control, and resilience to pain can be influenced by different levels and types of social support that a person with chronic pain receives, and are also influenced by the person's socioeconomic status.
In a study, Mendelian randomization was used to identify causal relationships between chronic pain and certain psychiatric, cardiovascular, and inflammatory conditions that were initially thought to be unrelated to pain. It was found that exposure to depression increases the likelihood of reporting pain, but not the other way around. Exposure to coronary diseases increases the risk of developing chronic pain, and vice versa. An increase in body mass index modestly raises the likelihood of experiencing pain, while high blood HDL levels reduce the probability of suffering from chronic pain. Regarding inflammatory traits, exposure to asthma increases the likelihood of experiencing pain, and vice versa.
Chronic pain of different causes has been characterized as a disease that affects brain structure and function. MRI studies have shown abnormal anatomical and functional connectivity, even during rest involving areas related to the processing of pain. Also, persistent pain has been shown to cause grey matter loss, which is reversible once the pain has resolved.
One approach to predicting a person's experience of chronic pain is the biopsychosocial model, according to which an individual's experience of chronic pain may be affected by a complex mixture of their biology, psychology, and their social environment.
Chronic pain may be an important contributor to suicide.
Psychology
Personality
Two of the most frequent personality profiles found in people with chronic pain by the Minnesota Multiphasic Personality Inventory (MMPI) are the conversion V and the neurotic triad. The conversion V personality expresses exaggerated concern over body feelings, develops bodily symptoms in response to stress, and often fails to recognize their own emotional state, including depression. The neurotic triad personality also expresses exaggerated concern over body feelings and develops bodily symptoms in response to stress, but is demanding and complaining.
Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points the other way, to chronic pain causing neuroticism. When long term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in people with chronic pain, also shows improvement once pain has resolved.
It has been suggested that catastrophizing might play a role in the experience of pain. Pain catastrophizing is the tendency to describe a pain experience in more exaggerated terms than the average person, to think a great deal more about the pain when it occurs, or to feel more helpless about the experience. People who score highly on measures of catastrophization are likely to rate a pain experience as more intense than those who score low on such measures. It is often reasoned that the tendency to catastrophize causes the person to experience the pain as more intense. One suggestion is that catastrophizing influences pain perception through altering attention and anticipation, and heightening emotional responses to pain. However, at least some aspects of catastrophization may be the product of an intense pain experience, rather than its cause. That is, the more intense the pain feels to the person, the more likely they are to have thoughts about it that fit the definition of catastrophization.
Comorbidity with trauma
Individuals with post-traumatic stress disorder (PTSD) have a high comorbidity with chronic pain. Patients with both PTSD and chronic pain report higher severity of pain than those who do not have a PTSD comorbidity.
Comorbidity with depression
People with chronic pain may also have symptoms of depression. In 2017, the British Medical Association found that 49% of people with chronic pain had depression.
Effect on cognition
Chronic pain's impact on cognition is an under-researched area, but several tentative conclusions have been published. Most people with chronic pain complain of cognitive impairment, such as forgetfulness, difficulty with attention, and difficulty completing tasks. Objective testing has found that people in chronic pain tend to experience impairment in attention, memory, mental flexibility, verbal ability, speed of response in a cognitive task, and speed in executing structured tasks. A 2018 review of studies reported a relationship between chronic pain and abnormal results in tests of memory, attention, and processing speed.
Prognosis
Chronic pain leads to a significant decrease in quality of life, reduced productivity and wages, worsening of other chronic diseases, and mental disorders such as depression, anxiety, and substance use disorder. Many drugs often used to treat chronic pain carry risks, potential side effects, and possible complications, and the constant use of opioids is associated with decreased life expectancy and increased mortality. Acetaminophen, a standard drug treatment for chronic pain, can cause hepatotoxicity when taken in excess of four grams per day, and in patients with chronic liver disease even therapeutic doses may do so. Long-term risks and side effects of opioids include constipation, drug tolerance or dependence, nausea, indigestion, arrhythmia (QT prolongation on electrocardiography in methadone treatment), and endocrine gland problems that can lead to amenorrhea, impotence, gynecomastia, and decreased energy. There is also a risk of opioid overdose, depending on the dose taken by the patient.
Current treatments can reduce chronic pain by about 30%, a reduction that can significantly improve patients' function and quality of life. However, the general long-term prognosis of chronic pain shows decreased function and quality of life; the condition causes many complications and increases the likelihood of death and of other chronic diseases and obesity. Similarly, patients with chronic pain who require opioids often develop drug tolerance over time, and the escalating doses needed to remain effective increase the risk of side effects and death.
Mental disorders can amplify pain signals and make symptoms more severe. In addition, comorbid psychiatric disorders, such as major depressive disorder, can significantly delay the diagnosis of pain disorders. Major depressive disorder and generalized anxiety disorder are the most common comorbidities associated with chronic pain, and patients with both pain and comorbid mental disorders receive twice as much medication from doctors annually as patients without such comorbidities. Studies have shown that when coexisting disorders accompany chronic pain, treating and improving one of them can help improve the other. Patients with chronic pain are at higher risk for suicide and suicidal thoughts: research suggests that approximately 20% of people with chronic pain experience suicidal thoughts, and between 5 and 14% attempt suicide. Among those who died by suicide, 53.6% died of gunshot wounds and 16.2% of opioid overdose.
A multimodal treatment approach is important for better pain control and outcomes, and for minimizing the need for high-risk treatments such as opioid medications. Managing comorbid depression and anxiety is critical in reducing chronic pain, and patients should be carefully monitored for severe depression and any suicidal thoughts and plans. Periodic referral for physical examination and review of treatment effectiveness is necessary; rapid and correct treatment and management of chronic pain can prevent negative consequences for the patient's life and increases in healthcare costs.
Social and personal impacts
Social support
Social support has important consequences for individuals with chronic pain. In particular, pain intensity, pain control, and resiliency to pain have been implicated as outcomes influenced by different levels and types of social support. Much of this research has focused on emotional, instrumental, tangible and informational social support. People with persistent pain conditions tend to rely on their social support as a coping mechanism and therefore have better outcomes when they are a part of larger more supportive social networks. Across a majority of studies investigated, there was a direct significant association between social activities or social support and pain. Higher levels of pain were associated with a decrease in social activities, lower levels of social support, and reduced social functioning.
Racial disparities
Evidence exists of unconscious bias and negative stereotyping against racial minorities requesting pain treatment, although clinical decision making was not affected according to one 2017 review. Minorities may be denied diagnoses for pain and pain medications, are more likely to undergo substance abuse assessment, and are less likely to be referred to a pain specialist. A 2010 University of Michigan Health study found that black patients in pain clinics received half the amount of medication that white patients received. Preliminary research suggests that health providers may have less empathy for black patients and underestimate their pain levels, resulting in treatment delays. Minorities may also experience a language barrier, limiting engagement between the person with pain and health providers.
Perceptions of injustice
Similar to the damaging effects seen with catastrophizing, perceived injustice is thought to contribute to the severity and duration of chronic pain. Pain-related injustice perception has been conceptualized as a cognitive appraisal reflecting the severity and irreparability of pain- or injury-related loss (e.g., "I just want my life back"), and externalizing blame and unfairness ("I am suffering because of someone else's negligence."). It has been suggested that understanding problems with top down processing/cognitive appraisals can be used to better understand and treat this problem.
Chronic pain and COVID-19
COVID-19 has disrupted the lives of many, leading to major physical, psychological and socioeconomic impacts in the general population. Social distancing practices defining the response to the pandemic altered familiar patterns of social interaction, creating the conditions for what some psychologists describe as a period of collective grief. Individuals with chronic pain tend to embody an ambiguous status, at times expressing that their type of suffering places them between and outside of conventional medicine. With a large proportion of the global population enduring prolonged periods of social isolation and distress, one study found that people with chronic pain experienced greater empathy towards their suffering during the COVID-19 pandemic.
Effect of chronic pain in the workplace
In the workplace, chronic pain conditions are a significant problem for both the person with the condition and the organization; a problem only expected to increase in many countries due to an aging workforce. In light of this, it may be helpful for organizations to consider the social environment of their workplace, and how it may be working to ease or worsen chronic pain issues for employees. As an example of how the social environment can affect chronic pain, some research has found that high levels of socially prescribed perfectionism (perfectionism induced by external pressure from others, such as a supervisor) can interact with the guilt felt by a person with chronic pain, thereby increasing job tension, and decreasing job satisfaction.
| Biology and health sciences | Specific diseases | Health |
144865 | https://en.wikipedia.org/wiki/Tendon | Tendon | A tendon or sinew is a tough band of dense fibrous connective tissue that connects muscle to bone. It sends the mechanical forces of muscle contraction to the skeletal system, while withstanding tension.
Tendons, like ligaments, are made of collagen. The difference is that ligaments connect bone to bone, while tendons connect muscle to bone. There are about 4,000 tendons in the adult human body.
Structure
A tendon is made of dense regular connective tissue, whose main cellular components are special fibroblasts called tendon cells (tenocytes). Tendon cells synthesize the tendon's extracellular matrix, which abounds with densely-packed collagen fibers. The collagen fibers run parallel to each other and are grouped into fascicles. Each fascicle is bound by an endotendineum, which is a delicate loose connective tissue containing thin collagen fibrils and elastic fibers. A set of fascicles is bound by an epitenon, which is a sheath of dense irregular connective tissue. The whole tendon is enclosed by a fascia. The space between the fascia and the tendon tissue is filled with the paratenon, a fatty areolar tissue. Normal healthy tendons are anchored to bone by Sharpey's fibres.
Extracellular matrix
The dry mass of normal tendons, which is 30–45% of their total mass, is made of:
60–85% collagen, of which:
  60–80% collagen I
  0–10% collagen III
  2% collagen IV
  small amounts of collagens V, VI, and others
15–40% non-collagenous extracellular matrix components, including:
  3% cartilage oligomeric matrix protein
  1–2% elastin
  1–5% proteoglycans
  0.2% inorganic components such as copper, manganese, and calcium
Although most of a tendon's collagen is type I collagen, many minor collagens are present that play vital roles in tendon development and function. These include type II collagen in the cartilaginous zones, type III collagen in the reticulin fibres of the vascular walls, type IX collagen, type IV collagen in the basement membranes of the capillaries, type V collagen in the vascular walls, and type X collagen in the mineralized fibrocartilage near the interface with the bone.
Ultrastructure and collagen synthesis
Collagen fibres coalesce into macroaggregates. After secretion from the cell and cleavage by procollagen N- and C-proteases, the tropocollagen molecules spontaneously assemble into insoluble fibrils. A collagen molecule is about 300 nm long and 1–2 nm wide, and the diameter of the fibrils that are formed can range from 50–500 nm. In tendons, the fibrils then assemble further to form fascicles, which are about 10 mm in length with a diameter of 50–300 μm, and finally into a tendon fibre with a diameter of 100–500 μm.
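As a rough back-of-the-envelope illustration (using only the dimensions quoted above, with mid-range values assumed), the hierarchy spans roughly five orders of magnitude in diameter from molecule to fibre:

\frac{d_{\mathrm{fibre}}}{d_{\mathrm{molecule}}} \approx \frac{300\ \mu\mathrm{m}}{1.5\ \mathrm{nm}} = 2 \times 10^{5}

and a single ~300 nm tropocollagen molecule spans only

\frac{300\ \mathrm{nm}}{10\ \mathrm{mm}} = 3 \times 10^{-5}

of a fascicle's length, so enormous numbers of molecules must be staggered end-to-end and bonded laterally at every level of the structure.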
The collagen in tendons is held together by proteoglycan (a compound consisting of a protein bonded to glycosaminoglycan groups, present especially in connective tissue) components, including decorin and, in compressed regions of tendon, aggrecan, which are capable of binding to the collagen fibrils at specific locations. The proteoglycans are interwoven with the collagen fibrils, and their glycosaminoglycan (GAG) side chains have multiple interactions with the surface of the fibrils, showing that the proteoglycans are important structurally in the interconnection of the fibrils. The major GAG components of the tendon are dermatan sulfate and chondroitin sulfate, which associate with collagen and are involved in the fibril assembly process during tendon development. Dermatan sulfate is thought to be responsible for forming associations between fibrils, while chondroitin sulfate is thought to be more involved with occupying volume between the fibrils to keep them separated and help withstand deformation. The dermatan sulfate side chains of decorin aggregate in solution, and this behavior can assist with the assembly of the collagen fibrils. When decorin molecules are bound to a collagen fibril, their dermatan sulfate chains may extend and associate with other dermatan sulfate chains on decorin that is bound to separate fibrils, therefore creating interfibrillar bridges and eventually causing parallel alignment of the fibrils.
Tenocytes
The tenocytes produce the collagen molecules, which aggregate end-to-end and side-to-side to produce collagen fibrils. Fibril bundles are organized to form fibres with the elongated tenocytes closely packed between them. There is a three-dimensional network of cell processes associated with collagen in the tendon. The cells communicate with each other through gap junctions, and this signalling gives them the ability to detect and respond to mechanical loading. This communication relies essentially on two proteins: connexin 43, present where the cell processes meet and in the cell bodies, and connexin 32, present only where the processes meet.
Blood vessels may be visualized within the endotendon running parallel to collagen fibres, with occasional branching transverse anastomoses.
The internal tendon bulk is thought to contain no nerve fibres, but the epitenon and paratenon contain nerve endings, while Golgi tendon organs are present at the myotendinous junction between tendon and muscle.
Tendon length varies in all major groups and from person to person. In practice, tendon length is the deciding factor regarding actual and potential muscle size. For example, all other relevant biological factors being equal, a man with shorter tendons and a longer biceps muscle will have greater potential for muscle mass than a man with a longer tendon and a shorter muscle. Successful bodybuilders generally have shorter tendons. Conversely, in sports requiring athletes to excel in actions such as running or jumping, it is beneficial to have a longer than average Achilles tendon and a shorter calf muscle.
Tendon length is determined by genetic predisposition, and has not been shown to either increase or decrease in response to environment, unlike muscles, which can be shortened by trauma, use imbalances, and a lack of recovery and stretching. In addition, tendons allow muscles to be at an optimal distance from the site where they actively engage in movement, passing through regions where space is at a premium, such as the carpal tunnel.
List of tendons
[A table listing 55 of the approximately 4,000 tendons in the human body, together with its naming convention, is omitted here.]
Functions
Traditionally, tendons have been considered a mechanism by which muscles connect to bone, as well as to other muscles, functioning to transmit forces. This connection allows tendons to passively modulate forces during locomotion, providing additional stability with no active work. However, over the past two decades, much research has focused on the elastic properties of some tendons and their ability to function as springs. Not all tendons are required to perform the same functional role: some predominantly position limbs, such as the fingers when writing (positional tendons), while others act as springs to make locomotion more efficient (energy storing tendons). Energy storing tendons can store and recover energy at high efficiency. For example, during a human stride, the Achilles tendon stretches as the ankle joint dorsiflexes. During the last portion of the stride, as the foot plantar-flexes (pointing the toes down), the stored elastic energy is released. Furthermore, because the tendon stretches, the muscle is able to function with less or even no change in length, allowing the muscle to generate more force.
The mechanical properties of the tendon are dependent on the collagen fiber diameter and orientation. The collagen fibrils are parallel to each other and closely packed, but show a wave-like appearance due to planar undulations, or crimps, on a scale of several micrometers. In tendons, the collagen fibres have some flexibility due to the absence of hydroxyproline and proline residues at specific locations in the amino acid sequence, which allows the formation of other conformations such as bends or internal loops in the triple helix and results in the development of crimps. The crimps in the collagen fibrils allow the tendons to have some flexibility as well as a low compressive stiffness. In addition, because the tendon is a multi-stranded structure made up of many partially independent fibrils and fascicles, it does not behave as a single rod, and this property also contributes to its flexibility.
The proteoglycan components of tendons are also important to the mechanical properties. While the collagen fibrils allow tendons to resist tensile stress, the proteoglycans allow them to resist compressive stress. These molecules are very hydrophilic, meaning that they can absorb a large amount of water and therefore have a high swelling ratio. Since they are noncovalently bound to the fibrils, they may reversibly associate and disassociate so that the bridges between fibrils can be broken and reformed. This process may be involved in allowing the fibril to elongate and decrease in diameter under tension. However, the proteoglycans may also have a role in the tensile properties of tendon. The structure of tendon is effectively a fibre composite material, built as a series of hierarchical levels. At each level of the hierarchy, the collagen units are bound together by either collagen crosslinks or the proteoglycans, to create a structure highly resistant to tensile load. The elongation and the strain of the collagen fibrils alone have been shown to be much lower than the total elongation and strain of the entire tendon under the same amount of stress, demonstrating that the proteoglycan-rich matrix must also undergo deformation, and that stiffening of the matrix occurs at high strain rates. This deformation of the non-collagenous matrix occurs at all levels of the tendon hierarchy, and by modulating the organisation and structure of this matrix, the different mechanical properties required by different tendons can be achieved. Energy storing tendons have been shown to utilise significant amounts of sliding between fascicles to enable the high strain characteristics they require, whilst positional tendons rely more heavily on sliding between collagen fibres and fibrils. However, recent data suggest that energy storing tendons may also contain fascicles which are twisted, or helical, in nature, an arrangement that would be highly beneficial for providing the spring-like behaviour required in these tendons.
Mechanics
Tendons are viscoelastic structures, which means they exhibit both elastic and viscous behaviour. When stretched, tendons exhibit typical "soft tissue" behavior. The force-extension (stress-strain) curve starts with a very low stiffness region, as the crimp structure straightens and the collagen fibres align, suggesting a negative Poisson's ratio in the fibres of the tendon. More recently, tests carried out in vivo (through MRI) and ex vivo (through mechanical testing of various cadaveric tendon tissue) have shown that healthy tendons are highly anisotropic and exhibit a negative Poisson's ratio (auxetic) in some planes when stretched up to 2% along their length, i.e. within their normal range of motion. After this 'toe' region, the structure becomes significantly stiffer, and has a linear stress-strain curve until it begins to fail. The mechanical properties of tendons vary widely, as they are matched to the functional requirements of the tendon. The energy storing tendons tend to be more elastic, or less stiff, so they can more easily store energy, whilst the stiffer positional tendons tend to be a little more viscoelastic, and less elastic, so they can provide finer control of movement. A typical energy storing tendon will fail at around 12–15% strain, and a stress in the region of 100–150 MPa, although some tendons are notably more extensible than this, for example the superficial digital flexor in the horse, which stretches in excess of 20% when galloping. Positional tendons can fail at strains as low as 6–8%, but can have moduli in the region of 700–1000 MPa.
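As a rough worked example (a secant estimate from the failure figures quoted above, not the slope of the linear region, assuming mid-range values of 125 MPa and 13.5% strain), the average stiffness of an energy storing tendon up to failure is

E \approx \frac{\sigma_{\mathrm{fail}}}{\varepsilon_{\mathrm{fail}}} \approx \frac{125\ \mathrm{MPa}}{0.135} \approx 0.9\ \mathrm{GPa}

and, treating the curve as roughly linear, the elastic energy stored per unit volume at failure is about

u \approx \tfrac{1}{2}\,\sigma\,\varepsilon \approx \tfrac{1}{2}(125\ \mathrm{MPa})(0.135) \approx 8\ \mathrm{MJ/m^3}

which illustrates why such tendons can act as effective biological springs.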
Several studies have demonstrated that tendons respond to changes in mechanical loading with growth and remodeling processes, much like bones. In particular, a study showed that disuse of the Achilles tendon in rats resulted in a decrease in the average thickness of the collagen fiber bundles comprising the tendon. In humans, an experiment in which people were subjected to a simulated micro-gravity environment found that tendon stiffness decreased significantly, even when subjects were required to perform resistance exercises. These effects have implications in areas ranging from treatment of bedridden patients to the design of more effective exercises for astronauts.
Clinical significance
Injury
Tendons are subject to many types of injuries. There are various forms of tendinopathies or tendon injuries due to overuse. These types of injuries generally result in inflammation and degeneration or weakening of the tendons, which may eventually lead to tendon rupture. Tendinopathies can be caused by a number of factors relating to the tendon extracellular matrix (ECM), and their classification has been difficult because their symptoms and histopathology often are similar.
Types of tendinopathy include:
Tendinosis: non-inflammatory injury to the tendon at the cellular level. The degradation is caused by damage to collagen, cells, and the vascular components of the tendon, and is known to lead to rupture. Observations of tendons that have undergone spontaneous rupture have shown the presence of collagen fibrils that are not in the correct parallel orientation or are not uniform in length or diameter, along with rounded tenocytes, other cell abnormalities, and the ingrowth of blood vessels. Other forms of tendinosis that have not led to rupture have also shown the degeneration, disorientation, and thinning of the collagen fibrils, along with an increase in the amount of glycosaminoglycans between the fibrils.
Tendinitis: degeneration with inflammation of the tendon as well as vascular disruption.
Paratenonitis: inflammation of the paratenon, or paratendinous sheet located between the tendon and its sheath.
Tendinopathies may be caused by several intrinsic factors including age, body weight, and nutrition. The extrinsic factors are often related to sports and include excessive forces or loading, poor training techniques, and environmental conditions.
Healing
It was once believed that tendons could not undergo matrix turnover and that tenocytes were not capable of repair. However, it has since been shown that, throughout a person's lifetime, tenocytes in the tendon actively synthesize matrix components, as well as enzymes such as matrix metalloproteinases (MMPs) that can degrade the matrix. Tendons are capable of healing and recovering from injuries in a process that is controlled by the tenocytes and their surrounding extracellular matrix.
The three main stages of tendon healing are inflammation, repair or proliferation, and remodeling, which can be further divided into consolidation and maturation. These stages can overlap with each other. In the first stage, inflammatory cells such as neutrophils are recruited to the injury site, along with erythrocytes. Monocytes and macrophages are recruited within the first 24 hours, and phagocytosis of necrotic materials at the injury site occurs. After the release of vasoactive and chemotactic factors, angiogenesis and the proliferation of tenocytes are initiated. Tenocytes then move into the site and start to synthesize collagen III. After a few days, the repair or proliferation stage begins. In this stage, the tenocytes are involved in the synthesis of large amounts of collagen and proteoglycans at the site of injury, and the levels of GAG and water are high. After about six weeks, the remodeling stage begins. The first part of this stage is consolidation, which lasts from about six to ten weeks after the injury. During this time, the synthesis of collagen and GAGs is decreased, and the cellularity is also decreased as the tissue becomes more fibrous as a result of increased production of collagen I and the fibrils become aligned in the direction of mechanical stress. The final maturation stage occurs after ten weeks, and during this time there is an increase in crosslinking of the collagen fibrils, which causes the tissue to become stiffer. Gradually, over about one year, the tissue will turn from fibrous to scar-like.
Matrix metalloproteinases (MMPs) have a very important role in the degradation and remodeling of the ECM during the healing process after a tendon injury. Certain MMPs including MMP-1, MMP-2, MMP-8, MMP-13, and MMP-14 have collagenase activity, meaning that, unlike many other enzymes, they are capable of degrading collagen I fibrils. The degradation of the collagen fibrils by MMP-1 along with the presence of denatured collagen are factors that are believed to cause weakening of the tendon ECM and an increase in the potential for another rupture to occur. In response to repeated mechanical loading or injury, cytokines may be released by tenocytes and can induce the release of MMPs, causing degradation of the ECM and leading to recurring injury and chronic tendinopathies.
A variety of other molecules are involved in tendon repair and regeneration. There are five growth factors that have been shown to be significantly upregulated and active during tendon healing: insulin-like growth factor 1 (IGF-I), platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and transforming growth factor beta (TGF-β). These growth factors all have different roles during the healing process. IGF-1 increases collagen and proteoglycan production during the first stage of inflammation, and PDGF is also present during the early stages after injury and promotes the synthesis of other growth factors along with the synthesis of DNA and the proliferation of tendon cells. The three isoforms of TGF-β (TGF-β1, TGF-β2, TGF-β3) are known to play a role in wound healing and scar formation. VEGF is well known to promote angiogenesis and to induce endothelial cell proliferation and migration, and VEGF mRNA has been shown to be expressed at the site of tendon injuries along with collagen I mRNA. Bone morphogenetic proteins (BMPs) are a subgroup of TGF-β superfamily that can induce bone and cartilage formation as well as tissue differentiation, and BMP-12 specifically has been shown to influence formation and differentiation of tendon tissue and to promote fibrogenesis.
Effects of activity on healing
In animal models, extensive studies have been conducted to investigate the effects of mechanical strain in the form of activity level on tendon injury and healing. While stretching can disrupt healing during the initial inflammatory phase, it has been shown that controlled movement of the tendons after about one week following an acute injury can help to promote the synthesis of collagen by the tenocytes, leading to increased tensile strength and diameter of the healed tendons and fewer adhesions than tendons that are immobilized. In chronic tendon injuries, mechanical loading has also been shown to stimulate fibroblast proliferation and collagen synthesis along with collagen realignment, all of which promote repair and remodeling. To further support the theory that movement and activity assist in tendon healing, it has been shown that immobilization of the tendons after injury often has a negative effect on healing. In rabbits, collagen fascicles that are immobilized have shown decreased tensile strength, and immobilization also results in lower amounts of water, proteoglycans, and collagen crosslinks in the tendons.
Several mechanotransduction mechanisms have been proposed as reasons for the response of tenocytes to mechanical force that enable them to alter their gene expression, protein synthesis, and cell phenotype, and eventually cause changes in tendon structure. A major factor is mechanical deformation of the extracellular matrix, which can affect the actin cytoskeleton and therefore affect cell shape, motility, and function. Mechanical forces can be transmitted by focal adhesion sites, integrins, and cell-cell junctions. Changes in the actin cytoskeleton can activate integrins, which mediate "outside-in" and "inside-out" signaling between the cell and the matrix. G-proteins, which induce intracellular signaling cascades, may also be important, and ion channels are activated by stretching to allow ions such as calcium, sodium, or potassium to enter the cell.
Society and culture
Sinew was widely used throughout pre-industrial eras as a tough, durable fiber. Some specific uses include using sinew as thread for sewing, attaching feathers to arrows (see fletch), lashing tool blades to shafts, etc. It is also recommended in survival guides as a material from which strong cordage can be made for items like traps or living structures. Tendon must be treated in specific ways to function usefully for these purposes. Inuit and other circumpolar people utilized sinew as the only cordage for all domestic purposes due to the lack of other suitable fiber sources in their ecological habitats. The elastic properties of particular sinews were also used in composite recurved bows favoured by the steppe nomads of Eurasia, and Native Americans. The first stone throwing artillery also used the elastic properties of sinew.
Sinew makes for an excellent cordage material for three reasons: It is extremely strong, it contains natural glues, and it shrinks as it dries, doing away with the need for knots.
Culinary uses
Tendon (in particular, beef tendon) is used as a food in some Asian cuisines (often served at yum cha or dim sum restaurants). One popular dish is suan bao niu jin, in which the tendon is marinated in garlic. It is also sometimes found in the Vietnamese noodle dish phở.
Other animals
In some organisms, notably birds, and ornithischian dinosaurs, portions of the tendon can become ossified. In this process, osteocytes infiltrate the tendon and lay down bone as they would in sesamoid bone such as the patella. In birds, tendon ossification primarily occurs in the hindlimb, while in ornithischian dinosaurs, ossified axial muscle tendons form a latticework along the neural and haemal spines on the tail, presumably for support.
| Biology and health sciences | Tissues | null |
144929 | https://en.wikipedia.org/wiki/Boron%20group | Boron group | [Periodic table infobox showing the group 13 element for each of periods 2–7, with legend, omitted.]
The boron group are the chemical elements in group 13 of the periodic table, consisting of boron (B), aluminium (Al), gallium (Ga), indium (In), thallium (Tl) and nihonium (Nh). This group lies in the p-block of the periodic table. The elements in the boron group are characterized by having three valence electrons. These elements have also been referred to as the triels.
Several group 13 elements have biological roles in the ecosystem. Boron is a trace element in humans and is essential for some plants. Lack of boron can lead to stunted plant growth, while an excess can also cause harm by inhibiting growth. Aluminium has neither a biological role nor significant toxicity and is considered safe. Indium and gallium can stimulate metabolism; gallium is credited with the ability to bind itself to iron proteins. Thallium is highly toxic, interfering with the function of numerous vital enzymes, and has seen use as a pesticide.
Characteristics
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior:
[Table of electron configurations for the group 13 elements omitted.]
The boron group is notable for trends in the electron configuration, as shown above, and in some of its elements' characteristics. Boron differs from the other group members in its hardness, refractivity and reluctance to participate in metallic bonding. An example of a trend in reactivity is boron's tendency to form reactive compounds with hydrogen.
Although situated in the p-block, the group is notorious for violation of the octet rule by its members boron and (to a lesser extent) aluminium. All members of the group are characterized as trivalent.
Chemical reactivity
Hydrides
Most of the elements in the boron group show increasing reactivity as the elements get heavier in atomic mass and higher in atomic number. Boron, the first element in the group, is generally unreactive with many elements except at high temperatures, although it is capable of forming many compounds with hydrogen, sometimes called boranes. The simplest borane is diborane, or B2H6. Another example is B10H14.
The next group-13 elements, aluminium and gallium, form fewer stable hydrides, although both AlH3 and GaH3 exist. Indium, the next element in the group, is not known to form many hydrides, except in complex compounds such as the phosphine complex (Cy=cyclohexyl). No stable compound of thallium and hydrogen has been synthesized in any laboratory.
Oxides
All of the boron-group elements are known to form a trivalent oxide, with two atoms of the element bonded covalently with three atoms of oxygen. These elements show a trend of increasing pH (from acidic to basic). Boron oxide (B2O3) is slightly acidic, aluminium and gallium oxide (Al2O3 and Ga2O3 respectively) are amphoteric, indium(III) oxide (In2O3) is nearly amphoteric, and thallium(III) oxide (Tl2O3) is a Lewis base because it dissolves in acids to form salts. Each of these compounds is stable, but thallium oxide decomposes at temperatures higher than 875 °C.
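To make the acidity trend concrete, the following standard textbook reactions (supplied here for illustration, not taken from the source article) show amphoteric Al2O3 dissolving in both acid and alkali, while basic Tl2O3 dissolves only in acid:

\mathrm{Al_2O_3 + 6\,HCl \rightarrow 2\,AlCl_3 + 3\,H_2O}
\mathrm{Al_2O_3 + 2\,NaOH + 3\,H_2O \rightarrow 2\,Na[Al(OH)_4]}
\mathrm{Tl_2O_3 + 6\,HNO_3 \rightarrow 2\,Tl(NO_3)_3 + 3\,H_2O}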
Halides
The elements in group 13 are also capable of forming stable compounds with the halogens, usually with the formula MX3 (where M is a boron-group element and X is a halogen). Fluorine, the first halogen, is able to form stable compounds with every element that has been tested (except neon and helium), and the boron group is no exception. It is even hypothesized that nihonium could form a compound with fluorine, NhF3, before spontaneously decaying due to nihonium's radioactivity. Chlorine also forms stable compounds with all of the elements in the boron group, including thallium, and is hypothesized to react with nihonium. All of the elements will react with bromine under the right conditions, as with the other halogens but less vigorously than either chlorine or fluorine. Iodine will react with all natural elements in the periodic table except for the noble gases, and is notable for its explosive reaction with aluminium to form AlI3 (see the equation below). Astatine, the fifth halogen, has only formed a few compounds, due to its radioactivity and short half-life, and no reports of a compound with an At–Al, –Ga, –In, –Tl, or –Nh bond have been seen, although scientists think that it should form salts with metals. Tennessine, the sixth and final member of group 17, may also form compounds with the elements in the boron group; however, because tennessine is purely synthetic and must be created artificially, its chemistry has not been investigated, and any compounds would likely decay nearly instantly after formation due to its extreme radioactivity.
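For instance, the vigorous aluminium-iodine reaction mentioned above follows the general MX3 pattern; the standard balanced equation (supplied here for illustration) is

\mathrm{2\,Al + 3\,I_2 \rightarrow 2\,AlI_3}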
Physical properties
The elements in the boron group have broadly similar physical properties, although boron is an exception in most respects. For example, all of the elements in the boron group, except for boron itself, are soft. Moreover, all of the other elements in group 13 are relatively reactive at moderate temperatures, while boron's reactivity becomes comparable only at very high temperatures. One characteristic that all do have in common is having three electrons in their valence shells. Boron, being a metalloid, is a thermal and electrical insulator at room temperature, but a good conductor of heat and electricity at high temperatures. Unlike boron, the metals in the group are good conductors under normal conditions. This is in accordance with the long-standing generalization that all metals conduct heat and electricity better than most non-metals.
Oxidation states
The inert s-pair effect is significant in the group-13 elements, especially the heavier ones like thallium. This results in a variety of oxidation states. In the lighter elements, the +3 state is the most stable, but the +1 state becomes more prevalent with increasing atomic number, and is the most stable for thallium. Boron is capable of forming compounds with lower oxidation states of +1 or +2, and aluminium can do the same. Gallium can form compounds with the oxidation states +1, +2 and +3. Indium is like gallium, but its +1 compounds are more stable than those of the lighter elements. The strength of the inert-pair effect is maximal in thallium, which is generally stable only in the oxidation state of +1, although the +3 state is seen in some compounds. Stable, monomeric gallium, indium and thallium radicals with a formal oxidation state of +2 have also been reported. Nihonium may have a +5 oxidation state.
Periodic trends
There are several trends that can be observed in the properties of the boron group members. The boiling points of these elements drop from period to period, while densities tend to rise.
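For reference, approximate handbook values (rounded; supplied here for illustration, not from the source article) are consistent with both trends:

Element: density (g/cm³) / boiling point (°C)
Boron: 2.3 / ~3900
Aluminium: 2.7 / ~2500
Gallium: 5.9 / ~2200
Indium: 7.3 / ~2100
Thallium: 11.9 / ~1470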
Nuclear
With the exception of the synthetic nihonium, all of the elements of the boron group have stable isotopes. Because all their atomic numbers are odd, boron, gallium and thallium have only two stable isotopes, while aluminium and indium are monoisotopic, having only one, although most indium found in nature is the weakly radioactive 115In. 10B and 11B are both stable, as are 27Al, 69Ga and 71Ga, 113In, and 203Tl and 205Tl. All of these isotopes are readily found in macroscopic quantities in nature. In theory, though, all isotopes with an atomic number greater than 66 are expected to be unstable to alpha decay. Conversely, all elements with atomic numbers less than or equal to 66 (except Tc, Pm, Sm and Eu) have at least one isotope that is theoretically energetically stable to all forms of decay (with the exception of proton decay, which has never been observed, and spontaneous fission, which is theoretically possible for elements with atomic numbers greater than 40).
Like all other elements, the elements of the boron group have radioactive isotopes, either found in trace quantities in nature or produced synthetically. The longest-lived of these unstable isotopes is the indium isotope 115In, whose half-life is extremely long; despite its slight radioactivity, this isotope makes up the vast majority of all naturally occurring indium. The shortest-lived is 7B, the boron isotope with the fewest neutrons and a half-life long enough to measure. Some radioisotopes have important roles in scientific research; a few are used in the production of goods for commercial use or, more rarely, as a component of finished products.
History
The boron group has had many names over the years. According to former conventions it was Group IIIB in the European naming system and Group IIIA in the American. The group has also gained two collective names, "earth metals" and "triels". The latter name is derived from the Latin prefix tri- ("three") and refers to the three valence electrons that all of these elements, without exception, have in their valence shells. The name "triels" was first suggested by the International Union of Pure and Applied Chemistry (IUPAC) in 1970.
Boron was known to the ancient Egyptians, but only in the mineral borax. The metalloid element was not known in its pure form until 1808, when Humphry Davy was able to extract it by the method of electrolysis. Davy devised an experiment in which he dissolved a boron-containing compound in water and sent an electric current through it, causing the elements of the compound to separate into their pure states. To produce larger quantities he shifted from electrolysis to reduction with sodium. Davy named the element boracium. At the same time two French chemists, Joseph Louis Gay-Lussac and Louis Jacques Thénard, used iron to reduce boric acid. The boron they produced was oxidized to boron oxide.
Aluminium, like boron, was first known in minerals before it was finally extracted from alum, a common mineral in some areas of the world. Antoine Lavoisier and Humphry Davy had each separately tried to extract it. Although neither succeeded, Davy had given the metal its current name. It was only in 1825 that the Danish scientist Hans Christian Ørsted successfully prepared a rather impure form of the element. Many improvements followed, a significant advance being made just two years later by Friedrich Wöhler, whose slightly modified procedure still yielded an impure product. The first pure sample of aluminium is credited to Henri Etienne Sainte-Claire Deville, who substituted sodium for potassium in the procedure. At that time aluminium was considered precious, and it was displayed next to such metals as gold and silver. The method used today, electrolysis of aluminium oxide dissolved in cryolite, was developed by Charles Martin Hall and Paul Héroult in the late 1880s.
Thallium, the heaviest stable element in the boron group, was discovered by William Crookes and Claude-Auguste Lamy in 1861. Unlike gallium and indium, thallium had not been predicted by Dmitri Mendeleev, having been discovered before Mendeleev invented the periodic table. As a result, no one was really looking for it until the 1850s when Crookes and Lamy were examining residues from sulfuric acid production. In the spectra they saw a completely new line, a streak of deep green, which Crookes named after the Greek word θαλλός (), referring to a green shoot or twig. Lamy was able to produce larger amounts of the new metal and determined most of its chemical and physical properties.
Indium is the fourth element of the boron group but was discovered before the third, gallium, and after the fifth, thallium. In 1863 Ferdinand Reich and his assistant, Hieronymous Theodor Richter, were looking in a sample of the mineral zinc blende, also known as sphalerite (ZnS), for the spectroscopic lines of the newly discovered element thallium. Reich heated the ore in a coil of platinum metal and observed the lines that appeared in a spectroscope. Instead of the green thallium lines that he expected, he saw a new line of deep indigo-blue. Concluding that it must come from a new element, they named it after the characteristic indigo color it had produced.
Gallium minerals were not known before August 1875, when the element itself was discovered. It was one of the elements that the inventor of the periodic table, Dmitri Mendeleev, had predicted to exist six years earlier. While examining the spectroscopic lines in zinc blende the French chemist Paul Emile Lecoq de Boisbaudran found indications of a new element in the ore. In just three months he was able to produce a sample, which he purified by dissolving it in a potassium hydroxide (KOH) solution and sending an electric current through it. The next month he presented his findings to the French Academy of Sciences, naming the new element after the Greek name for Gaul, modern France.
The last confirmed element in the boron group, nihonium, was not discovered but rather created or synthesized. The element's synthesis was first reported by the Dubna Joint Institute for Nuclear Research team in Russia and the Lawrence Livermore National Laboratory in the United States, though it was the Dubna team who successfully conducted the experiment in August 2003. Nihonium was discovered in the decay chain of moscovium, which produced a few precious atoms of nihonium. The results were published in January of the following year. Since then around 13 atoms have been synthesized and various isotopes characterized. However, their results did not meet the stringent criteria for being counted as a discovery, and it was the later RIKEN experiments of 2004 aimed at directly synthesizing nihonium that were acknowledged by IUPAC as the discovery.
Etymology
The name "boron" comes from the Arabic word for the mineral borax, (بورق, boraq) which was known before boron was ever extracted. The "-on" suffix is thought to have been taken from "carbon". Aluminium was named by Humphry Davy in the early 1800s. It is derived from the Greek word alumen, meaning bitter salt, or the Latin alum, the mineral. Gallium is derived from the Latin Gallia, referring to France, the place of its discovery. Indium comes from the Latin word indicum, meaning indigo dye, and refers to the element's prominent indigo spectroscopic line. Thallium, like indium, is named after the Greek word for the color of its spectroscopic line: , meaning a green twig or shoot. "Nihonium" is named after Japan (Nihon in Japanese), where it was discovered.
Occurrence and abundance
Boron
Boron, with its atomic number of 5, is a very light element. Almost never found free in nature, it is very low in abundance, composing only 0.001% (10 ppm) of the Earth's crust. It is known to occur in over a hundred different minerals and ores, however: the main source is borax, but it is also found in colemanite, boracite, kernite, tusionite, berborite and fluoborite. Major world miners and extractors of boron include Turkey, the United States, Argentina, China, Bolivia and Peru. Turkey is by far the most prominent of these, accounting for around 70% of all boron extraction in the world. The United States is second, most of its yield coming from the state of California.
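As a quick check of the percent-to-ppm conversions used throughout this section:

0.001\% = \frac{0.001}{100} = 10^{-5} = 10 \times 10^{-6} = 10\ \mathrm{ppm}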
Aluminium
Aluminium, in contrast to boron, is the most abundant metal in the Earth's crust, and the third most abundant element. It composes about 8.2% (82,000 ppm) of the Earth's crust, surpassed only by oxygen and silicon. It is like boron, however, in that it is uncommon in nature as a free element. This is due to aluminium's tendency to attract oxygen atoms, forming several aluminium oxides. Aluminium is now known to occur in nearly as many minerals as boron, including garnets, turquoises and beryls, but the main source is the ore bauxite. The world's leading countries in the extraction of aluminium are Ghana, Suriname, Russia and Indonesia, followed by Australia, Guinea and Brazil.
Gallium
Gallium is a relatively rare element in the Earth's crust and is not found in as many minerals as its lighter homologues. Its abundance on the Earth is a mere 0.0018% (18 ppm). Its production is very low compared to other elements, but has increased greatly over the years as extraction methods have improved. Gallium can be found as a trace in a variety of ores, including bauxite and sphalerite, and in such minerals as diaspore and germanite. Trace amounts have been found in coal as well.
The gallium content is greater in a few minerals, including gallite (CuGaS2), but these are too rare to be counted as major sources and make negligible contributions to the world's supply.
Indium
Indium is another rare element in the boron group, at only 0.000005% (0.05 ppm) of the Earth's crust. Very few indium-containing minerals are known, all of them scarce: an example is indite. Indium is found in several zinc ores, but only in minute quantities; likewise some copper and lead ores contain traces. As is the case for most other elements found in ores and minerals, the indium extraction process has become more efficient in recent years, ultimately leading to larger yields. Canada is the world's leader in indium reserves, but both the United States and China have comparable amounts.
Thallium
Thallium is of intermediate abundance in the Earth's crust, estimated to be 0.00006% (0.6 ppm). It is found on the ground in some rocks, in the soil and in clay. Many sulfide ores of iron, zinc and cobalt contain thallium. In minerals it is found in moderate quantities: some examples are crookesite (in which it was first discovered), lorandite, routhierite, bukovite, hutchinsonite and sabatierite. There are other minerals that contain small amounts of thallium, but they are very rare and do not serve as primary sources.
Nihonium
Nihonium is an element that is never found in nature but has been created in a laboratory. It is therefore classified as a synthetic element with no stable isotopes.
Applications
With the exception of synthetic nihonium, all the elements in the boron group have numerous uses and applications in the production and content of many items.
Boron
Boron has found many industrial applications in recent decades, and new ones are still being found. A common application is in fiberglass. There has been rapid expansion in the market for borosilicate glass; most notable among its special qualities is a much greater resistance to thermal expansion than regular glass. Another commercially expanding use of boron and its derivatives is in ceramics. Several boron compounds, especially the oxides, have unique and valuable properties that have led to their substitution for other materials that are less useful. Boron may be found in pots, vases, plates, and ceramic pan-handles for its insulating properties.
The compound borax is used in bleaches, for both clothes and teeth. The hardness of boron and some of its compounds give it a wide array of additional uses. A small part (5%) of the boron produced finds use in agriculture.
Aluminium
Aluminium is a metal with numerous familiar uses in everyday life. It is most often encountered in construction materials, in electrical devices, especially as the conductor in cables, and in tools and vessels for cooking and preserving food. Aluminium's lack of reactivity with food products makes it particularly useful for canning. Its high affinity for oxygen makes it a powerful reducing agent. Finely powdered pure aluminium oxidizes rapidly in air, generating a huge amount of heat in the process, leading to applications in welding and elsewhere where a large amount of heat is needed. Aluminium is a component of alloys used for making lightweight bodies for aircraft. Cars also sometimes incorporate aluminium in their framework and body, and there are similar applications in military equipment. Less common uses include components of decorations and some guitars. The element also sees use in a diverse range of electronics.
Gallium
Gallium and its derivatives have only found applications in recent decades. Gallium arsenide has been used in semiconductors, in amplifiers, in solar cells (for example in satellites) and in tunnel diodes for FM transmitter circuits. Gallium alloys are used mostly for dental purposes. Gallium ammonium chloride is used for the leads in transistors. A major application of gallium is in LED lighting. The pure element has been used as a dopant in semiconductors, and has additional uses in electronic devices with other elements. Gallium has the property of being able to 'wet' glass and porcelain, and thus can be used to make mirrors and other highly reflective objects. Gallium can be added to alloys of other metals to lower their melting points.
Indium
Indium's uses can be divided into four categories: the largest part (70%) of the production is used for coatings, usually combined as indium tin oxide (ITO); a smaller portion (12%) goes into alloys and solders; a similar amount is used in electrical components and in semiconductors; and the final 6% goes to minor applications. Among the items in which indium may be found are platings, bearings, display devices, heat reflectors, phosphors, and nuclear control rods. Indium tin oxide has found a wide range of applications, including glass coatings, solar panels, streetlights, electrophoretic displays (EPDs), electroluminescent displays (ELDs), plasma display panels (PDPs), electrochromic displays (ECDs), field emission displays (FEDs), sodium lamps, windshield glass and cathode-ray tubes, making it the single most important indium compound.
Thallium
Thallium is used in its elemental form more often than the other boron-group elements. Uncompounded thallium is used in low-melting glasses, photoelectric cells, switches, and mercury alloys for low-range glass thermometers, as well as in the production of thallium salts. It can be found in lamps and electronics, and is also used in myocardial imaging. The possibility of using thallium in semiconductors has been researched, and it is a known catalyst in organic synthesis. Thallium hydroxide (TlOH) is used mainly in the production of other thallium compounds. Thallium sulfate (Tl2SO4) is an outstanding vermin-killer, and it is a principal component in some rat and mouse poisons. However, the United States and some European countries have banned the substance because of its high toxicity to humans. In other countries, though, the market for the substance is growing. Tl2SO4 is also used in optical systems.
Biological role
None of the group-13 elements has a major biological role in complex animals, but some are at least associated with living beings. As in other groups, the lighter elements usually have more biological roles than the heavier. The heaviest ones are toxic, as are the other elements in the same periods. Boron is essential in most plants, whose cells use it for such purposes as strengthening cell walls. It is found in humans as a trace element, but there is ongoing debate over its significance in human nutrition. Boron's chemistry does allow it to form complexes with such important molecules as carbohydrates, so it is plausible that it could be of greater use in the human body than previously thought. Boron has also been shown to be able to replace iron in some of its functions, particularly in the healing of wounds. Aluminium has no known biological role in plants or animals, despite its widespread occurrence in nature. Gallium is not essential for the human body, but its relation to iron(III) allows it to become bound to proteins that transport and store iron. Gallium can also stimulate metabolism. Indium and its heavier homologues have no biological role, although indium salts in small doses, like gallium, can stimulate metabolism.
Toxicity
Each element of the boron group has a unique toxicity profile to plants and animals.
As an example of boron toxicity, it has been observed to harm barley in concentrations exceeding 20 mM. The symptoms of boron toxicity are numerous in plants, complicating research: they include reduced cell division, decreased shoot and root growth, decreased production of leaf chlorophyll, inhibition of photosynthesis, lowered stomatal conductance, reduced proton extrusion from roots, and deposition of lignin and suberin.
Aluminium does not present a prominent toxicity hazard in small quantities, but very large doses are slightly toxic. Gallium is not considered toxic, although it may have some minor effects. Indium is not toxic and can be handled with nearly the same precautions as gallium, but some of its compounds are slightly to moderately toxic.
Thallium, unlike gallium and indium, is extremely toxic, and has caused many poisoning deaths. Its most noticeable effect, apparent even from tiny doses, is hair loss all over the body, but it causes a wide range of other symptoms, disrupting and eventually halting the functions of many organs. The nearly colorless, odorless and tasteless nature of thallium compounds has led to their use by murderers. The incidence of thallium poisoning, intentional and accidental, increased when thallium (with its similarly toxic compound, thallium sulfate) was introduced to control rats and other pests. The use of thallium pesticides has therefore been prohibited since 1975 in many countries, including the USA.
Nihonium is a highly unstable element and decays by emitting alpha particles. Due to its strong radioactivity, it would be expected to be extremely toxic, although significant quantities of nihonium (larger than a few atoms) have not yet been assembled.
| Physical sciences | Group 13 | Chemistry |
144940 | https://en.wikipedia.org/wiki/Longitudinal%20wave | Longitudinal wave | Longitudinal waves are waves which oscillate in the direction parallel to the direction in which the wave travels: the displacement of the medium is in the same (or opposite) direction as the wave propagation. Mechanical longitudinal waves are also called compressional or compression waves, because they produce compression and rarefaction when travelling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P waves (created by earthquakes and explosions).
The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe some bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support.
Nomenclature
"Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience.
While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "ℓ-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books.
Sound waves
For longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula
$y(x,t) = y_0 \cos\!\left(\omega\left(t - \frac{x}{c}\right)\right)$
where:
$y$ is the displacement of the point on the traveling sound wave;
$x$ is the distance from the point to the wave's source;
$t$ is the time elapsed;
$y_0$ is the amplitude of the oscillations;
$c$ is the speed of the wave; and
$\omega$ is the angular frequency of the wave.
The quantity $x/c$ is the time that the wave takes to travel the distance $x$.
The ordinary frequency ($f$) of the wave is given by $f = \frac{\omega}{2\pi}$.
The wavelength can be calculated as the relation between a wave's speed and ordinary frequency: $\lambda = \frac{c}{f}$.
For sound waves, the amplitude of the wave is the difference between the pressure of the undisturbed air and the maximum pressure caused by the wave.
Sound's propagation speed depends on the type, temperature, and composition of the medium through which it propagates.
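As a minimal sketch of the harmonic-wave relations above, assuming illustrative values for a 1 kHz tone in air (none of these numbers come from the article):

```python
import math

# Illustrative values for a 1 kHz tone in air (assumed, not from the article)
y_0 = 1e-6                      # amplitude of the oscillations, m
c = 343.0                       # speed of the wave, m/s
omega = 2 * math.pi * 1000.0    # angular frequency, rad/s

def displacement(x, t):
    """y(x, t) = y_0 * cos(omega * (t - x / c)) for a harmonic longitudinal wave."""
    return y_0 * math.cos(omega * (t - x / c))

f = omega / (2 * math.pi)       # ordinary frequency, Hz
wavelength = c / f              # wavelength, m

print(f"f = {f:.0f} Hz, wavelength = {wavelength:.3f} m")
print(f"y(x=1 m, t=1 ms) = {displacement(1.0, 1e-3):.3e} m")
```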
Speed of longitudinal waves
Isotropic medium
For isotropic solids and liquids, the speed of a longitudinal wave can be described by
$v_l = \sqrt{\frac{E_l}{\rho}}$
where
$E_l$ is the elastic modulus, such that
$E_l = K + \frac{4}{3}G$
where $G$ is the shear modulus and $K$ is the bulk modulus;
$\rho$ is the mass density of the medium.
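A minimal numeric sketch of this speed formula, using rough handbook values for steel as assumptions:

```python
import math

def longitudinal_speed(K, G, rho):
    """v_l = sqrt((K + 4G/3) / rho) for an isotropic solid or liquid."""
    return math.sqrt((K + 4.0 * G / 3.0) / rho)

# Rough handbook values for steel (assumptions, not given in the article)
K = 160e9     # bulk modulus, Pa
G = 80e9      # shear modulus, Pa
rho = 7850.0  # mass density, kg/m^3

print(f"v_l in steel ~ {longitudinal_speed(K, G, rho):.0f} m/s")  # roughly 5.8 km/s
```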
Attenuation of longitudinal waves
The attenuation of a wave in a medium describes the loss of energy a wave carries as it propagates throughout the medium. This is caused by the scattering of the wave at interfaces, the loss of energy due to the friction between molecules, or geometric divergence. The study of attenuation of elastic waves in materials has increased in recent years, particularly within the study of polycrystalline materials where researchers aim to "nondestructively evaluate the degree of damage of engineering components" and to "develop improved procedures for characterizing microstructures" according to a research team led by R. Bruce Thompson in a Wave Motion publication.
Attenuation in viscoelastic materials
In viscoelastic materials, the attenuation coefficients per length $\alpha_l$ for longitudinal waves and $\alpha_t$ for transverse waves must satisfy the following ratio:
$\frac{\alpha_l}{\alpha_t} \geq \frac{4}{3}\left(\frac{v_t}{v_l}\right)^3$
where $v_t$ and $v_l$ are the transverse and longitudinal wave speeds respectively.
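A quick numerical check of this lower bound, for assumed illustrative wave speeds:

```python
def attenuation_ratio_bound(v_t, v_l):
    """Lower bound (4/3) * (v_t / v_l)**3 on alpha_l / alpha_t."""
    return (4.0 / 3.0) * (v_t / v_l) ** 3

# Assumed illustrative speeds for a stiff polymer (not from the article)
v_l, v_t = 2500.0, 1100.0  # m/s
print(f"alpha_l / alpha_t >= {attenuation_ratio_bound(v_t, v_l):.3f}")
```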
Attenuation in polycrystalline materials
Polycrystalline materials are made up of various crystal grains which form the bulk material. Due to the difference in crystal structure and properties of these grains, when a wave propagating through a poly-crystal crosses a grain boundary, a scattering event occurs, causing scattering-based attenuation of the wave. Additionally, it has been shown that the ratio rule for viscoelastic materials,
$\frac{\alpha_l}{\alpha_t} \geq \frac{4}{3}\left(\frac{v_t}{v_l}\right)^3$
applies equally successfully to polycrystalline materials.
A current prediction for modeling attenuation of waves in polycrystalline materials with elongated grains is the second-order approximation (SOA) model, which accounts for the second order of inhomogeneity, allowing for the consideration of multiple scattering in the crystal system. This model predicts that the shape of the grains in a poly-crystal has little effect on attenuation.
Pressure waves
The equations for sound in a fluid given above also apply to acoustic waves in an elastic solid. Although solids also support transverse waves (known as S-waves in seismology), longitudinal sound waves in the solid exist with a velocity and wave impedance dependent on the material's density and its rigidity, the latter of which is described (as with sound in a gas) by the material's bulk modulus.
In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster.
Electromagnetics
Maxwell's equations lead to the prediction of electromagnetic waves in a vacuum, which are strictly transverse waves: the electric and magnetic fields of which the wave consists are perpendicular to the direction of the wave's propagation. However, plasma waves are longitudinal; these are not electromagnetic waves but density waves of charged particles, which can couple to the electromagnetic field.
After Heaviside's attempts to generalize Maxwell's equations, Heaviside concluded that electromagnetic waves were not to be found as longitudinal waves in "free space" or homogeneous media. Maxwell's equations, as we now understand them, retain that conclusion: in free space or other uniform isotropic dielectrics, electromagnetic waves are strictly transverse. However, electromagnetic waves can display a longitudinal component in the electric and/or magnetic fields when traversing birefringent materials, or inhomogeneous materials, especially at interfaces (surface waves, for instance) such as Zenneck waves.
In the development of modern physics, Alexandru Proca (1897–1955) was known for developing relativistic quantum field equations bearing his name (Proca's equations), which apply to the massive vector spin-1 mesons. In recent decades, some other theorists, such as Jean-Pierre Vigier and Bo Lehnert of the Swedish Royal Society, have used the Proca equation in an attempt to demonstrate photon mass as a longitudinal electromagnetic component of Maxwell's equations, suggesting that longitudinal electromagnetic waves could exist in a Dirac polarized vacuum. However, photon rest mass is strongly doubted by almost all physicists and is incompatible with the Standard Model of physics.
| Physical sciences | Waves | Physics |
145020 | https://en.wikipedia.org/wiki/Sintering | Sintering | Sintering or frittage is the process of compacting and forming a solid mass of material by pressure or heat without melting it to the point of liquefaction. Sintering happens as part of a manufacturing process used with metals, ceramics, plastics, and other materials. The atoms/molecules in the sintered material diffuse across the boundaries of the particles, fusing the particles together and creating a solid piece.
Since the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points, such as tungsten and molybdenum. The study of sintering in metallurgical powder-related processes is known as powder metallurgy.
An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall to a glacier, or the formation of a hard snowball by pressing loose snow together.
The material produced by sintering is called sinter. The word sinter comes from the Middle High German , a cognate of English cinder.
General sintering
Sintering is generally considered successful when the process reduces porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity. In some special cases, sintering is carefully applied to enhance the strength of a material while preserving porosity (e.g. in filters or catalysts, where gas adsorption is a priority). During the sintering process, atomic diffusion drives powder surface elimination in different stages, starting at the formation of necks between powders to final elimination of small pores at the end of the process.
The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. It forms new but lower-energy solid-solid interfaces with a net decrease in total free energy. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometers, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.
The ratio of bond area to particle size is a determining factor for properties such as strength and electrical conductivity. To yield the desired bond area, temperature and initial grain size are precisely controlled over the sintering process. At steady state, the particle radius and the vapor pressure are proportional to $(p_0)^{2/3}$ and to $(p_0)^{1/3}$, respectively.
The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, particle count would decrease and pores would be destroyed. Pore elimination is fastest in samples with many pores of uniform size because the boundary diffusion distance is smallest. During the latter portions of the process, boundary and lattice diffusion from the boundary become important.
Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, particle size, particle distribution, material composition, and often other properties of the sintering environment itself.
Ceramic sintering
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects. Sintering and vitrification (which requires higher temperatures) are the two main mechanisms behind the strength and stability of ceramics. Sintered ceramic objects are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide, and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering.
Sintering begins when sufficient temperatures have been reached to mobilize the active elements in the ceramic material, which can start below their melting point (typically at 50–80% of their melting point), e.g. as premelting. When sufficient sintering has taken place, the ceramic body will no longer break down in water; additional sintering can reduce the porosity of the ceramic, increase the bond area between ceramic particles, and increase the material strength.
Industrial procedures to create ceramic objects via sintering of powders generally include:
mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry
spray-drying the slurry
putting the spray dried powder into a mold and pressing it to form a green body (an unsintered ceramic item)
heating the green body at low temperature to burn off the binder
sintering at a high temperature to fuse the ceramic particles together.
All the characteristic temperatures associated with phase transformation, glass transitions, and melting points, occurring during a sinterisation cycle of a particular ceramic's formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sinterisation is associated with a remarkable shrinkage of the material because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material.
Sintering is performed at high temperature. Additionally, a second and/or third external force (such as pressure, electric current) could be used. A commonly used second external force is pressure. Sintering performed by only heating is generally termed "pressureless sintering", which is possible with graded metal-ceramic composites, utilising a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.
To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Sintering of metallic powders
Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus $E_n$ of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder for lower sintering temperatures, but depends upon the density of the final product:
$\frac{E_n}{E} = \left(\frac{D}{d}\right)^{3.4}$
where D is the density of the sintered product, E is the Young's modulus of fully dense iron, and d is the maximum density of iron.
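A minimal sketch of this density relation; the modulus and density of fully dense iron below are assumed handbook values, not figures from the article:

```python
def sintered_modulus(D, E=211e9, d=7.87e3):
    """E_n = E * (D / d)**3.4 for sintered iron.

    E: Young's modulus of fully dense iron, Pa (handbook value, assumed)
    d: maximum (theoretical) density of iron, kg/m^3 (assumed)
    """
    return E * (D / d) ** 3.4

# A part sintered to ~90% of theoretical density (illustrative)
D = 0.90 * 7.87e3
print(f"E_n ~ {sintered_modulus(D) / 1e9:.0f} GPa")  # about 70% of the fully dense modulus
```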
Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.
A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide.
Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.
Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.
Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.
Sintering of powders containing precious metals such as silver and gold is used to make small jewelry items. Evaporative self-assembly of colloidal silver nanocubes into supercrystals has been shown to allow the sintering of electrical joints at temperatures lower than 200 °C.
Advantages
Particular advantages of the powder technology include:
Very high levels of purity and uniformity in starting materials
Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
Stabilization of the details of repetitive operations, by control of grain size during the input stages
Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
No deformation needed to produce directional elongation of grains
Capability to produce materials of controlled, uniform porosity.
Capability to produce nearly net-shaped objects.
Capability to produce materials which cannot be produced by any other technology.
Capability to fabricate high-strength material like turbine blades.
After sintering the mechanical strength to handling becomes higher.
The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.
Disadvantages
Particular disadvantages of the powder technology include:
sintering cannot create uniform sizes
micro- and nanostructures produced before sintering are often destroyed.
Plastics sintering
Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.
Liquid phase sintering
For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si3N4, WC, SiC, and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. The process of liquid phase sintering has three stages:
rearrangement – As the liquid melts capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
solution-precipitation – In areas where capillary pressures are high (particles are close together) atoms will preferentially go into solution and then precipitate in areas of lower chemical potential where particles are not close or in contact. This is called contact flattening. This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur where smaller particles will go into solution preferentially and precipitate on larger particles leading to densification.
final densification – densification of solid skeletal network, liquid movement from efficiently packed regions into pores.
For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering was successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films.
Electric current assisted sintering
These techniques employ electric currents to drive or enhance sintering. English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.
In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.
In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia. The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush.
Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating, was patented by G. F. Taylor in 1932. This originated sintering methods employing pulsed or alternating current, eventually superimposed to a direct current. Those techniques have been developed over many decades and summarized in more than 640 patents.
Of these technologies the most well known is resistance sintering (also called hot pressing) and spark plasma sintering, while electro sinter forging is the latest advancement in this field.
Spark plasma sintering
In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of the metallic/ceramic powder compacts. However, after commercialization it was determined there is no plasma, so the proper name is spark sintering, as coined by Lenel. The electric field driven densification supplements sintering with a form of hot pressing, to enable lower temperatures and shorter times than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make it (highly) unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as field assisted sintering technique (FAST), electric field assisted sintering (EFAS), and direct current sintering (DCS) have been implemented by the sintering community. Using a direct current (DC) pulse as the electric current, spark plasma, spark impact pressure, joule heating, and an electrical field diffusion effect would be created. By modifying the graphite die design and its assembly, it is possible to perform pressureless sintering in a spark plasma sintering facility. This modified die design setup is reported to synergize the advantages of both conventional pressureless sintering and spark plasma sintering techniques.
Electro sinter forging
Electro sinter forging is an electric current assisted sintering (ECAS) technology originated from capacitor discharge sintering. It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, nitinol and other metals and intermetallics. It is characterized by a very low sintering time, allowing machines to sinter at the same speed as a compaction press.
Pressureless sintering
Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occurs with more traditional hot pressing methods.
The powder compact (if a ceramic) can be created by slip casting, injection moulding, and cold isostatic pressing. After presintering, the final green compact can be machined to its final shape before being sintered.
Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used.
Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for CRH method. Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than CRH temperature mode.
In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. By definition, the relative density, $\rho_{rel}$, in the open-porosity phase is lower than 90%. Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples.
Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant.
Microwave sintering
In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple and others exhibit run-away behavior, so it is restricted in usefulness. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required and there are improvements in the product properties.
A failing of microwave sintering is that it generally sinters only one compact at a time, so overall productivity turns out to be poor except for situations involving one of a kind sintering, such as for artists. As microwaves can only penetrate a short distance in materials with a high conductivity and a high permeability, microwave sintering requires the sample to be delivered in powders with a particle size around the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product.
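The penetration distance mentioned here can be roughly estimated with the classical conductor skin-depth formula; both the formula and the sample values below are textbook assumptions rather than anything given in the article:

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(frequency, conductivity, mu_r=1.0):
    """delta = sqrt(2 / (omega * mu * sigma)), the classical conductor skin depth."""
    omega = 2 * math.pi * frequency
    return math.sqrt(2.0 / (omega * MU_0 * mu_r * conductivity))

# 2.45 GHz magnetron frequency, copper-like conductivity (illustrative values)
print(f"delta ~ {skin_depth(2.45e9, 5.8e7) * 1e6:.2f} micrometres")
```

For a highly conductive metal this comes out near a micrometre, which is why the powder particle size must be of the same order as the penetration depth.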
This technique is acknowledged to be quite effective in maintaining fine grains/nano sized grains in sintered bioceramics. Magnesium phosphates and calcium phosphates are the examples which have been processed through the microwave sintering technique.
Densification, vitrification and grain growth
Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. Under certain conditions of chemistry and orientation, some grains may grow rapidly at the expense of their neighbours during sintering. This phenomenon, known as abnormal grain growth (AGG), results in a bimodal grain size distribution that has consequences for the mechanical, dielectric and thermal performance of the sintered material.
For densification to occur at a quick pace it is essential to have (1) an amount of liquid phase that is large in size, (2) a near complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The power behind the densification is derived from the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of to for silicate liquids and in the range of to for a metal such as liquid cobalt.
Densification requires constant capillary pressure where just solution-precipitation material transfer would not produce densification. For further densification, additional particle movement while the particle undergoes grain-growth and grain-shape changes occurs. Shrinkage would result when the liquid slips between particles and increases pressure at points of contact causing the material to move away from the contact areas, forcing particle centers to draw near each other.
The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures proportional to its diameter, and the liquid concentration must also create the required capillary pressure within range, else the process ceases. The vitrification rate is dependent upon the pore size, the viscosity and amount of liquid phase present leading to the viscosity of the overall composition, and the surface tension. Temperature dependence controls the densification process because at higher temperatures viscosity decreases and liquid content increases. Therefore, when changes to the composition and processing are made, they will affect the vitrification process.
Sintering mechanisms
Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the "sintering mechanisms" or "matter transport mechanisms".
In solid state sintering, the six common mechanisms are:
surface diffusion – diffusion of atoms along the surface of a particle
vapor transport – evaporation of atoms which condense on a different surface
lattice diffusion from surface – atoms from surface diffuse through lattice
lattice diffusion from grain boundary – atom from grain boundary diffuses through lattice
grain boundary diffusion – atoms diffuse along grain boundary
plastic deformation – dislocation motion causes flow of matter.
Mechanisms 1–3 above are non-densifying (i.e. do not cause the pores and the overall ceramic body to shrink) but can still increase the area of the bond or "neck" between grains; they take atoms from the surface and rearrange them onto another surface or part of the same surface. Mechanisms 4–6 are densifying – atoms are moved from the bulk material or the grain boundaries to the surface of pores, thereby eliminating porosity and increasing the density of the sample.
Grain growth
A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in GB shifted positions relative to the lattice in the crystals. Due to the shifted positioning of the atoms in the GB they have a higher energy state when compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible.
Striving to minimize its energy leads to the coarsening of the microstructure to reach a metastable state within the specimen. This involves minimizing its GB area and changing its topological structure to minimize its energy. This grain growth can be either normal or abnormal; normal grain growth is characterized by the uniform growth and size of all the grains in the specimen. Abnormal grain growth is when a few grains grow much larger than the remaining majority.
Grain boundary energy/tension
The atoms in the GB are normally in a higher energy state than their equivalent in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension $\sigma_{GB}$. This extra energy that the atoms possess is called the grain boundary energy, $\gamma_{GB}$. The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller, and this change requires energy.
"Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow that:
with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered."[pg 478]
The GB tension can also be thought of as the attractive forces between the atoms at the surface; the tension between these atoms is due to the fact that there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension). When the surface area becomes bigger, the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface, keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument,
$\sigma_{GB} = \gamma_{GB}$
holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area.
For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by
$dG = \sigma_{GB}\, dA = \gamma_{GB}\, dA + A\, d\gamma_{GB}$
which gives
$\sigma_{GB} = \gamma_{GB} + A \frac{d\gamma_{GB}}{dA}$
$\sigma_{GB}$ is normally expressed in units of $\mathrm{N/m}$ while $\gamma_{GB}$ is normally expressed in units of $\mathrm{J/m^2}$, since they are different physical properties.
Mechanical equilibrium
In a two-dimensional isotropic material, the grain boundary tension would be the same for the grains. This would give an angle of 120° at a GB junction where three grains meet. This would give the structure a hexagonal pattern, which is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that, to keep trying to be as close to the equilibrium as possible, grains with fewer sides than six will bend the GB to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with its curvature away from itself. A grain with six boundaries (i.e. a hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. In three dimensions structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are always unstable and thus always grow until prevented by a counterforce.
Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate towards the curvature. The consequence of this is that grains with less than 6 sides will decrease in size while grains with more than 6 sides will increase in size.
Grain growth occurs due to motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, therefore grain boundaries will move toward their center of curvature. Smaller particles have a higher curvature (smaller radius of curvature), and this results in smaller grains losing atoms to larger grains and shrinking. This is a process called Ostwald ripening. Large grains grow at the expense of small grains.
Grain growth in a simple model is found to follow:
$G^m = G_0^m + Kt$
Here G is the final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:
$K = K_0 \exp\!\left(-\frac{Q}{RT}\right)$
Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering.
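A minimal sketch of this growth law; the kinetic constants K0 and Q below are placeholder assumptions chosen only to give plausible magnitudes, not material data from the article:

```python
import math

R_GAS = 8.314  # ideal gas constant, J/(mol K)

def grain_size(G0, t, T, m=3, K0=1e-12, Q=2.5e5):
    """G = (G0**m + K*t)**(1/m) with K = K0 * exp(-Q / (R*T)).

    K0 (m^m/s) and Q (J/mol) are illustrative assumed values.
    """
    K = K0 * math.exp(-Q / (R_GAS * T))
    return (G0 ** m + K * t) ** (1.0 / m)

# 1-micrometre starting grains held for 2 hours at 1600 K (illustrative)
G = grain_size(G0=1e-6, t=7200.0, T=1600.0)
print(f"G ~ {G * 1e6:.1f} micrometres")
```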
Reducing grain growth
Solute ions
If a dopant is added to the material (example: Nd in BaTiO3), the impurity will tend to stick to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to concave surface), the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move, the side opposite the direction of motion will have a higher concentration and therefore a higher chemical potential. This increased chemical potential will act as a backforce to the original chemical potential gradient that is the reason for grain boundary movement. This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth.
Fine second phase particles
If particles of a second phase which are insoluble in the matrix phase are added to the powder in the form of a much finer powder, then this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion, diffusion of atoms from one grain to the other will be hindered by the insoluble particle. This is because it is beneficial for particles to reside in the grain boundaries and they exert a force in the opposite direction compared to grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force to
$F = \pi r \lambda$
where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is
$f = \frac{4}{3}\pi r^3 N$
assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:
$n = 2Nr = \frac{3f}{2\pi r^2}$
Now, assuming that the grains only grow due to the influence of curvature, the driving force of growth is $\frac{2\lambda}{R}$, where (for homogeneous grain structure) R approximates to the mean diameter of the grains. Balancing the drag force per unit area, $nF = \frac{3f\lambda}{2r}$, against this driving force gives the critical diameter that has to be reached before the grains cease to grow:
$\frac{3f\lambda}{2r} = \frac{2\lambda}{R_{crit}}$
This can be reduced to
$R_{crit} = \frac{4r}{3f}$
so the critical diameter of the grains is dependent on the size and volume fraction of the particles at the grain boundaries.
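A one-line calculation of this critical diameter, for assumed particle parameters:

```python
def zener_critical_diameter(r, f):
    """R_crit = 4r / (3f): grain size at which particle drag halts growth."""
    return 4.0 * r / (3.0 * f)

# 50 nm pinning particles at 1% volume fraction (illustrative values)
print(f"R_crit ~ {zener_critical_diameter(50e-9, 0.01) * 1e6:.1f} micrometres")
```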
It has also been shown that small bubbles or cavities can act as inclusions.
More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.
Sintering of catalysts
Sintering is an important cause for loss of catalytic activity, especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process.
Small catalyst particles have the highest possible relative surface area and high reaction temperature, both factors that generally increase the reactivity of a catalyst. However, these factors are also the circumstances under which sintering occurs. Specific materials may also increase the rate of sintering. On the other hand, by alloying catalysts with other materials, sintering can be reduced. Rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed.
For many supported metal catalysts, sintering starts to become a significant effect at temperatures over . Catalysts that operate at higher temperatures, such as a car catalyst, use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica, carbon or alumina.
| Technology | Metallurgy | null |
145023 | https://en.wikipedia.org/wiki/Spray%20drying | Spray drying | Spray drying is a method of forming a dry powder from a liquid or slurry by rapidly drying with a hot gas. This is the preferred method of drying of many thermally-sensitive materials such as foods and pharmaceuticals, or materials which may require extremely consistent, fine particle size. Air is most commonly used as the heated drying medium; however, nitrogen may be used if the liquid is flammable (such as ethanol) or if the product is oxygen-sensitive.
All spray dryers use some type of atomizer or spray nozzle to disperse the liquid or slurry into a controlled drop size spray. The most common of these are rotary disk and single-fluid high pressure swirl nozzles. Atomizer wheels are known to provide broader particle size distribution, but both methods allow for consistent distribution of particle size. Alternatively, for some applications two-fluid or ultrasonic nozzles are used. Depending on the process requirements, drop sizes from 10 to 500 μm can be achieved with the appropriate choices. The most common applications are in the 100 to 200 μm diameter range. The dry powder is often free-flowing.
The most common type of spray dryers are called single effect. There is a single source of drying air at the top of the chamber (see n°4 on the diagram). In most cases the air is blown in the same direction as the sprayed liquid (co-current). A fine powder is produced, but it can have poor flowability and causes a lot of dust. To overcome the dust issues and poor flowability of the powder, a new generation of spray dryers called multiple effect spray dryers have been developed. Instead of drying the liquid in one stage, drying is done through two steps: the first at the top (as per single effect) and the second with an integrated static bed at the bottom of the chamber. The bed provides a humid environment which causes smaller particles to clump, producing more uniform particle sizes, usually within the range of 100 to 300 μm. These powders are free-flowing due to the larger particle size.
The fine powders generated by the first stage drying can be recycled in continuous flow either at the top of the chamber (around the sprayed liquid) or at the bottom, inside the integrated fluidized bed.
The drying of the powder can be finalized on an external vibrating fluidized bed.
The hot drying gas can be passed in as a co-current, the same direction as the sprayed liquid from the atomizer, or counter-current, where the hot air flows against the flow from the atomizer. With co-current flow, particles spend less time in the system and the particle separator (typically a cyclone device). With counter-current flow, particles spend more time in the system, and this arrangement is usually paired with a fluidized bed system. Co-current flow generally allows the system to operate more efficiently.
Alternatives to spray dryers are:
Freeze dryer: a more-expensive batch process for products that degrade in spray drying. Dry product is not free-flowing.
Drum dryer: a less-expensive continuous process for low-value products; creates flakes instead of free-flowing powder.
Pulse combustion dryer: A less-expensive continuous process that can handle higher viscosities and solids loading than a spray dryer, and sometimes yields a freeze-dry quality powder that is free-flowing.
History
The spray drying technique was first described in 1860 with the first spray dryer instrument patented by Samuel Percy in 1872. With time, the spray drying method grew in popularity, at first mainly for milk production in the 1920s and during World War II, when there was a need to reduce the weight and volume of food and other materials. In the second half of the 20th century, commercialization of spray dryers increased, as did the number of spray drying applications.
Spray dryer
A spray dryer takes a liquid stream and separates the solute or suspension as a solid and the solvent into a vapor. The solid is usually collected in a drum or cyclone. The liquid input stream is sprayed through a nozzle into a hot vapor stream and vaporized. Solids form as moisture quickly leaves the droplets. A nozzle is usually used to make the droplets as small as possible, maximizing surface area hence heat transfer and the rate of water vaporization. Droplet sizes can range from 20 to 180 μm depending on the nozzle.
There are two main types of nozzles: high pressure single fluid nozzle (50 to 300 bars) and two-fluid nozzles: one fluid is the liquid to dry and the second is compressed gas (generally air at 1 to 7 bars).
Spray dryers can dry a product very quickly compared to other methods of drying. They also turn a solution (or slurry) into a dried powder in a single step, which simplifies the process and improves profit margins.
In pharmaceutical manufacturing, spray drying is employed to manufacture amorphous solid dispersions by uniformly dispersing active pharmaceutical ingredients into a polymer matrix. This state puts the active compound (drug) in a higher state of energy, which in turn facilitates diffusion of the drug species in the patient's body.
Micro-encapsulation
Spray drying often is used as an encapsulation technique by the food and other industries. A substance to be encapsulated (the load) and an amphipathic carrier (usually some sort of modified starch) are homogenized as a suspension in water (the slurry). The slurry is then fed into a spray drier, usually a tower heated to temperatures above the boiling point of water.
As the slurry enters the tower, it is atomized. Partly because of the high surface tension of water and partly because of the hydrophobic/hydrophilic interactions between the amphipathic carrier, the water, and the load, the atomized slurry forms micelles. The small size of the drops (averaging 100 micrometers in diameter) results in a relatively large surface area which dries quickly. As the water dries, the carrier forms a hardened shell around the load.
Load loss is usually a function of molecular weight. That is, lighter molecules tend to boil off in larger quantities at the processing temperatures. Loss is minimized industrially by spraying into taller towers. A larger volume of air has a lower average humidity as the process proceeds. By the osmosis principle, water will be encouraged by its difference in fugacities in the vapor and liquid phases to leave the micelles and enter the air. Therefore, the same percentage of water can be dried out of the particles at lower temperatures if larger towers are used. Alternatively, the slurry can be sprayed into a partial vacuum. Since the boiling point of a solvent is the temperature at which the vapor pressure of the solvent is equal to the ambient pressure, reducing pressure in the tower has the effect of lowering the boiling point of the solvent.
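As an illustration of that last point, the boiling temperature of water at reduced tower pressure can be estimated with the Antoine equation; the constants below are standard published values for water, used here as an assumption since the article gives no correlation:

```python
import math

# Antoine constants for water, P in mmHg and T in degrees Celsius
# (standard published values, valid roughly 1-100 degC; an assumption here)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mmhg):
    """Temperature at which water's vapor pressure equals the ambient pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"Boiling point at 760 mmHg: {boiling_point_celsius(760):.1f} degC")  # ~100 degC
print(f"Boiling point at 200 mmHg: {boiling_point_celsius(200):.1f} degC")  # ~66 degC
```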
The application of the spray drying encapsulation technique is to prepare "dehydrated" powders of substances which do not themselves contain any water to dehydrate. For example, instant drink mixes are spray-dried blends of the various chemicals which make up the beverage. The technique was once used to remove water from food products; one example is the preparation of dehydrated milk. Because the milk was not being encapsulated and because spray drying causes thermal degradation, milk dehydration and similar processes have been replaced by other dehydration techniques. Skim milk powders are still widely produced using spray drying technology, typically at high solids concentration for maximum drying efficiency. Thermal degradation of products can be overcome by using lower operating temperatures and larger chamber sizes for increased residence times.
Recent research is now suggesting that the use of spray-drying techniques may be an alternative method for crystallization of amorphous powders during the drying process since the temperature effects on the amorphous powders may be significant depending on drying residence times.
Designing particle shape and size
The spray drying process contains a variety of input parameters that can alter the shape and size of yielded particles.
Common input parameters:
Solution Concentration
Drying Gas Flow
Inlet Temperature
Spraying Gas Flow
Feed Rate
From these input parameters comes a series of pathways a particle can take towards its final shape and size. Certain parameters, like spraying gas flow, feed rate, and solution concentration, heavily influence the final particle size, whereas the inlet temperature plays a significant role in the final shape of the particle. Particle size correlates strongly with the original size of the solution droplet from the atomizer, so the most effective way to control particle size is to adjust the saturation of the solution, making the initial droplet larger or smaller. Once the initial droplet enters the drying chamber, it can proceed to crust formation, or no particle will be formed. From the crust formation, the temperature of the drying process and the duration of the particle in the drying process can lead the particle toward a dry shell or a deformed particle. The dry shell can proceed into a solid particle or a shattered particle. The crust formation can also forgo the dry shell or deformed particle if the drying conditions are not correct, and instead undergo internal bubble nucleation with another series of pathways.
The current understanding of the drying conditions varies between different spray drying configurations and solution contents, but more research is being completed into determining what drives each particle shape pathway, as future applications in pharmaceutical and industrial areas require better control over the specific particle shapes and sizes of their products.
Spray drying applications
Food: milk powder, coffee, tea, eggs, cereal, spices, flavorings, blood, starch and starch derivatives, vitamins, enzymes, stevia, nutraceuticals, colourings, animal feed, etc.
Pharmaceutical: antibiotics, medical ingredients, additives.
Industrial: paint pigments, ceramic materials, catalyst supports, microalgae.
| Physical sciences | Phase separations | Chemistry |
145025 | https://en.wikipedia.org/wiki/Honey%20possum | Honey possum | The honey possum or noolbenger (Tarsipes rostratus), is a tiny species of marsupial that feeds on the nectar and pollen of a diverse range of flowering plants. Found only in southwest Australia, it is an important pollinator for such plants as Banksia attenuata, Banksia coccinea and Adenanthos cuneatus.
Taxonomy
The first description of this diprotodont species was published by Paul Gervais and Jules Verreaux on 3 March 1842, referring to a specimen collected by Verreaux. The lectotype nominated for the species, held in the collection of the National Museum of Natural History, France, was collected at the Swan River Colony. A description of a second species, Tarsipes spenserae, published five days later by John Edward Gray, was thought (following T. S. Palmer in 1904) to have been published earlier, and it displaced the usage of T. rostratus until the 1970s. A review by Mahoney in 1984 again reduced T. spenserae to a synonym of the species, as it did the emendation of its spelling to spencerae cited by William Ride (1970) and others.
Gray's specimen, the skin of a male also collected at King George Sound, was provided by George Grey to the British Museum of Natural History.
The author was aware of the description prepared by Gervais, who after examining his specimen suggested it represented a second species.
The population is the only known species in the genus Tarsipes, and is assigned to the monotypic diprotodont family Tarsipedidae. The name of the genus means "tarsier-foot", given for a resemblance to the tarsier's simian-like feet and toes noted in the earliest descriptions.
The poorly resolved phylogeny of ancestral marsupial relationships has seen this taxon, unique in many characteristics, placed in a variety of higher classifications, including separation as a superfamily Tarsipedoidea, later abandoned in favour of grouping the South American and Australian marsupials as a monophyletic clade that disregards the modern geographic remoteness of these continents' faunas.
The relationships of the monotypic family within the order Diprotodontia, as part of a petauroid alliance, may be summarised as:
Superfamily Petauroidea
Family Pseudocheiridae ringtail possums
Family Petauridae gliders and trioks
Family Tarsipedidae
Genus Tarsipes
Tarsipes rostratus
Family Acrobatidae gliders
The closest relationship to other taxa was theorised to be with Dromiciops gliroides, a smaller marsupial that occurs in South America and is the only extant member of a genus represented in the Gondwanan fossil record. Early phylogenetic analysis supported this, but it is no longer believed to be true.
At Grey's suggestion, the maiden name of his wife, Eliza Lucy Spencer, was assigned to the epithet. Eliza or Elizabeth was the daughter of Richard Spencer, the government resident at King George Sound, a connection that prompted the later, unaccepted spelling correction by Ride in 1970.
The common names include those cited or coined by Gilbert, Gould and Ellis Troughton: honey and long-snouted phalanger; tait and noolbenger from the local languages; and the descriptive brown barred mouse. An ethnographic survey of Noongar words recorded for the species found three names were in use, and proposed that these be regularised for spelling and pronunciation as ngoolboongoor (ngool'bong'oor), djebin (dje'bin) and dat. The term honey mouse was recorded by Troughton in 1922 as commonly used in the districts around King George Sound.
Description
A tiny marsupial that climbs woody plants to feed on the pollen and nectar, the honey, of banksia and eucalypts. They resemble a small mouse or the arboreal possums of Australia, and are readily distinguished by the exceptionally long muzzle and three brown stripes from the head to the rump.
The pelage is a cream colour below that merges to rufous at the flanks; the overall coloration of the upperparts is a mix of brown and grey hairs. A dark brown central stripe extends from the rump to a mid-point between the ears, and is more distinct than the two paler adjacent stripes. The length of the tail is from , exceeding the combined body and head length of , and has a prehensile ability that assists in climbing. The recorded weight range for the species is .
The teeth are fewer in number, and mostly much smaller, than is typical for marsupials, with the molars reduced to tiny cones. The dental formula I2/1 C1/0 P1/0 M3/3 gives a total of no more than 22 teeth.
The morphology of the elongated snout's jaws and dentition presents a number of unique characteristics suited to specialisation as a palynivore and nectarivore.
The tongue of Tarsipes is extensible and its end is covered in brush-like papillae; the redundant action of the modified or reduced teeth is replaced by the interaction of the tongue, the keel-like lower incisors and a fine combing surface at the palate.
The testes are very large, noted as proportionally the greatest for a mammal at 4.6 percent of the body weight. The sperm is also of exceptional length; its tail (flagellum) measurement of 360 micrometres is also cited as the longest known.
Specialised characters of T. rostratus include visual acuity for detecting the bright yellow inflorescence of Banksia attenuata.
They have a typical lifespan between one and two years.
They have trichromatic vision, similar to some other marsupials as well as primates, but unlike most mammals, which have dichromatic vision.
Behaviour
The honey possum is mainly nocturnal, but will come out to feed during daylight in cooler weather. Generally, though, it spends the days asleep in a shelter of convenience: a rock cranny, a tree cavity, the hollow inside of a grass tree, or an abandoned bird nest. When food is scarce, or in cold weather, it becomes torpid to conserve energy. In comparison to other marsupials of a similar size, T. rostratus has a high body temperature and metabolic rate that is termed euthermic. Lacking fat reserves, but able to reduce their body temperature, exposure to cooler temperatures or lack of food induces one of two states of torpor. One response is a shallow and brief period, similar to torpid dasyurids, where the body temperature is above 10–15 degrees Celsius, and another deeper state like the burramyids that lasts for multiple days and reduces their temperature to less than .
The species is able to climb with the assistance of the prehensile tail and an opposable first toe at the long hindfoot that is able to grip like a monkey's paw. The bristle-like papillae at the upper surface of the tongue increase in length toward the tip, and this is used to gather the pollen and nectar by rapidly wiping it into the inflorescence.
Both its front and back feet are adept at grasping, enabling it to climb trees with ease, as well as traverse the undergrowth at speed. The honey possum can also use its prehensile tail (which is longer than its head and body combined) to grip, much like another arm.
Radio-tracking has shown that males in particular are quite mobile, moving distances of up to 0.5 km in a night and using areas averaging 0.8 hectares. Males seem to range more widely, and some evidence indicates even greater distances covered: pollen found on an individual in one study area came from a banksia not found within three kilometres of the collection site.
The plant species that provide nectar and pollen to T. rostratus are primarily genera of Proteaceae, Banksia and Adenanthos, and Myrtaceae, eucalypts and Agonis, and those of Epacridaceae, shrubby heath plants, although it is also known to visit the inflorescence of Anigozanthos, the kangaroo paws, and the tall spikes of Xanthorrhoea, the grass-trees.
Study of the amounts of nectar and pollen consumed has concluded that a nine-gram individual requires around seven millilitres of nectar and one gram of pollen each day to maintain an energetic balance. This amount of pollen provides sufficient nitrogen for the species' high-activity metabolism, and the additional nitrogen requirements of females during lactation are available in the pollen of Banksia species.
The ingestion of excess water when feeding at wet flowers, a frequent circumstance in the high-rainfall regions of its range, is accommodated by kidneys that can process up to two times the animal's body weight in water.
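As a rough arithmetic check using only the figures stated above, the reported daily nectar intake sits well within the stated renal capacity. A minimal sketch, not from the source; it assumes the stated water-processing capacity is a daily figure and that 1 mL of nectar has a mass near 1 g:

```python
# Minimal arithmetic sketch from the figures above: a ~9 g honey possum takes in
# ~7 mL of nectar per day, while its kidneys can process up to twice its body
# weight in water (assumed here to be per day; 1 mL of water ~ 1 g).

body_mass_g = 9.0
nectar_ml_per_day = 7.0
renal_capacity_ml = 2.0 * body_mass_g  # = 18 mL

print(f"daily nectar intake: {nectar_ml_per_day} mL")
print(f"renal capacity:      {renal_capacity_ml} mL")
print("intake within capacity:", nectar_ml_per_day <= renal_capacity_ml)
```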
Pollen grains are digested over the course of six hours, extracting almost all the nutrients they contain.
Reproduction
Breeding depends on the availability of nectar and can occur at any time of the year. Females are promiscuous, mating with a large number of males, and may simultaneously carry embryos from different fathers. Sperm competition has led to the males having very large testes relative to their body weight; at a relative mass of 4.2% they are amongst the largest known for a mammal. Their sperm is the largest in the mammal world, measuring 365 micrometres.
The development of blastocysts corresponds to day length, induced by a shorter photoperiod, but other reproductive processes are prompted by other factors, probably food availability. Gestation lasts for 28 days, with two to four young being produced. At birth, they are the smallest of any mammal, weighing 0.005 g. Nurturing and development within the pouch lasts for about 60 days, after which they emerge covered in fur and with open eyes, weighing some . As soon as they emerge, they are often left in a sheltered area (such as a hollow in a tree) while the mother searches for food for herself, but within days they learn to grab hold of the mother's back and travel with her. However, their weight soon becomes too much to carry; they stop nursing at around 11 weeks and start to make their own homes shortly thereafter. As is common in marsupials, a second litter is often born when the pouch is vacated by the first, the fertilised embryos having been held back from developing until then.
Outside the breeding season, honey possums mostly keep to separate territories of about one hectare (2.5 acres). They live in small groups of no more than 10, and only rarely engage in combat with one another. During the breeding season, females move into smaller areas with their young, which they defend fiercely, especially from males.
Distribution
Although restricted to a fairly small range in the southwest of Western Australia, it is locally common and does not seem to be threatened with extinction so long as its habitat of heath, shrubland, and woodland remains intact and diverse.
Records of locations held at the Western Australian Museum indicate they are more common in regions of high Proteaceae diversity, areas such as banksia woodlands where species can be found flowering at all times of the year.
Ecology
Tarsipes rostratus is a keystone species in the ecology of the coastal sands of Southwest Australia, the complex plant assemblages known as kwongan, and is likely to be the primary pollinator of woody shrubs such as banksias and Adenanthos. Its feeding activity involves visits to many individual plants, and the head carries a small pollen load that it can convey more effectively than the birds that visit the same flowers. The favoured species Banksia attenuata appears to depend on this animal as a pollination vector, and both species have evolved to suit their mutualistic interactions.
The effect of fire frequency on the population was evaluated in a study over a twenty-three-year period, giving indications of the species' resilience to the first fire in the area and a subsequent burn six years later. Increased frequency and intensity of fire, due to global warming and prescribed burns, can adversely affect the suitability of the local habitat.
The species is susceptible to the impact of Phytophthora cinnamomi, a soil-borne, fungus-like organism that is associated with forest dieback in the eucalypt forests and banksia woodlands of the region. The flowers of the nine plant species most favoured by T. rostratus provide food throughout the year, and five of these are vulnerable to the withering condition caused by the P. cinnamomi pathogen.
It is the only entirely nectarivorous mammal which is not a bat; it has a long, pointed snout and a long, protrusile tongue with a brush tip that gathers pollen and nectar, like a honeyeater or a hummingbird. Floral diversity is particularly important for the honey possum, as it cannot survive without a year-round supply of nectar and, unlike nectarivorous birds, it cannot easily travel long distances in search of fresh supplies.
Natural history
An animal well known to the Noongar people of the southwest and incorporated into their culture, its indigenous name ngoolboongoor has been proposed for modern usage as a common name, written as noolbenger. The honey possum remains an iconic animal to the people of the region, and was selected by Amok Island to feature in a large public art project on silos in the wheatbelt. The first report of the species was compiled by John Gilbert, the careful and thorough field collector commissioned by Gould to travel to the new colony at the Swan River on the west coast of Australia. Gilbert obtained access to Noongar informants who provided him with the names and details of the animal's habits and, with some difficulty, four specimens for scientific examination. Both he and Gould recognised the unique characters of the unknown species.
The next major field study was undertaken by the mammalogist Ellis Troughton at the suggestion of H. L. White, who provided an introduction to the professional collector F. Lawson Whitlock. On Troughton's visit to King George Sound, Whitlock guided him to the source of a specimen he had sent to White: David Morgan, a man of the same region, who accommodated the biologist while he searched extensively, and unsuccessfully, for further specimens. Troughton was eventually provided with a series of a dozen specimens as he was preparing to leave Albany port, a collection assembled over many years by the cats of Hugh Leishman at Nannarup.
The collection of the Australian Museum was further increased when Morgan continued to forward specimens to Troughton, firstly two pregnant females that had also been killed by a cat, and then a report of living animals he was able to maintain in captivity for five to six weeks. Morgan reported that his cat would bring in a mangled specimen on a daily basis for a period of time, and that he had observed the animals seeking food in a flowering shrub at dusk, but he thought their local appearance was seasonally related, with the animals absent outside the breeding season.
Closer study of the reproductive processes was made possible by the capture, extended observation and dissection of the species in university programs, with the first success in captivity beginning in 1974. Examination of the reproductive strategies has allowed comparison with the other modern marsupial families, in particular regarding the evolution of embryonic diapause.
The population structure and feeding habits of T. rostratus were poorly understood until a biological study at the Fitzgerald River National Park was completed in 1984.
A diet consisting entirely of nectar is unusual for a terrestrial vertebrate; such specialisation is otherwise mostly confined to birds and flying mammals. Specialisation to the niche provided by the success of the plant families Proteaceae and Myrtaceae began around forty million years ago.
| Biology and health sciences | Diprotodontia | Animals |
145040 | https://en.wikipedia.org/wiki/Conservation%20of%20mass | Conservation of mass | In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter the mass of the system must remain constant over time.
The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products.
The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.
In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems, conservation of mass alone is shown not to hold, as is the case in nuclear reactions and particle–antiparticle annihilation in particle physics.
Mass is also not generally conserved in open systems. Such is the case when any energy or matter is allowed into, or out of, the system. However, unless radioactivity or nuclear reactions are involved, the amount of energy entering or escaping such systems (as heat, mechanical work, or electromagnetic radiation) is usually too small to be measured as a change in the mass of the system.
For systems that include large gravitational fields, general relativity has to be taken into account; thus mass–energy conservation becomes a more complex concept, subject to different definitions, and neither mass nor energy is as strictly and simply conserved as is the case in special relativity.
Formulation and examples
The law of conservation of mass can only be formulated in classical mechanics, in which the energy scales associated with an isolated system are much smaller than $mc^2$, where $m$ is the mass of a typical object in the system, measured in the frame of reference where the object is at rest, and $c$ is the speed of light.
The law can be formulated mathematically in the fields of fluid mechanics and continuum mechanics, where the conservation of mass is usually expressed using the continuity equation, given in differential form as
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,$$
where $\rho$ is the density (mass per unit volume), $t$ is the time, $\nabla\cdot$ is the divergence, and $\mathbf{v}$ is the flow velocity field.
The interpretation of the continuity equation for mass is the following: for a given closed surface in the system, the change, over any time interval, of the mass enclosed by the surface is equal to the mass that traverses the surface during that time interval: positive if the matter goes in and negative if the matter goes out. For the whole isolated system, this condition implies that the total mass $M$, the sum of the masses of all components in the system, does not change over time, i.e.
$$\frac{dM}{dt} = \frac{d}{dt}\int \rho \, dV = 0,$$
where $dV$ is the differential that defines the integral over the whole volume of the system.
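As a numerical illustration of the statement above, here is a minimal sketch, not from the source and with an assumed constant velocity field, that advances the discretized 1-D continuity equation on a periodic grid and checks that the total mass remains constant:

```python
import numpy as np

# Minimal sketch: 1-D continuity equation d(rho)/dt + d(rho*u)/dx = 0 advanced
# with a first-order upwind scheme on a periodic grid. With periodic boundaries
# no mass crosses the domain edges, so total mass is conserved (to round-off).

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = 0.5                          # assumed constant advection velocity (u > 0)
dt = 0.5 * dx / u                # CFL-stable time step

rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.05) ** 2)   # initial density bump
mass0 = rho.sum() * dx

for _ in range(500):
    flux = rho * u
    rho = rho - (dt / dx) * (flux - np.roll(flux, 1))  # upwind difference

print("initial mass:", mass0)
print("final mass:  ", rho.sum() * dx)  # matches mass0 to machine precision
```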
The continuity equation for the mass is part of the Euler equations of fluid dynamics. Many other convection–diffusion equations describe the conservation and flow of mass and matter in a given system.
In chemistry, the calculation of the amounts of reactants and products in a chemical reaction, or stoichiometry, is founded on the principle of conservation of mass. The principle implies that during a chemical reaction the total mass of the reactants is equal to the total mass of the products. For example, in the following reaction
$$\mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O},$$
one molecule of methane ($\mathrm{CH_4}$) and two oxygen molecules ($\mathrm{O_2}$) are converted into one molecule of carbon dioxide ($\mathrm{CO_2}$) and two of water ($\mathrm{H_2O}$). The number of molecules resulting from the reaction can be derived from the principle of conservation of mass: initially four hydrogen atoms, four oxygen atoms and one carbon atom are present (as in the final state), so the number of water molecules produced must be exactly two per molecule of carbon dioxide produced.
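The atom bookkeeping in the example above is mechanical enough to verify in code. A minimal sketch, not from the source, with the formulas written by hand as element-count dictionaries:

```python
from collections import Counter

# Minimal sketch: verify that atoms (and hence, classically, mass) balance in
# CH4 + 2 O2 -> CO2 + 2 H2O. Each formula is a hand-written element count.

CH4 = Counter({"C": 1, "H": 4})
O2  = Counter({"O": 2})
CO2 = Counter({"C": 1, "O": 2})
H2O = Counter({"H": 2, "O": 1})

def total_atoms(side):
    """Sum element counts over a list of (stoichiometric coefficient, formula) pairs."""
    atoms = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            atoms[element] += coeff * n
    return atoms

reactants = total_atoms([(1, CH4), (2, O2)])
products  = total_atoms([(1, CO2), (2, H2O)])
print(reactants == products)  # True: C1 H4 O4 on both sides
```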
Many engineering problems are solved by following the mass distribution of a given system over time; this methodology is known as mass balance.
History
As early as 520 BCE, Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, stated that the universe and its constituents such as matter cannot be destroyed or created. The Jain text Tattvarthasutra (2nd century CE) states that a substance is permanent, but its modes are characterised by creation and destruction.
An important idea in ancient Greek philosophy was that "Nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. An explicit statement of this, along with the further principle that nothing can pass away into nothing, is found in Empedocles (5th century BCE): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed."
A further principle of conservation was stated by Epicurus around the 3rd century BCE, who wrote in describing the nature of the Universe that "the totality of things was always such as it is now, and always will be".
Discoveries in chemistry
By the 18th century the principle of conservation of mass during chemical reactions was widely used and was an important assumption during experiments, even before a definition was widely established, though an expression of the law can be dated back to Hero of Alexandria's time, as can be seen in the works of Joseph Black, Henry Cavendish, and Jean Rey. One of the first to outline the principle was Mikhail Lomonosov in 1756. He may have demonstrated it by experiments and certainly had discussed the principle in 1748 in correspondence with Leonhard Euler, though his claim on the subject is sometimes challenged. According to the Soviet physicist Yakov Dorfman: "The universal law was formulated by Lomonosov on the basis of general philosophical materialistic considerations; it was never questioned or tested by him, but on the contrary, served him as a solid starting position in all research throughout his life." A more refined series of experiments was later carried out by Antoine Lavoisier, who expressed his conclusion in 1773 and popularized the principle of conservation of mass. The demonstrations of the principle disproved the then-popular phlogiston theory, which held that mass could be gained or lost in combustion and heat processes.
The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases. For example, a piece of wood weighs less after burning; this seemed to suggest that some of its mass disappears, or is transformed or lost. Careful experiments were performed in which chemical reactions such as rusting were allowed to take place in sealed glass ampoules; it was found that the chemical reaction did not change the weight of the sealed container and its contents. Weighing of gases using scales was not possible until the invention of the vacuum pump in the 17th century.
Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Once early chemists realized that chemical substances never disappeared but were only transformed into other substances with the same weight, these scientists could for the first time embark on quantitative studies of the transformations of substances. The idea of mass conservation plus a surmise that certain "elemental substances" also could not be transformed into others by chemical reactions, in turn led to an understanding of chemical elements, as well as the idea that all chemical processes and transformations (such as burning and metabolic reactions) are reactions between invariant amounts or weights of these chemical elements.
Following the pioneering work of Lavoisier, the exhaustive experiments of Jean Stas supported the consistency of this law in chemical reactions, even though they were carried out with other intentions. His research indicated that in certain reactions the loss or gain could not have been more than 2 to 4 parts in 100,000. The difference in the accuracy aimed at and attained by Lavoisier on the one hand, and by Edward W. Morley and Stas on the other, is enormous.
Modern physics
The law of conservation of mass was challenged with the advent of special relativity. In one of the Annus Mirabilis papers of Albert Einstein in 1905, he suggested an equivalence between mass and energy. This theory implied several assertions, such as the idea that the internal energy of a system could contribute to the mass of the whole system, or that mass could be converted into electromagnetic radiation. However, as Max Planck pointed out, a change in mass as a result of extraction or addition of chemical energy, as predicted by Einstein's theory, is so small that it could not be measured with the available instruments and could not be presented as a test of special relativity. Einstein speculated that the energies associated with newly discovered radioactivity were significant enough, compared with the mass of the systems producing them, to enable their change of mass to be measured, once the energy of the reaction had been removed from the system. This later indeed proved to be possible, although it was the first artificial nuclear transmutation reaction of 1932, demonstrated by Cockcroft and Walton, that provided the first successful test of Einstein's theory regarding mass loss with energy gain.
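Planck's point about the immeasurably small mass change can be made concrete with a back-of-the-envelope calculation. A minimal sketch, not from the source, taking as an assumed input the commonly quoted heat of combustion of methane, roughly 890 kJ per mole:

```python
# Minimal sketch: the mass equivalent of a chemical reaction energy via E = m c^2.
# Assumed input (not from the source): burning one mole of methane (~16 g)
# releases roughly 8.9e5 J.

c = 2.998e8              # speed of light, m/s
E = 8.9e5                # J released per mole of methane (approximate)
methane_mole_kg = 0.016  # kg of methane per mole

delta_m = E / c**2
print(f"mass equivalent: {delta_m:.2e} kg")  # ~1e-11 kg per mole burned
# Fractional change relative to the methane alone (smaller still if the
# oxygen consumed is counted as well):
print(f"fractional change: {delta_m / methane_mole_kg:.1e}")  # ~6e-10
```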
The law of conservation of mass and the analogous law of conservation of energy were finally generalized and unified into the principle of mass–energy equivalence, described by Albert Einstein's equation $E = mc^2$. Special relativity also redefines the concepts of mass and energy, which can be used interchangeably and are defined relative to the frame of reference. Several quantities had to be defined for consistency, such as the rest mass of a particle (mass in the rest frame of the particle) and the relativistic mass (in another frame). The latter term is now used less frequently.
In general relativity, mass and energy are not globally conserved, and their definition is more complicated.
| Physical sciences | Chemistry: General | null |
145066 | https://en.wikipedia.org/wiki/Extractive%20metallurgy | Extractive metallurgy | Extractive metallurgy is a branch of metallurgical engineering wherein the processes and methods of extracting metals from their natural mineral deposits are studied. The field is a materials science, covering all aspects of the types of ore, washing, concentration, separation, chemical processes and extraction of pure metal and their alloying to suit various applications, sometimes for direct use as a finished product, but more often in a form that requires further working to achieve properties suited to the applications.
The fields of ferrous and non-ferrous extractive metallurgy have specialties that are generically grouped into the categories of mineral processing, hydrometallurgy, pyrometallurgy, and electrometallurgy, based on the process adopted to extract the metal. Several processes may be used to extract the same metal, depending on occurrence and chemical requirements.
Mineral processing
Mineral processing begins with beneficiation, consisting of initially breaking down the ore to required sizes, depending on the concentration process to be followed, by crushing, grinding, sieving etc. Thereafter, the ore is physically separated from any unwanted impurity, depending on the form of occurrence and/or the further process involved. Separation processes take advantage of physical properties of the materials, including density, particle size and shape, electrical and magnetic properties, and surface properties. Major physical and chemical methods include magnetic separation, froth flotation and leaching, whereby the impurities and unwanted materials are removed from the ore and the base ore of the metal is concentrated, meaning the percentage of metal in the ore is increased. This concentrate is then either processed to remove moisture, used as-is for extraction of the metal, or made into shapes and forms that can undergo further processing with ease of handling.
Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
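As a side note on how the degree of concentration is quantified in practice, below is a minimal sketch, not from the source, of the standard two-product formula, which derives the concentrate yield and metal recovery of a single separation step from assayed grades:

```python
# Minimal sketch: the standard two-product formula for a single separation step.
# Given metal grades (mass fractions) of feed f, concentrate c and tailings t,
# a mass balance yields the fraction of feed reporting to concentrate and the
# fraction of the contained metal recovered there.

def two_product(f: float, c: float, t: float):
    yield_frac = (f - t) / (c - t)       # concentrate mass / feed mass
    recovery = yield_frac * c / f        # metal in concentrate / metal in feed
    return yield_frac, recovery

# Hypothetical assays: 2% Cu feed, 25% Cu concentrate, 0.2% Cu tailings.
y, r = two_product(0.02, 0.25, 0.002)
print(f"mass yield: {y:.1%}, metal recovery: {r:.1%}")  # ~7.3% and ~90.7%
```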
Hydrometallurgy
Hydrometallurgy is concerned with processes involving aqueous solutions to extract metals from ores. The first step in the hydrometallurgical process is leaching, which involves dissolution of the valuable metals into the aqueous solution and/or a suitable solvent. After the solution is separated from the ore solids, the extract is often subjected to various processes of purification and concentration before the valuable metal is recovered either in its metallic state or as a chemical compound. These may include precipitation, distillation, adsorption, and solvent extraction. The final recovery step may involve precipitation, cementation, or an electrometallurgical process. Sometimes, hydrometallurgical processes may be carried out directly on the ore material without any pretreatment steps. More often, the ore must be pretreated by various mineral processing steps, and sometimes by pyrometallurgical processes.
Pyrometallurgy
Pyrometallurgy involves high temperature processes where chemical reactions take place among gases, solids, and molten materials. Solids containing valuable metals are treated to form intermediate compounds for further processing or converted into their elemental or metallic state. Pyrometallurgical processes that involve gases and solids are typified by calcining and roasting operations. Processes that produce molten products are collectively referred to as smelting operations. The energy required to sustain the high temperature pyrometallurgical processes may derive from the exothermic nature of the chemical reactions taking place. Typically, these reactions are oxidation, e.g. of sulfide to sulfur dioxide (SO2). Often, however, energy must be added to the process by combustion of fuel or, in the case of some smelting processes, by the direct application of electrical energy.
Ellingham diagrams are a useful way of analysing the possible reactions, and so predicting their outcome.
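In outline, the criterion such a diagram encodes is a comparison of standard Gibbs free energies of oxide formation. The following is a generic statement of that criterion (standard thermodynamics, not specific to any process named in this article):

```latex
% Temperature dependence of the standard Gibbs free energy of oxide formation:
\[
  \Delta G^{\circ}(T) = \Delta H^{\circ} - T\,\Delta S^{\circ}
\]
% A reducing agent R can strip oxygen from the metal oxide MO at temperature T when
\[
  \Delta G^{\circ}_{\mathrm{R \to R\text{-}oxide}}(T) \;<\; \Delta G^{\circ}_{\mathrm{M \to MO}}(T),
\]
% i.e. when R's oxide-formation line lies below M's on the Ellingham diagram.
```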
Electrometallurgy
Electrometallurgy involves metallurgical processes that take place in some form of electrolytic cell. The most common types of electrometallurgical processes are electrowinning and electro-refining. Electrowinning is an electrolysis process used to recover metals in aqueous solution, usually as the result of an ore having undergone one or more hydrometallurgical processes. The metal of interest is plated onto the cathode, while the anode is an inert electrical conductor. Electro-refining is used to dissolve an impure metallic anode (typically from a smelting process) and produce a high purity cathode. Fused salt electrolysis is another electrometallurgical process whereby the valuable metal has been dissolved into a molten salt which acts as the electrolyte, and the valuable metal collects on the cathode of the cell. The fused salt electrolysis process is conducted at temperatures sufficient to keep both the electrolyte and the metal being produced in the molten state. The scope of electrometallurgy has significant overlap with the areas of hydrometallurgy and (in the case of fused salt electrolysis) pyrometallurgy. Additionally, electrochemical phenomena play a considerable role in many mineral processing and hydrometallurgical processes.
Ionometallurgy
Mineral processing and extraction of metals are very energy-intensive processes that also produce large volumes of solid residues and wastewater, which in turn require energy to treat and dispose of. Moreover, as the demand for metals increases, the metallurgical industry must rely on raw materials with lower metal contents, both primary (e.g., mineral ores) and secondary (e.g., slags, tailings, municipal waste). Consequently, mining activities and waste recycling must evolve towards more selective, efficient and environmentally friendly mineral- and metal-processing routes.
Mineral processing operations are needed first to concentrate the mineral phases of interest and reject the unwanted material physically or chemically associated with a defined raw material. The process, however, demands about 30 GJ/tonne of metal, which accounts for about 29% of the total energy spent on mining in the USA. Meanwhile, pyrometallurgy is a significant producer of greenhouse gas emissions and harmful flue dust. Hydrometallurgy entails the consumption of large volumes of lixiviants such as H2SO4, HCl, KCN and NaCN, which have poor selectivity. Moreover, despite the environmental concerns and the use restrictions imposed by some countries, cyanidation is still considered the prime process technology for recovering gold from ores. Mercury is also used by artisanal miners in less economically developed countries to concentrate gold and silver from minerals, despite its obvious toxicity. Bio-hydrometallurgy makes use of living organisms, such as bacteria and fungi, and although this method demands only atmospheric inputs, it requires low solid-to-liquid ratios and long contact times, which significantly reduces space-time yields.
Ionometallurgy makes use of non-aqueous ionic solvents such as ionic liquids (ILs) and deep eutectic solvents (DESs), which allows the development of closed-loop flow sheets that effectively recover metals by, for instance, integrating the metallurgical unit operations of leaching and electrowinning. It allows metals to be processed at moderate temperatures in a non-aqueous environment, permitting control of metal speciation, tolerating impurities and at the same time exhibiting suitable solubilities and current efficiencies. This simplifies conventional processing routes and allows a substantial reduction in the size of a metal-processing plant.
Metal extraction with ionic fluids
DESs are fluids generally composed of two or three cheap and safe components that are capable of self-association, often through hydrogen-bond interactions, to form eutectic mixtures with a melting point lower than that of each individual component. DESs are generally liquid at temperatures below 100 °C, and they exhibit physico-chemical properties similar to traditional ILs while being much cheaper and more environmentally friendly. Most are mixtures of choline chloride (ChCl) and a hydrogen-bond donor (e.g., urea, ethylene glycol, malonic acid), or mixtures of choline chloride with a hydrated metal salt. Other choline salts (e.g. acetate, citrate, nitrate) are much more expensive or need to be synthesised, and the DESs formulated from these anions are typically much more viscous and can have lower conductivities than those based on choline chloride. This results in lower plating rates and poorer throwing power, and for this reason chloride-based DES systems are still favoured. For instance, Reline (a 1:2 mixture of choline chloride and urea) has been used to selectively recover Zn and Pb from a mixed metal oxide matrix. Similarly, Ethaline (a 1:2 mixture of choline chloride and ethylene glycol) facilitates metal dissolution in the electropolishing of steels. DESs have also shown promising results in recovering metals from complex mixtures such as Cu/Zn and Ga/As, and precious metals from minerals. It has also been demonstrated that metals can be recovered from complex mixtures by electrocatalysis, using a combination of DESs as lixiviants and an oxidising agent, while metal ions can be simultaneously separated from the solution by electrowinning.
Recovery of precious metals by ionometallurgy
Precious metals are rare, naturally occurring metallic chemical elements of high economic value. Chemically, the precious metals tend to be less reactive than most elements. They include gold and silver, but also the so-called platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum (see precious metals). Extraction of these metals from their corresponding hosting minerals would typically require pyrometallurgy (e.g., roasting), hydrometallurgy (cyanidation), or both as processing routes.
Early studies have demonstrated that the gold dissolution rate in Ethaline compares very favourably with that of the cyanidation method, and is further enhanced by the addition of iodine as an oxidising agent. In an industrial process the iodine has the potential to be employed as an electrocatalyst, whereby it is continuously recovered in situ from the reduced iodide by electrochemical oxidation at the anode of an electrochemical cell. Dissolved metals can be selectively deposited at the cathode by adjusting the electrode potential. The method also allows better selectivity, as part of the gangue (e.g., pyrite) tends to be dissolved more slowly.
Sperrylite (PtAs2) and moncheite (PtTe2), which are typically the more abundant platinum minerals in many orthomagmatic deposits, do not react under the same conditions in Ethaline: disulfide (pyrite), diarsenide (sperrylite) and ditelluride (calaverite and moncheite) minerals are particularly resistant to iodine oxidation. The reaction mechanism by which platinum minerals dissolve is still under investigation.
Metal recovery from sulfide minerals with ionometallurgy
Metal sulfides (e.g., pyrite FeS2, arsenopyrite FeAsS, chalcopyrite CuFeS2) are normally processed by chemical oxidation, either in aqueous media or at high temperatures. In fact, most base metals, e.g., aluminium and chromium, must be (electro)chemically reduced at high temperatures, so the process entails a high energy demand and sometimes generates large volumes of aqueous waste. In aqueous media chalcopyrite, for instance, is more difficult to dissolve chemically than covellite and chalcocite, owing to surface effects (formation of polysulfide species). The presence of Cl− ions has been suggested to alter the morphology of any sulfide surface formed, allowing the sulfide mineral to leach more easily by preventing passivation. DESs provide a high Cl− ion concentration and a low water content, while reducing the need for high additional salt or acid concentrations and circumventing most oxide chemistry. Thus, the electrodissolution of sulfide minerals has shown promising results in DES media in the absence of passivation layers, releasing metal ions into solution from which they can be recovered.
During extraction of copper from copper sulfide minerals with Ethaline, chalcocite (Cu2S) and covellite (CuS) produce a yellow solution, indicating that [CuCl4]2− complexes are formed. Meanwhile, in the solution formed from chalcopyrite, Cu2+ and Cu+ species co-exist owing to the generation of reducing Fe2+ species at the cathode. The best selective recovery of copper (>97%) from chalcopyrite has been obtained with a mixed DES of 20 wt.% ChCl-oxalic acid and 80 wt.% Ethaline.
Metal recovery from oxide compounds with ionometallurgy
Recovery of metals from oxide matrixes is generally carried out using mineral acids. However, electrochemical dissolution of metal oxides in DESs can enhance the dissolution by a factor of more than 10,000 in pH-neutral solutions.
Studies have shown that ionic oxides such as ZnO tend to have high solubility in ChCl:malonic acid, ChCl:urea and Ethaline, resembling their solubilities in aqueous acidic solutions such as HCl. Covalent oxides such as TiO2, however, exhibit almost no solubility. The electrochemical dissolution of metal oxides depends strongly on the proton activity of the HBD, i.e. the capability of the protons to act as oxygen acceptors, and on the temperature. Eutectic ionic fluids of lower pH, such as ChCl:oxalic acid and ChCl:lactic acid, have been reported to allow better solubility than those of higher pH (e.g., ChCl:acetic acid). Hence, different solubilities can be obtained by using, for instance, different carboxylic acids as the HBD.
Outlook
Currently, the stability of most ionic liquids under practical electrochemical conditions is unknown, and the choice of ionic fluid is still empirical, as there is almost no data on metal-ion thermodynamics to feed into solubility and speciation models. There are also no Pourbaix diagrams available, no standard redox potentials, and scant knowledge of speciation or pH values. It should be noted that most processes reported in the literature involving ionic fluids sit at Technology Readiness Level (TRL) 3 (experimental proof of concept) or 4 (technology validated in the lab), which is a disadvantage for short-term implementation. However, ionometallurgy has the potential to recover metals more selectively and sustainably, as it involves environmentally benign solvents, reduced greenhouse gas emissions and the avoidance of corrosive and harmful reagents.
| Technology | Metallurgy | null |
145092 | https://en.wikipedia.org/wiki/Blacksmith | Blacksmith | A blacksmith is a metalsmith who creates objects primarily from wrought iron or steel, but sometimes from other metals, by forging the metal, using tools to hammer, bend, and cut (cf. tinsmith). Blacksmiths produce objects such as gates, grilles, railings, light fixtures, furniture, sculpture, tools, agricultural implements, decorative and religious items, cooking utensils, and weapons. There was a historical distinction between the heavy work of the blacksmith and the more delicate operations of a whitesmith, who usually worked in gold, silver, pewter, or the finishing steps of fine steel. The place where a blacksmith works is variously called a smithy, a forge, or a blacksmith's shop.
While there are many professions that work with metal, such as farriers, wheelwrights, and armorers, in former times the blacksmith had a general knowledge of how to make and repair many things, from the most complex of weapons and armor to simple things like nails or lengths of chain.
Etymology
The "black" in "blacksmith" refers to the black firescale, a layer of oxides that forms on the surface of the metal during heating. The origin of smith is the Old English word meaning "blacksmith", originating from the Proto-Germanic *smiþaz meaning "skilled worker".
Smithing process
Blacksmiths work by heating pieces of wrought iron or steel until the metal becomes soft enough for shaping with hand tools, such as a hammer, an anvil and a chisel. Heating generally takes place in a forge fueled by propane, natural gas, coal, charcoal, coke, or oil.
Some modern blacksmiths may also employ an oxyacetylene or similar blowtorch for more localized heating. Induction heating methods are gaining popularity among modern blacksmiths.
Color is important for indicating the temperature and workability of the metal. As iron heats to higher temperatures, it first glows red, then orange, yellow, and finally white. The ideal heat for most forging is the bright yellow-orange color that indicates forging heat. Because they must be able to see the glowing color of the metal, some blacksmiths work in dim, low-light conditions, but most work in well-lit conditions. The key is to have consistent lighting, but not too bright. Direct sunlight obscures the colors.
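The color-to-temperature correspondence can be tabulated. The following minimal sketch, not from the source, uses commonly quoted approximate ranges for plain carbon steel; the boundaries vary between references and with ambient lighting:

```python
# Minimal sketch: commonly quoted approximate glow colors for plain carbon steel.
# The boundaries are rough guides only; they vary between references and with
# ambient light.

GLOW_COLORS = [
    # (lower bound C, upper bound C, description)
    (550, 700, "dull to dark red"),
    (700, 850, "bright red"),
    (850, 950, "orange"),
    (950, 1100, "yellow"),
    (1100, 1250, "bright yellow (typical forging heat)"),
    (1250, 1400, "white (approaching welding heat)"),
]

def glow_color(temp_c: float) -> str:
    if temp_c < 550:
        return "black heat (below visible glow)"
    for lo, hi, desc in GLOW_COLORS:
        if lo <= temp_c < hi:
            return desc
    return "beyond white heat"

print(glow_color(1000))  # -> "yellow"
```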
The techniques of smithing can be roughly divided into forging (sometimes called "sculpting"), welding, heat-treating, and finishing.
Forging
Forging—the process smiths use to shape metal by hammering—differs from machining in that forging does not remove material. Instead, the smith hammers the iron into shape. Even the punching and cutting operations performed by smiths (except when trimming waste) usually re-arrange metal around the hole rather than removing it as swarf.
Forging uses seven basic operations or techniques:
Drawing down
Shrinking (a type of upsetting)
Bending
Upsetting
Swaging
Punching
Forge welding
These operations generally require at least a hammer and anvil, but smiths also use other tools and techniques to accommodate odd-sized or repetitive jobs.
Drawing
Drawing lengthens the metal by reducing one or both of the other two dimensions. As the depth is reduced, or the width narrowed, the piece is lengthened or "drawn out."
As an example of drawing, a smith making a chisel might flatten a square bar of steel, lengthening the metal, reducing its depth but keeping its width consistent.
Drawing does not have to be uniform. A taper can result as in making a wedge or a woodworking chisel blade. If tapered in two dimensions, a point results.
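Because forging removes no material, drawing is governed by simple volume conservation: reducing the cross-section lengthens the piece proportionally. A minimal sketch, not from the source and with hypothetical dimensions:

```python
# Minimal sketch: drawing out a rectangular bar conserves volume, so the new
# length follows directly from the new cross-section.

def drawn_length(orig_len: float, orig_w: float, orig_d: float,
                 new_w: float, new_d: float) -> float:
    """New length after drawing a bar from one rectangular section to another."""
    volume = orig_len * orig_w * orig_d
    return volume / (new_w * new_d)

# Hypothetical bar: 100 mm long, 20 x 20 mm, drawn to 20 x 10 mm
# (depth halved, width held constant, as in the chisel example above):
print(drawn_length(100.0, 20.0, 20.0, 20.0, 10.0), "mm")  # -> 200.0 mm
```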
Drawing can be accomplished with a variety of tools and methods. Two typical methods using only hammer and anvil would be hammering on the anvil horn, and hammering on the anvil face using the cross peen of a hammer.
Another method for drawing is to use a tool called a fuller, or the peen of the hammer, to hasten the drawing out of a thick piece of metal. (The technique is called fullering from the tool.) Fullering consists of hammering a series of indentations with corresponding ridges, perpendicular to the long section of the piece being drawn. The resulting effect looks somewhat like waves along the top of the piece. Then the smith turns the hammer over to use the flat face to hammer the tops of the ridges down level with the bottoms of the indentations. This forces the metal to grow in length (and width if left unchecked) much faster than just hammering with the flat face of the hammer.
Bending
Heating iron to a "forging heat" allows bending as if it were a soft, ductile metal, like copper or silver.
Bending can be done with the hammer over the horn or edge of the anvil or by inserting a bending fork into the hardy hole (the square hole in the top of the anvil), placing the work piece between the tines of the fork, and bending the material to the desired angle. Bends can be dressed and tightened, or widened, by hammering them over the appropriately shaped part of the anvil.
Some metals are "hot short", meaning they lose their tensile strength when heated. They become like Plasticine: although they may still be manipulated by squeezing, an attempt to stretch them, even by bending or twisting, is likely to have them crack and break apart. This is a problem for some blade-making steels, which must be worked carefully to avoid developing hidden cracks that would cause failure in the future. Though rarely hand-worked, titanium is notably hot short. Even such common smithing processes as decoratively twisting a bar are impossible with it.
Upsetting
Upsetting is the process of making metal thicker in one dimension through shortening in the other. One form is to heat the end of a rod and then hammer on it as one would drive a nail: the rod gets shorter, and the hot part widens. An alternative to hammering on the hot end is to place the hot end on the anvil and hammer on the cold end.
Punching
Punching may be done to create a decorative pattern, or to make a hole. For example, in preparation for making a hammerhead, a smith would punch a hole in a heavy bar or rod for the hammer handle. Punching is not limited to depressions and holes. It also includes cutting, slitting, and drifting—all done with a chisel.
Combining processes
The basic forging operations above are often combined to produce and refine the shapes necessary for finished products. For example, to fashion a cross-peen hammer head, a smith would start with a bar roughly the diameter of the hammer face: the handle hole would be punched and drifted (widened by inserting or passing a larger tool through it), the head would be cut (punched, but with a wedge), the peen would be drawn to a wedge, and the face would be dressed by upsetting.
As with making a chisel, a piece lengthened by drawing will also tend to spread in width. A smith would therefore frequently turn the chisel-to-be on its side and hammer it back down—upsetting it—to check the spread and keep the metal at the correct width.
Or, if a smith needed to put a 90-degree bend in a bar and wanted a sharp corner on the outside of the bend, they would begin by hammering an unsupported end to make the curved bend. Then, to "fatten up" the outside radius of the bend, one or both arms of the bend would need to be pushed back to fill the outer radius of the curve. So they would hammer the ends of the stock down into the bend, upsetting it at the point of the bend. They would then dress the bend by drawing its sides to keep the correct thickness. The hammering would continue, upsetting and then drawing, until the curve had been properly shaped. Here the primary operation was the bend, with the drawing and upsetting done to refine the shape.
Welding
Welding is the joining of the same or similar kind of metal.
A modern blacksmith has a range of options and tools to accomplish this. The basic types of welding commonly employed in a modern workshop include traditional forge welding as well as modern methods, including oxyacetylene and arc welding.
In forge welding, the pieces to join are heated to what is generally referred to as welding heat. For mild steel most smiths judge this temperature by color: the metal glows an intense yellow or white. At this temperature the steel is near molten.
Any foreign material in the weld, such as the oxides or "scale" that typically form in the fire, can weaken it and cause it to fail. Thus the mating surfaces to be joined must be kept clean. To this end a smith makes sure the fire is a reducing fire: a fire where, at the heart, there is a great deal of heat and very little oxygen. The smith also carefully shapes mating faces so that as they come together foreign material squeezes out as the metal is joined. To clean the faces, protect them from oxidation, and provide a medium to carry foreign material out of the weld, the smith sometimes uses flux—typically powdered borax, silica sand, or both.
The smith first cleans parts to be joined with a wire brush, then puts them in the fire to heat. With a mix of drawing and upsetting the smith shapes the faces so that when finally brought together, the center of the weld connects first and the connection spreads outward under the hammer blows, pushing out the flux (if used) and foreign material.
The dressed metal goes back in the fire, is brought near to welding heat, removed from the fire, and brushed. Flux is sometimes applied, which prevents oxygen from reaching and burning the metal during forging, and it is returned to the fire. The smith now watches carefully to avoid overheating the metal. There is some challenge to this because, to see the color of the metal, the smith must remove it from the fire—exposing it to air, which can rapidly oxidize it. So the smith might probe into the fire with a bit of steel wire, prodding lightly at the mating faces. When the end of the wire sticks on to the metal, it is at the right temperature (a small weld forms where the wire touches the mating face, so it sticks). The smith commonly places the metal in the fire so he can see it without letting surrounding air contact the surface. (Note that smiths don't always use flux, especially in the UK.)
Now the smith moves with rapid purpose, quickly taking the metal from the fire to the anvil and bringing the mating faces together. A few light hammer taps bring the mating faces into complete contact and squeeze out the flux—and finally, the smith returns the work to the fire. The weld begins with the taps, but often the joint is weak and incomplete, so the smith reheats the joint to welding temperature and works the weld with light blows to "set" the weld and finally to dress it to the shape.
Finishing
Depending on the intended use of the piece, a blacksmith may finish it in a number of ways:
A simple jig (a tool) that the smith might only use a few times in the shop may get the minimum of finishing—a rap on the anvil to break off scale and a brushing with a wire brush.
Files bring a piece to final shape, removing burrs and sharp edges, and smoothing the surface.
Heat treatment and case-hardening achieve the desired hardness.
The wire brush—as a hand tool or power tool—can further smooth, brighten, and polish surfaces.
Grinding stones, abrasive paper, and emery wheels can further shape, smooth, and polish the surface.
A range of treatments and finishes can inhibit oxidation and enhance or change the appearance of the piece. An experienced smith selects the finish based on the metal and on the intended use of the item. Finishes include (among others): paint, varnish, bluing, browning, oil, and wax.
Blacksmith's striker
A blacksmith's striker is an assistant (frequently an apprentice) whose job is to swing a large sledgehammer in heavy forging operations, as directed by the blacksmith. In practice, the blacksmith holds the hot iron at the anvil (with tongs) in one hand, and indicates where to strike the iron by tapping it with a small hammer in the other hand. The striker then delivers a heavy blow to the indicated spot with a sledgehammer. During the 20th century and into the 21st century, this role has become increasingly unnecessary and automated through the use of trip hammers or reciprocating power hammers.
Blacksmith's materials
When iron ore is smelted into usable metal, a certain amount of carbon is usually alloyed with the iron. (Charcoal is almost pure carbon.) The amount of carbon significantly affects the properties of the metal. If the carbon content is over 2%, the metal is called cast iron, because it has a relatively low melting point and is easily cast. It is quite brittle, however, and cannot be forged, so it is not used for blacksmithing. If the carbon content is between 0.25% and 2%, the resulting metal is steel, which can be heat-treated as discussed above. When the carbon content is below 0.25%, the metal is either "wrought iron" or "mild steel"; the terms are not interchangeable, since wrought iron is made by a different process and does not come from this kind of smelting. In preindustrial times, the material of choice for blacksmiths was wrought iron. This iron had a very low carbon content, and also included up to 5% of glassy iron silicate slag in the form of numerous very fine stringers. This slag content made the iron very tough, gave it considerable resistance to rusting, and allowed it to be more easily "forge welded," a process in which the blacksmith permanently joins two pieces of iron, or a piece of iron and a piece of steel, by heating them nearly to a white heat and hammering them together. Forge welding is more difficult with modern mild steel, because it welds in a narrower temperature band. The fibrous nature of wrought iron required knowledge and skill to properly form any tool that would be subject to stress. Modern steel is produced using either blast furnaces or arc furnaces. Wrought iron was produced by a labor-intensive process called puddling, so this material is now a difficult-to-find specialty product. Modern blacksmiths generally substitute mild steel for making objects traditionally of wrought iron; sometimes they use electrolytic-process pure iron.
Other metals
Many blacksmiths also incorporate materials such as bronze, copper, or brass in artistic products. Aluminum and titanium may also be forged by the blacksmith's process. Bronze is an alloy of copper and tin, while brass is an alloy of copper and zinc. Each material responds differently under the hammer and must be separately studied by the blacksmith.
Terminology
Iron is a naturally occurring metallic element. It is almost never found in its native form (pure iron) in nature. It is usually found as an oxide or sulfide, with many other impurity elements mixed in.
Wrought iron is the purest form of iron generally encountered or produced in quantity. It may contain as little as 0.04% carbon (by weight). From its traditional method of manufacture, wrought iron has a fibrous internal texture. Quality wrought-iron blacksmithing takes the direction of these fibers into account during forging, since the strength of the material is stronger in line with the grain than across the grain. Most of the remaining impurities from the initial smelting become concentrated in silicate slag trapped between the iron fibers. This slag produces a lucky side effect during forge-welding. When the silicate melts, it makes wrought iron self-fluxing. The slag becomes a liquid glass that covers the exposed surfaces of the wrought iron, preventing oxidation which would otherwise interfere with the successful welding process.
Steel is an alloy of iron and between 0.3% and 1.7% carbon by weight. The presence of carbon allows steel to assume one of several different crystalline configurations. Macroscopically, this is seen as the ability to "turn the hardness of a piece of steel on and off" through various processes of heat-treatment. If the concentration of carbon is held constant, this is a reversible process. Steel with a higher carbon percentage may be brought to a higher state of maximum hardness.
Cast iron is iron that contains between 2.0% and 6% carbon by weight. So much carbon is present that the hardness cannot be switched off. Hence, cast iron is a brittle metal, which can break like glass. Cast iron cannot be forged without special heat treatment to convert it to malleable iron.
Steel with less than 0.6% carbon content cannot be hardened enough by simple heat-treatment to make useful hardened-steel tools. Hence, in what follows, wrought-iron, low-carbon-steel, and other soft unhardenable iron varieties are referred to indiscriminately as just iron.
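Taken together, these carbon thresholds amount to a simple classification rule. A minimal sketch in Python (the exact boundary handling is an assumption; the text above gives overlapping ranges rather than sharp cutoffs):

def classify_iron_alloy(carbon_pct):
    # Thresholds as given above; boundaries at exactly 0.25% and 2% are assumed.
    if carbon_pct > 2.0:
        return "cast iron"    # brittle, castable, not forgeable
    if carbon_pct >= 0.25:
        return "steel"        # hardenable by heat treatment
    return "mild steel"       # soft and unhardenable; "iron" in this article

assert classify_iron_alloy(3.5) == "cast iron"
assert classify_iron_alloy(0.9) == "steel"
assert classify_iron_alloy(0.1) == "mild steel"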
History, prehistory, religion, and mythology
Mythology
In Hindu mythology, Tvastar also known as Vishvakarma is the blacksmith of the devas. The earliest references of Tvastar can be found in the Rigveda.
Hephaestus (Latin: Vulcan) was the blacksmith of the gods in Greek and Roman mythology. A supremely skilled artisan whose forge was a volcano, he constructed most of the weapons of the gods, as well as beautiful assistants for his smithy and a metal fishing-net of astonishing intricacy. He was the god of metalworking, fire, and craftsmen.
In Celtic mythology, the role of smith is held by characters whose names literally mean 'smith': Goibhniu (in the Irish myths of the Tuatha Dé Danann cycle) and Gofannon (in the Welsh myths of the Mabinogion). Brigid or Brigit, an Irish goddess, is sometimes described as the patroness of blacksmiths.
In the Nart mythology of the Caucasus, the hero known to the Ossetians as Kurdalægon and to the Circassians as Tlepsh is a blacksmith and skilled craftsman whose exploits exhibit shamanic features, sometimes bearing comparison to those of the Scandinavian deity Odin. One of his greatest feats is acting as a kind of male midwife to the hero Xamyc, who has been made the carrier of the embryo of his son Batraz by his dying wife, the water-sprite Lady Isp, who spits it between his shoulder blades, where it forms a womb-like cyst. Kurdalægon prepares a type of tower or scaffold above a quenching bath for Xamyc and, when the time is right, lances the cyst to liberate the infant hero Batraz as a newborn babe of white-hot steel, whom Kurdalægon then quenches like a newly forged sword.
The Anglo-Saxon Wayland Smith, known in Old Norse as Völundr, is a heroic blacksmith in Germanic mythology. The Poetic Edda states that he forged beautiful gold rings set with wonderful gems. He was captured by king Níðuðr, who cruelly hamstrung him and imprisoned him on an island. Völundr eventually had his revenge by killing Níðuðr's sons and fashioning goblets from their skulls, jewels from their eyes and a brooch from their teeth. He then raped the king's daughter, after drugging her with strong beer, and escaped, laughing, on wings of his own making, boasting that he had fathered a child upon her.
Seppo Ilmarinen, the Eternal Hammerer, blacksmith and inventor in the Kalevala, is an archetypal artificer from Finnish mythology.
Tubal-Cain is mentioned in the book of Genesis of the Torah as the original smith.
Ogun, the god of blacksmiths, warriors, hunters and others who work with iron is one of the pantheon of Orisha traditionally worshipped by the Yoruba people of Nigeria.
Before the Iron Age
Gold, silver, and copper all occur in nature in their native states as reasonably pure metals, so humans probably worked these metals first. These metals are all quite malleable, and humans' initial development of hammering techniques was undoubtedly applied to them.
During the Chalcolithic era and the Bronze Age, humans in the Mideast learned how to smelt, melt, cast, rivet, and (to a limited extent) forge copper and bronze. Bronze is an alloy of copper with approximately 10% to 20% tin. Bronze is superior to copper alone in being harder, more resistant to corrosion, and lower-melting (thereby requiring less fuel to melt and cast). Much of the copper used by the Mediterranean world came from the island of Cyprus. Most of the tin came from the Cornwall region of the island of Great Britain, transported by sea-borne Phoenician and Greek traders.
Copper and bronze cannot be hardened by heat-treatment; they can only be hardened by cold working. To accomplish this, a piece of bronze is lightly hammered for a long period of time. The localized stress-cycling causes work hardening by changing the size and shape of the metal's crystals. The hardened bronze can then be ground and sharpened to make edged tools.
Clocksmiths as recently as the 19th century used work hardening techniques to harden the teeth of brass gears and ratchets. Tapping on just the teeth produced harder teeth, with superior wear-resistance. By contrast, the rest of the gear was left in a softer and tougher state, more capable of resisting cracking.
Bronze is sufficiently corrosion-resistant that artifacts of bronze may last thousands of years relatively unscathed. Accordingly, museums frequently preserve more examples of Bronze Age metal-work than examples of artifacts from the much younger Iron Age. Buried iron artifacts may completely rust away in less than 100 years. Examples of ancient iron work still extant are very much the exception to the norm.
Iron Age
Concurrent with the advent of alphabetic characters in the Iron Age, humans became aware of the metal iron. However, in earlier ages, iron's qualities, in contrast to those of bronze, were not generally understood. The earliest iron artifacts were made of meteoric iron, whose chemical composition can contain up to 40% nickel. As this source of iron is extremely rare and fortuitous, little development of smithing skills peculiar to iron can be assumed to have occurred. That we still possess any such artifacts of meteoric iron may be ascribed to the vagaries of climate, and to the increased corrosion-resistance conferred on iron by the presence of nickel.
During the (north) Polar Exploration of the early 20th century, Inughuit, northern Greenlandic Inuit, were found to be making iron knives from two particularly large nickel-iron meteors. One of these meteors was taken to Washington, D.C., where it was remitted to the custody of the Smithsonian Institution.
The Hittites of Anatolia first discovered or developed the smelting of iron ores around 1500 BC. They seem to have maintained a near monopoly on the knowledge of iron production for several hundred years, but when their empire collapsed during the Eastern Mediterranean upheavals around 1200 BC, the knowledge seems to have escaped in all directions.
In the Iliad of Homer (describing the Trojan War and Bronze Age Greek and Trojan warriors), most of the armor and weapons (swords and spears) are stated to have been of bronze. Iron is not unknown, however, as arrowheads are described as iron, and a "ball of iron" is listed as a prize awarded for winning a competition.
The events described probably occurred around 1200 BC, but Homer is thought to have composed this epic poem around 700 BC; so exactitude must remain suspect.
The historical record during the Late Bronze Age Collapse is very inconsistent. Very few iron artifacts remain from the early Iron Age, due to loss from corrosion and the re-use of iron as a valuable commodity. However, all of the basic operations of blacksmithing were in use by the time the Iron Age reached a particular locality. The scarcity of records and artifacts, and the rapidity of the transition from Bronze Age to Iron Age, make it reasonable to use evidence of bronze smithing to infer the early development of blacksmithing.
It is uncertain when iron weapons replaced bronze weapons, because the earliest iron swords did not significantly improve on the qualities of existing bronze artifacts. Unalloyed iron is soft, does not hold an edge as well as a properly constructed bronze blade, and needs more maintenance. However, iron ores are more widely available than the materials needed to make bronze, which made iron weapons more economical than comparable bronze weapons. Small amounts of steel were often formed during several of the earliest refining practices, and once the properties of this alloy were discovered and exploited, steel-edged weapons greatly outclassed bronze.
Iron is different from most other materials (including bronze) in that it does not immediately go from a solid to a liquid at its melting point. Water, for example, is a solid (ice) at −1 °C (30 °F) and a liquid at +1 °C (34 °F). Iron, by contrast, is definitely a solid well below its melting point, but over a span of several hundred degrees approaching that point it becomes increasingly plastic and "taffy-like" as its temperature increases. This extreme temperature range of variable solidity is the fundamental material property upon which blacksmithing practice depends.
Another major difference between bronze and iron fabrication techniques is that bronze can be melted. The melting point of iron is much higher than that of bronze. In the western (Europe and the Mideast) tradition, the technology to make fires hot enough to melt iron did not arise until the 16th century, when smelting operations grew large enough to require very large bellows. These produced blast-furnace temperatures high enough to melt partially refined ores, resulting in cast iron. Thus cast-iron frying pans and cookware did not become possible in Europe until 3000 years after the introduction of iron smelting. China, in a separate developmental tradition, was producing cast iron at least 1000 years before this.
Although iron is quite abundant, good quality steel remained rare and expensive until the industrial development of the Bessemer process and related techniques in the 1850s. Close examination of blacksmith-made antique tools clearly shows where small pieces of steel were forge-welded into iron to provide the hardened steel cutting edges of tools (notably in axes, adzes, chisels, etc.). The re-use of quality steel is another reason for the lack of artifacts.
The Romans (who ensured that their own weapons were made with good steel) noted (in the 4th century BC) that the Celts of the Po River Valley had iron, but not good steel. The Romans record that during battle, their Celtic opponents could only swing their swords two or three times before having to step on their swords to straighten them.
On the Indian subcontinent, Wootz steel was, and continues to be, produced in small quantities.
In southern Asia and western Africa, blacksmiths form endogamous castes that sometimes speak distinct languages.
Medieval period
In the medieval period, blacksmithing was considered part of the set of seven mechanical arts.
Prior to the Industrial Revolution, a "village smithy" was a staple of every town. Factories and mass-production reduced the demand for blacksmith-made tools and hardware.
Blacksmiths typically worked in small shops, often in the center of a village or town. Their shops were typically equipped with a forge, an anvil, and a variety of other tools. The work of a medieval blacksmith was physically demanding and often dangerous. Blacksmiths had to be able to lift and move heavy pieces of metal, and they had to be careful not to burn themselves on the hot forge.
Despite the challenges, blacksmithing was a respected trade in medieval society. Blacksmiths were considered to be skilled artisans, and their work was essential to the functioning of medieval society.
Women
Whilst the majority of blacksmiths named in Britain in the medieval period were men, some women also worked as smiths. For example, in 1346 Katherine Le Fevre was appointed by Edward III to ‘keep up the king’s forge within the Tower and carry on [its] work … receiving the wages pertaining to the office’. Another example is of Alice la Haubergere (Mail-maker) who owned an armour shop and worked as an armourer in Cheapside. In York in 1403 Agnes Hecche was left her father's mail-making equipment in his will, and took over the family business with her brother. Others included the bell founder Johanna Hill, the cutler Agnes Cotiller and Eustachia l’Armurer.
Medieval blacksmithing techniques
Medieval blacksmiths used a variety of techniques to create metal objects. One of the most common techniques was forging. Forging is the process of heating metal until it is soft enough to be shaped with a hammer and anvil.
Another common technique was forge welding, the process of joining two pieces of metal by heating them nearly to a white heat and hammering them together.
Blacksmiths also used a variety of other techniques, such as casting, cutting, and filing.
The original fuel for forge fires was charcoal. Coal did not begin to replace charcoal until the forests of first Britain (during the 17th century), and then the eastern United States (during the 19th century), were largely depleted. Coal can be an inferior fuel for blacksmithing, because much of the world's coal is contaminated with sulfur. Sulfur contamination of iron and steel makes them "red short", so that at red heat they become "crumbly" instead of "plastic". Coal sold and purchased for blacksmithing should be largely free of sulfur.
European blacksmiths before and through the medieval era spent a great deal of time heating and hammering iron before forging it into finished articles. Although they were unaware of the chemical basis, they were aware that the quality of the iron was thus improved. From a scientific point of view, the reducing atmosphere of the forge was both removing oxygen (rust), and soaking more carbon into the iron, thereby developing increasingly higher grades of steel as the process was continued.
Industrial era
During the eighteenth century, agents for the Sheffield cutlery industry scoured the British countryside, offering new carriage springs for old. Springs must be made of hardened steel. At this time, the processes for making steel produced an extremely variable product, whose quality was not ensured at the initial point of sale. Springs that had survived cracking through hard use over the rough roads of the time had proven to be of a better quality steel. Much of the fame of Sheffield cutlery (knives, shears, etc.) was due to the extreme lengths the companies took to ensure they used high-grade steel.
During the first half of the nineteenth century, the US government included in its treaties with many Native American tribes provisions that the US would employ blacksmiths and strikers at Army forts, with the express purpose of providing Native Americans with iron tools and repair services.
During the early to mid-nineteenth century, European armies as well as the U.S. Federal and Confederate armies employed blacksmiths to shoe horses and repair equipment such as wagons, horse tack, and artillery equipment. These smiths primarily worked at traveling forges which, when combined with a limber, formed wagons specifically designed and constructed as blacksmith shops on wheels, carrying the essential equipment necessary for their work.
Lathes, patterned largely on their woodturning counterparts, had been used by some blacksmiths since the Middle Ages. During the 1790s Henry Maudslay created the first screw-cutting lathe, a watershed event that signaled the start of blacksmiths being replaced by machinists in factories for the hardware needs of the populace.
Samuel Colt neither invented nor perfected interchangeable parts, but his insistence (shared by other industrialists of the time) that his firearms be manufactured with this property was another step towards the obsolescence of metal-working artisans and blacksmiths. | Technology | Metallurgy | null
145095 | https://en.wikipedia.org/wiki/Motor%20vehicle | Motor vehicle | A motor vehicle, also known as a motorized vehicle, automotive vehicle, automobile, or road vehicle, is a self-propelled land vehicle, commonly wheeled, that does not operate on rails (such as trains or trams), does not fly (such as airplanes or helicopters), does not float on water (such as boats or ships), and is used for the transportation of people or cargo.
The vehicle propulsion is provided by an engine or motor, usually a gasoline/diesel internal combustion engine or an electric traction motor, or some combination of the two as in hybrid electric vehicles and plug-in hybrid vehicles. For legal purposes, motor vehicles are often identified within a number of vehicle classes including cars, buses, motorcycles, off-road vehicles, light trucks and regular trucks. These classifications vary according to the legal codes of each country. ISO 3833:1977 is the standard for road vehicle types, terms and definitions. Generally, to avoid requiring people with disabilities to possess an operator's license or to carry tags and insurance, powered wheelchairs are specifically excluded by law from being considered motor vehicles.
As of 2010, there were more than one billion motor vehicles in use in the world, excluding off-road vehicles and heavy construction equipment. The US publisher Ward's estimates that as of 2019, there were 1.4 billion motor vehicles in use in the world.
Global vehicle ownership per capita in 2010 was 148 vehicles in operation (VIO) per 1000 people. China has the largest motor vehicle fleet in the world, with 322 million motor vehicles registered at the end of September 2018. The United States has the highest vehicle ownership per capita in the world, with 832 vehicles in operation per 1000 people in 2016. Also, China became the world's largest new car market in 2009. In 2022, a total of 85 million cars and commercial vehicles were built, led by China which built a total of 27 million motor vehicles.
Definitions and terminology
In 1968 the Vienna Convention on Road Traffic gave one of the first international definitions of a motor vehicle. Other sources provide other definitions; for instance, ISO 3833:1977 defines road vehicle types, terms, and definitions.
Ownership trends
The U.S. publisher Ward's estimates that as of 2010, there were 1.015 billion motor vehicles in use in the world. This figure represents the number of cars, trucks (light, medium and heavy duty), and buses, but does not include off-road vehicles or heavy construction equipment. The world vehicle population passed the 500 million-unit mark in 1986, from 250 million motor vehicles in 1970. Between 1950 and 1970, the vehicle population doubled roughly every 10 years. Navigant Consulting forecasts that the global stock of light-duty motor vehicles will reach 2 billion units in 2035.
Global vehicle ownership in 2010 was 148 vehicles in operation per 1,000 people, a ratio of 1:6.75 vehicles to people, slightly down from 150 vehicles per 1,000 people in 2009, a ratio of 1:6.67 vehicles to people. The global rate of motorization increased in 2013 to 174 vehicles per 1000 people. In developing countries vehicle ownership rates rarely exceed 200 cars per 1,000 population.
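The rate-to-ratio conversions used in these figures are simple reciprocals; a quick Python check of the numbers above:

def people_per_vehicle(rate_per_1000):
    # Converts vehicles per 1,000 people into a 1:N vehicles-to-people ratio.
    return 1000 / rate_per_1000

print(round(people_per_vehicle(148), 2))  # 6.76, i.e. roughly 1:6.75 (2010)
print(round(people_per_vehicle(150), 2))  # 6.67 (2009)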
The following table summarizes the evolution of motor vehicle registrations in the world from 1960 to 2019:
Alternative fuels and vehicle technology adoption
Since the early 2000s, the number of alternative fuel vehicles has been increasing driven by the interest of several governments to promote their widespread adoption through public subsidies and other non-financial incentives. Governments have adopted these policies due to a combination of factors, such as environmental concerns, high oil prices, and less dependence on imported oil.
Among the fuels other than traditional petroleum fuels (gasoline or diesel fuel), and alternative technologies for powering the engine of a motor vehicle, the most popular options promoted by different governments are: natural gas vehicles, LPG powered vehicles, flex-fuel vehicles, use of biofuels, hybrid electric vehicles, plug-in hybrids, electric cars, and hydrogen fuel cell cars.
Since the late 2000s, China, European countries, the United States, Canada, Japan and other developed countries have been providing strong financial incentives to promote the adoption of plug-in electric vehicles. The stock of light-duty plug-in vehicles in use totals over 10 million units, and the medium and heavy commercial segments add another 700,000 units to the global stock of plug-in electric vehicles. In 2020 the global market share of plug-in passenger car sales was 4.2%, up from 2.5% in 2019. Nevertheless, despite government support and the rapid growth experienced, the plug-in electric car segment represented just about 1 out of every 250 vehicles (0.4%) on the world's roads by the end of 2018.
China
The People's Republic of China had 322 million motor vehicles in use at the end of September 2018, of which, 235 million were passenger cars in 2018, making China the country with largest motor vehicle fleet in the world. In 2016, the motor vehicle fleet consisted of 165.6 million cars and 28.4 million trucks and buses. About 13.6 million vehicles were sold in 2009, and motor vehicle registrations in 2010 increased to more than 16.8 million units, representing nearly half the world's fleet increase in 2010. Ownership per capita rose from 26.6 vehicles per 1000 people in 2006 to 141.2 in 2016.
The stock of highway-legal plug-in electric or new energy vehicles in China totaled 2.21 million units by the end of September 2018, of which 81% were all-electric vehicles. These figures include heavy-duty commercial vehicles such as buses and sanitation trucks, which represent about 11% of the total stock. China is also the world's largest electric bus market, reaching about 385,000 units by the end of 2017.
The number of cars and motorcycles in China increased 20 times between 2000 and 2010. This explosive growth has allowed China to become the world's largest new car market, overtaking the US in 2009. Nevertheless, ownership per capita is 58 vehicles per 1000 people, or a ratio of 1:17.2 vehicles to people, still well below the rate of motorization of developed countries.
United States
The United States has the second-largest fleet of motor vehicles in the world after China, with a stock of 259.14 million motor vehicles, of which 246 million were light duty vehicles, consisting of 112.96 million passenger cars and 133 million light trucks (including SUVs). A total of 11.5 million heavy trucks were registered at the end of 2016. Vehicle ownership per capita in the U.S. is also the highest in the world: the U.S. Department of Energy (USDoE) reports a motorization rate of 831.9 vehicles in operation per 1000 people in 2016, or a ratio of 1:1.2 vehicles to people.
According to USDoE, the rate of motorization peaked in 2007 at 844.5 vehicles per 1,000 people. In terms of licensed drivers, as of 2009 the country had 1.0 vehicle for every licensed driver, and 1.87 vehicles per household. Passenger car registrations in the United States declined 11.5% in 2017 and 12.8% in 2018.
The stock of alternative fuel vehicles in the United States includes over 20 million flex-fuel cars and light trucks, the world's second-largest flexible-fuel fleet after Brazil. However, actual use of ethanol fuel is significantly limited due to the lack of E85 refueling infrastructure.
Regarding the electrified segment, the fleet of hybrid electric vehicles in the United States is the second largest in the world after Japan, with more than four million units sold through April 2016. Since the introduction of the Tesla Roadster electric car in 2008, cumulative sales of highway legal plug-in electric vehicles in the United States passed one million units in September 2018. The U.S. stock of plug-in vehicles is the second largest after China (2.21 million by September 2018).
The country's fleet also includes more than 160,000 natural gas vehicles, mainly transit buses and delivery fleets. Despite its relatively small size, natural gas use accounted for about 52% of all alternative fuels consumed by alternative transportation fuel vehicles in the U.S. in 2009.
Europe
The 27 European Union (EU-27) member countries had a fleet of over 256 million in 2008, and passenger cars accounted for 87% of the union's fleet. The five largest markets, Germany (17.7%), Italy (15.4%), France (13.3%), the UK (12.5%), and Spain (9.5%), accounted for 68% of the region's total registered fleet in 2008. The EU-27 member countries had in 2009 an estimated ownership rate of 473 passenger cars per 1000 people.
According to Ward's, Italy had the second highest (after the U.S.) vehicle ownership per capita in 2010, with 690 vehicles per 1000 people. Germany had a rate of motorization of 534 vehicles per 1000 people and the UK of 525 vehicles per 1000 people, both in 2008. France had a rate of 575 vehicles per 1000 people and Spain 608 vehicles per 1000 people in 2007. Portugal's motorization rate grew 220% between 1991 and 2002, reaching 560 cars per 1000 people in 2002.
Italy also leads in alternative fuel vehicles, with a fleet of 779,090 natural gas vehicles, the largest NGV fleet in Europe. Sweden, with 225,000 flexible-fuel vehicles, had the largest flexible-fuel fleet in Europe as of mid-2011.
More than one million plug-in electric passenger cars and vans have been registered in Europe by June 2018, the world's second largest regional plug-in stock after China.
Norway is the leading plug-in market in Europe with almost 500,000 units registered. In October 2018, Norway became the world's first country where 10% of all passenger cars on the road are plug-in electrics. Also, the Norwegian plug-in car segment market share has been the highest in the world for several years, achieving 39.2% in 2017, 49.1% in 2018, and 74.7% in 2020.
Japan
Japan had 73.9 million vehicles by 2010, and had the world's second largest motor vehicle fleet until 2009. More recently, the registered motor vehicle fleet totaled 75.81 million vehicles, consisting of 61.40 million cars and 14.41 million trucks and buses. Japan has the largest hybrid electric vehicle fleet in the world, with 7.51 million hybrids registered in the country, excluding kei cars, representing 19.0% of all passenger cars on the road.
Brazil
The Brazilian vehicle fleet reached 64.8 million vehicles in 2010, up from 29.5 million units in 2000, representing a 119% growth in ten years, and reaching a motorization rate of 340 vehicles per 1000 people. In 2010 Brazil experienced the second largest fleet increase in the world after China, with 2.5 million vehicle registrations.
Brazil has the largest alternative fuel vehicle fleet in the world, with about 40 million alternative fuel motor vehicles on the road. The clean vehicle stock includes 30.5 million flexible-fuel cars and light utility vehicles and over 6 million flex-fuel motorcycles by March 2018; between 2.4 and 3.0 million neat ethanol vehicles still in use, out of 5.7 million ethanol-only light vehicles produced since 1979; and a total of 1.69 million natural gas vehicles.
In addition, all the Brazilian gasoline-powered fleet is designed to operate with high ethanol blends, up to 25% ethanol fuel (E25). The market share of flex fuel vehicles reached 88.6% of all light-duty vehicles registered in 2017.
India
India's vehicle fleet had the second-largest growth rate after China in 2010, with 8.9%. The fleet went from 19.1 million in 2009 to 20.8 million units in 2010. India's vehicle fleet has increased to 210 million in March 2015. India has a fleet of 1.1 million natural gas vehicles.
Australia
As of January 2011, the Australian motor vehicle fleet had 16.4 million registered vehicles, with an ownership rate of 730 motor vehicles per 1000 people, up from 696 vehicles per 1000 residents in 2006. The motor vehicle fleet grew 14.5% since 2006, for an annual rate of 2.7% during this five-year period.
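The relation between the five-year total and the annual rate here is compound growth; a small Python check of the figures above:

# (1 + annual_rate) ** years = 1 + total_growth
total_growth, years = 0.145, 5
annual_rate = (1 + total_growth) ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # ~2.7% per year, matching the stated rate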
Motorization rates by region and selected country
The following table compares vehicle ownership rates by region with the United States, the country with one of the highest motorization rates in the world, and how it has evolved from 1999 to 2016.
Production by country
In 2017, a total of 97.3 million cars and commercial vehicles were built worldwide, led by China, with about 29 million motor vehicles manufactured, followed by the United States with 11.2 million, and Japan with 9.7 million. The following table shows the top 15 manufacturing countries for 2017 and their corresponding annual production between 2004 and 2017.
| Technology | Basics_7 | null |
145159 | https://en.wikipedia.org/wiki/Atropine | Atropine | Atropine is a tropane alkaloid and anticholinergic medication used to treat certain types of nerve agent and pesticide poisonings as well as some types of slow heart rate, and to decrease saliva production during surgery. It is typically given intravenously or by injection into a muscle. Eye drops are also available which are used to treat uveitis and early amblyopia. The intravenous solution usually begins working within a minute and lasts half an hour to an hour. Large doses may be required to treat some poisonings.
Common side effects include dry mouth, abnormally large pupils, urinary retention, constipation, and a fast heart rate. It should generally not be used in people with closed-angle glaucoma. While there is no evidence that its use during pregnancy causes birth defects, this has not been well studied so sound clinical judgment should be used. It is likely safe during breastfeeding. It is an antimuscarinic (a type of anticholinergic) that works by inhibiting the parasympathetic nervous system.
Atropine occurs naturally in a number of plants of the nightshade family, including deadly nightshade (belladonna), Jimson weed, and mandrake. It was first isolated in 1833. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Eyes
Topical atropine is used as a cycloplegic, to temporarily paralyze the accommodation reflex, and as a mydriatic, to dilate the pupils. Atropine degrades slowly, typically wearing off in 7 to 14 days, so it is generally used as a therapeutic mydriatic, whereas tropicamide (a shorter-acting cholinergic antagonist) or phenylephrine (an α-adrenergic agonist) is preferred as an aid to ophthalmic examination.
In refractive and accommodative amblyopia, when occlusion is not appropriate, atropine is sometimes given to induce blur in the good eye. Evidence suggests that atropine penalization is just as effective as occlusion in improving visual acuity.
Antimuscarinic topical medication is effective in slowing myopia progression in children; accommodation difficulties and papillae and follicles are possible side effects. All doses of atropine appear similarly effective, while higher doses have greater side effects. The lower dose of 0.01% is thus generally recommended due to fewer side effects and potentially less rebound worsening when the atropine is stopped.
Heart
Injections of atropine are used in the treatment of symptomatic or unstable bradycardia.
Atropine was previously included in international resuscitation guidelines for use in cardiac arrest associated with asystole and PEA but was removed from these guidelines in 2010 due to a lack of evidence for its effectiveness. For symptomatic bradycardia, the usual dosage is 0.5 to 1 mg IV push; this may be repeated every 3 to 5 minutes, up to a total dose of 3 mg (maximum 0.04 mg/kg).
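As a sketch of the arithmetic behind this dosing ceiling (illustrative only; the 70 kg body weight is an assumed example, and 3 mg is treated as the absolute cap as stated above):

weight_kg = 70                      # assumed example weight
per_kg_cap_mg = 0.04 * weight_kg    # 0.04 mg/kg -> 2.8 mg for 70 kg
absolute_cap_mg = 3.0
effective_cap_mg = min(absolute_cap_mg, per_kg_cap_mg)
doses_mg = [1.0, 1.0, 0.8]          # e.g. pushes repeated every 3-5 minutes
assert sum(doses_mg) <= effective_cap_mg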
Atropine is also useful in treating second-degree heart block Mobitz type 1 (Wenckebach block), and also third-degree heart block with a high Purkinje or AV-nodal escape rhythm. It is usually not effective in second-degree heart block Mobitz type 2, and in third-degree heart block with a low Purkinje or ventricular escape rhythm.
Atropine has also been used to prevent a low heart rate during intubation of children; however, the evidence does not support this use.
Secretions
Atropine's actions on the parasympathetic nervous system inhibit salivary and mucous glands. The drug may also inhibit sweating via the sympathetic nervous system. This can be useful in treating hyperhidrosis, and can prevent the death rattle of dying patients. Even though atropine has not been officially indicated for either of these purposes by the FDA, it has been used by physicians for these purposes.
Poisonings
Atropine is not an actual antidote for organophosphate poisoning. However, by blocking the action of acetylcholine at muscarinic receptors, atropine also serves as a treatment for poisoning by organophosphate insecticides and nerve agents, such as tabun (GA), sarin (GB), soman (GD), and VX. Troops who are likely to be attacked with chemical weapons often carry autoinjectors with atropine and oxime, for rapid injection into the muscles of the thigh. In a developed case of nerve gas poisoning, maximum atropinization is desirable. Atropine is often used in conjunction with the oxime pralidoxime chloride.
Some of the nerve agents attack and destroy acetylcholinesterase by phosphorylation, so the action of acetylcholine becomes excessive and prolonged. Pralidoxime (2-PAM) can be effective against organophosphate poisoning because it can cleave this phosphorylation, reactivating the enzyme. Atropine can be used to reduce the effect of the poisoning by blocking muscarinic acetylcholine receptors, which would otherwise be overstimulated by excessive acetylcholine accumulation.
Atropine or diphenhydramine can be used to treat muscarine intoxication.
Atropine was added to cafeteria salt shakers in an attempt to poison the staff of Radio Free Europe during the Cold War.
Irinotecan-induced diarrhea
Atropine has been observed to prevent or treat irinotecan induced acute diarrhea.
Side effects
Adverse reactions to atropine include ventricular fibrillation, supraventricular or ventricular tachycardia, dizziness, nausea, blurred vision, loss of balance, dilated pupils, photophobia, dry mouth and potentially extreme confusion, deliriant hallucinations, and excitation, especially among the elderly. These latter effects occur because atropine can cross the blood–brain barrier. Because of its hallucinogenic properties, some have used the drug recreationally, though this is potentially dangerous and often unpleasant.
In overdoses, atropine is poisonous. Atropine is sometimes added to potentially addictive drugs, particularly antidiarrhea opioid drugs such as diphenoxylate or difenoxin, wherein the secretion-reducing effects of the atropine can also aid the antidiarrhea effects.
Although atropine treats bradycardia (slow heart rate) in emergency settings, it can cause paradoxical heart rate slowing when given at very low doses (i.e. <0.5 mg), presumably as a result of central action in the CNS. One proposed mechanism for atropine's paradoxical bradycardia effect at low doses involves blockade of inhibitory presynaptic muscarinic autoreceptors, thereby blocking a system that inhibits the parasympathetic response.
Atropine is incapacitating at doses of 10 to 20 mg per person. Its LD50 is estimated to be 453 mg per person (by mouth) with a probit slope of 1.8.
The antidote to atropine is physostigmine or pilocarpine.
A common mnemonic used to describe the physiologic manifestations of atropine overdose is: "hot as a hare, blind as a bat, dry as a bone, red as a beet, and mad as a hatter". These associations reflect the specific changes of warm, dry skin from decreased sweating, blurry vision, decreased lacrimation, vasodilation, and central nervous system effects on muscarinic receptors, type 4 and 5. This set of symptoms is known as anticholinergic toxidrome, and may also be caused by other drugs with anticholinergic effects, such as hyoscine hydrobromide (scopolamine), diphenhydramine, phenothiazine antipsychotics and benztropine.
Contraindications
It is generally contraindicated in people with glaucoma, pyloric stenosis, or prostatic hypertrophy, except in doses ordinarily used for preanesthesia.
Chemistry
Atropine, a tropane alkaloid, is an enantiomeric mixture of d-hyoscyamine and l-hyoscyamine, with most of its physiological effects due to l-hyoscyamine, the 3(S)-endo isomer of atropine. Its pharmacological effects are due to binding to muscarinic acetylcholine receptors. It is an antimuscarinic agent. Significant levels are achieved in the CNS within 30 minutes to 1 hour and disappear rapidly from the blood with a half-life of 2 hours. About 60% is excreted unchanged in the urine, and most of the rest appears in the urine as hydrolysis and conjugation products. Noratropine (24%), atropine-N-oxide (15%), tropine (2%), and tropic acid (3%) appear to be the major metabolites, while 50% of the administered dose is excreted as apparently unchanged atropine. No conjugates were detectable. Evidence that atropine is present as (+)-hyoscyamine was found, suggesting that stereoselective metabolism of atropine probably occurs. Effects on the iris and ciliary muscle may persist for longer than 72 hours.
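Given the 2-hour blood half-life stated above, the blood level falls by simple exponential decay; a short Python illustration:

half_life_h = 2.0
for t in (1, 2, 4, 6):
    remaining = 0.5 ** (t / half_life_h)
    print(f"after {t} h: {remaining:.0%} of peak blood level")
# roughly 71%, 50%, 25%, and 12% remaining, respectively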
The most common atropine compound used in medicine is atropine sulfate (monohydrate), (C17H23NO3)2·H2SO4·H2O; the full chemical name is 1α H, 5α H-Tropan-3-α ol (±)-tropate(ester), sulfate monohydrate.
Pharmacology
In general, atropine counters the "rest and digest" activity of glands regulated by the parasympathetic nervous system, producing clinical effects such as increased heart rate and delayed gastric emptying. This occurs because atropine is a competitive, reversible antagonist of the muscarinic acetylcholine receptors (acetylcholine being the main neurotransmitter used by the parasympathetic nervous system).
Atropine is a competitive antagonist of the muscarinic acetylcholine receptor types M1, M2, M3, M4 and M5. It is classified as an anticholinergic drug (parasympatholytic).
In cardiac uses, it works as a nonselective muscarinic acetylcholinergic antagonist, increasing firing of the sinoatrial (SA) node and conduction through the atrioventricular (AV) node of the heart, opposing the actions of the vagus nerve, blocking acetylcholine receptor sites, and decreasing bronchial secretions.
In the eye, atropine induces mydriasis by blocking the contraction of the circular pupillary sphincter muscle, which is normally stimulated by acetylcholine release, thereby allowing the radial iris dilator muscle to contract and dilate the pupil. Atropine induces cycloplegia by paralyzing the ciliary muscles, whose action inhibits accommodation to allow accurate refraction in children, helps to relieve pain associated with iridocyclitis, and treats ciliary block (malignant) glaucoma.
The vagus (parasympathetic) nerves that innervate the heart release acetylcholine (ACh) as their primary neurotransmitter. ACh binds to muscarinic receptors (M2) that are found principally on cells comprising the sinoatrial (SA) and atrioventricular (AV) nodes. Muscarinic receptors are coupled to the Gi subunit; therefore, vagal activation decreases cAMP. Gi-protein activation also leads to the activation of KACh channels that increase potassium efflux and hyperpolarizes the cells.
Increases in vagal activities to the SA node decrease the firing rate of the pacemaker cells by decreasing the slope of the pacemaker potential (phase 4 of the action potential); this decreases heart rate (negative chronotropy). The change in phase 4 slope results from alterations in potassium and calcium currents, as well as the slow-inward sodium current that is thought to be responsible for the pacemaker current (If). By hyperpolarizing the cells, vagal activation increases the cell's threshold for firing, which contributes to the reduction in the firing rate. Similar electrophysiological effects also occur at the AV node; however, in this tissue, these changes are manifested as a reduction in impulse conduction velocity through the AV node (negative dromotropy). In the resting state, there is a large degree of vagal tone in the heart, which is responsible for low resting heart rates.
There is also some vagal innervation of the atrial muscle, and to a much lesser extent, the ventricular muscle. Vagus activation, therefore, results in modest reductions in atrial contractility (inotropy) and even smaller decreases in ventricular contractility.
Muscarinic receptor antagonists bind to muscarinic receptors thereby preventing ACh from binding to and activating the receptor. By blocking the actions of ACh, muscarinic receptor antagonists very effectively block the effects of vagal nerve activity on the heart. By doing so, they increase heart rate and conduction velocity.
History
The name atropine was coined in the 19th century, when pure extracts from the belladonna plant Atropa belladonna were first made. The medicinal use of preparations from plants in the nightshade family is much older however. Mandragora (mandrake) was described by Theophrastus in the fourth century B.C. for the treatment of wounds, gout, and sleeplessness, and as a love potion. By the first century A.D. Dioscorides recognized wine of mandrake as an anaesthetic for treatment of pain or sleeplessness, to be given before surgery or cautery. The use of nightshade preparations for anesthesia, often in combination with opium, persisted throughout the Roman and Islamic Empires and continued in Europe until superseded in the 19th century by modern anesthetics.
Atropine-rich extracts from the Egyptian henbane plant (another nightshade) were used by Cleopatra in the last century B.C. to dilate the pupils of her eyes, in the hope that she would appear more alluring. Likewise in the Renaissance, women used the juice of the berries of the nightshade Atropa belladonna to enlarge their pupils for cosmetic reasons. This practice resumed briefly in the late nineteenth and early twentieth century in Paris.
The pharmacological study of belladonna extracts was begun by the German chemist Friedlieb Ferdinand Runge (1795–1867). In 1831, the German pharmacist Heinrich F. G. Mein (1799-1864) succeeded in preparing a pure crystalline form of the active substance, which was named atropine. The substance was first synthesized by German chemist Richard Willstätter in 1901.
Natural sources
Atropine is found in many members of the family Solanaceae. The most commonly found sources are Atropa belladonna (the deadly nightshade), Datura innoxia, D. wrightii, D. metel, and D. stramonium. Other sources include members of the genera Brugmansia (angel's trumpets) and Hyoscyamus.
Synthesis
Atropine can be synthesized by the reaction of tropine with tropic acid in the presence of hydrochloric acid.
Biosynthesis
The biosynthesis of atropine starting from l-phenylalanine first undergoes a transamination forming phenylpyruvic acid which is then reduced to phenyl-lactic acid. Coenzyme A then couples phenyl-lactic acid with tropine forming littorine, which then undergoes a radical rearrangement initiated with a P450 enzyme forming hyoscyamine aldehyde. A dehydrogenase then reduces the aldehyde to a primary alcohol making (−)-hyoscyamine, which upon racemization forms atropine.
Name
The species name "belladonna" ('beautiful woman' in Italian) comes from the original use of deadly nightshade to dilate the pupils of the eyes for cosmetic effect. Both atropine and the genus name for deadly nightshade derive from Atropos, one of the three Fates who, according to Greek mythology, chose how a person was to die.
| Biology and health sciences | Anesthetics | Health |
145162 | https://en.wikipedia.org/wiki/Parallel%20computing | Parallel computing | Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
In computer science, parallelism and concurrency are two different things: a parallel program uses multiple CPU cores, each core performing a task independently. On the other hand, concurrency enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. threads) without necessarily completing each one. A program can have both, neither or a combination of parallelism and concurrency characteristics.
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks.
In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that the speed-up is limited by the fraction of the program's runtime that can be parallelized.
Background
Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed.
Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural and engineering sciences, such as meteorology. This led to the design of parallel hardware and software, as well as high performance computing.
Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. An increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle (proportional to the number of transistors whose inputs change), V is voltage, and F is the processor frequency (cycles per second). Increases in frequency increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.
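A quick numerical check of this relationship (arbitrary units; the C, V, and F values are made up for illustration):

def chip_power(C, V, F):
    # P = C * V^2 * F, as given above.
    return C * V**2 * F

base = chip_power(1.0, 1.0, 1.0)
print(chip_power(1.0, 1.0, 2.0) / base)  # doubling F alone doubles power
print(chip_power(1.0, 0.5, 1.0) / base)  # halving V cuts power to a quarter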
To deal with the problem of power consumption and overheating, the major central processing unit (CPU or processor) manufacturers started to produce power-efficient processors with multiple cores. The core is the computing unit of the processor, and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers. Thus parallelization of serial programs has become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10+ core processors. By 2023 some processors had over a hundred cores. Some designs have a mix of performance and efficiency cores (such as ARM's big.LITTLE design) due to thermal and design constraints. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months.
An operating system can ensure that different tasks and user programs are run in parallel on the available cores. However, for a serial software program to take full advantage of the multi-core architecture the programmer needs to restructure and parallelize the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelize their software code to take advantage of the increasing computing power of multicore architectures.
Relevant laws
Assume that a task has two independent parts, A and B. Part B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this only reduces the time for the whole computation by a little. In contrast, one may need to perform less work to make part A twice as fast. This will make the computation much faster than by optimizing part B, even though part B's speedup is greater by ratio (5 times versus 2 times).
Optimally, the speedup from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements.
The maximum potential speedup of an overall system can be calculated by Amdahl's law. Amdahl's Law indicates that optimal performance improvement is achieved by balancing enhancements to both parallelizable and non-parallelizable components of a task. Furthermore, it reveals that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond a certain point.
Amdahl's Law has limitations, including assumptions of fixed workload, neglect of inter-process communication and synchronization overheads, and a primary focus on the computational aspect, ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads.
Gustafson's law and Universal Scalability Law give a more realistic assessment of the parallel performance.
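A minimal sketch of both laws in Python (p is the parallelizable fraction of the work, s the number of processors; the p = 0.95 value is an arbitrary illustration):

def amdahl_speedup(p, s):
    # Speedup of a fixed-size problem: S = 1 / ((1 - p) + p / s).
    return 1.0 / ((1.0 - p) + p / s)

def gustafson_speedup(p, s):
    # Scaled speedup when the problem grows with s: S = (1 - p) + p * s.
    return (1.0 - p) + p * s

for s in (2, 8, 64, 1024):
    print(s, round(amdahl_speedup(0.95, s), 1), round(gustafson_speedup(0.95, s), 1))
# Amdahl's speedup saturates near 1 / (1 - p) = 20; Gustafson's keeps growing.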
Dependencies
Understanding data dependencies is fundamental in implementing parallel algorithms. No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of just a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel.
Let Pi and Pj be two program segments. Bernstein's conditions describe when the two are independent and can be executed in parallel. For Pi, let Ii be all of the input variables and Oi the output variables, and likewise for Pj. Pi and Pj are independent if they satisfy Ij ∩ Oi = ∅, Ii ∩ Oj = ∅, and Oi ∩ Oj = ∅.
Violation of the first condition introduces a flow dependency, corresponding to the first segment producing a result used by the second segment. The second condition represents an anti-dependency, when the second segment produces a variable needed by the first segment. The third and final condition represents an output dependency: when two segments write to the same location, the result comes from the logically last executed segment.
Consider the following functions, which demonstrate several kinds of dependencies:
1: function Dep(a, b)
2: c := a * b
3: d := 3 * c
4: end function
In this example, instruction 3 cannot be executed before (or even in parallel with) instruction 2, because instruction 3 uses a result from instruction 2. It violates condition 1, and thus introduces a flow dependency.
1: function NoDep(a, b)
2: c := a * b
3: d := 3 * b
4: e := a + b
5: end function
In this example, there are no dependencies between the instructions, so they can all be run in parallel.
Bernstein's conditions do not allow memory to be shared between different processes. For that, some means of enforcing an ordering between accesses is necessary, such as semaphores, barriers or some other synchronization method.
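Bernstein's conditions are straightforward to check mechanically as set intersections. A small Python sketch applying them to the two functions above:

def independent(reads1, writes1, reads2, writes2):
    # Bernstein's conditions: no flow, anti-, or output dependency.
    return (not (reads2 & writes1)
            and not (reads1 & writes2)
            and not (writes1 & writes2))

# Dep: line 2 (c := a * b) versus line 3 (d := 3 * c)
print(independent({"a", "b"}, {"c"}, {"c"}, {"d"}))  # False: flow dependency on c

# NoDep: line 2 (c := a * b) versus line 3 (d := 3 * b)
print(independent({"a", "b"}, {"c"}, {"b"}, {"d"}))  # True: independent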
Race conditions, mutual exclusion, synchronization, and parallel slowdown
Subtasks in a parallel program are often called threads. Some parallel computer architectures use smaller, lightweight versions of threads known as fibers, while others use bigger versions known as processes. However, "threads" is generally accepted as a generic term for subtasks. Threads will often need synchronized access to an object or other resource, for example when they must update a variable that is shared between them. Without synchronization, the instructions between the two threads may be interleaved in any order. For example, consider the following program, in which threads A and B each increment a shared variable V:

Thread A:
1A: Read variable V
2A: Add 1 to variable V
3A: Write back to variable V

Thread B:
1B: Read variable V
2B: Add 1 to variable V
3B: Write back to variable V
If instruction 1B is executed between 1A and 3A, or if instruction 1A is executed between 1B and 3B, the program will produce incorrect data. This is known as a race condition. The programmer must use a lock to provide mutual exclusion. A lock is a programming language construct that allows one thread to take control of a variable and prevent other threads from reading or writing it, until that variable is unlocked. The thread holding the lock is free to execute its critical section (the section of a program that requires exclusive access to some variable), and to unlock the data when it is finished. Therefore, to guarantee correct program execution, the above program can be rewritten to use locks:

Thread A:
1A: Lock variable V
2A: Read variable V
3A: Add 1 to variable V
4A: Write back to variable V
5A: Unlock variable V

Thread B:
1B: Lock variable V
2B: Read variable V
3B: Add 1 to variable V
4B: Write back to variable V
5B: Unlock variable V
One thread will successfully lock variable V, while the other thread will be locked out—unable to proceed until V is unlocked again. This guarantees correct execution of the program. Locks may be necessary to ensure correct program execution when threads must serialize access to resources, but their use can greatly slow a program and may affect its reliability.
Locking multiple variables using non-atomic locks introduces the possibility of program deadlock. An atomic lock locks multiple variables all at once. If it cannot lock all of them, it does not lock any of them. If two threads each need to lock the same two variables using non-atomic locks, it is possible that one thread will lock one of them and the second thread will lock the second variable. In such a case, neither thread can complete, and deadlock results.
Many parallel programs require that their subtasks act in synchrony. This requires the use of a barrier. Barriers are typically implemented using a lock or a semaphore. One class of algorithms, known as lock-free and wait-free algorithms, altogether avoids the use of locks and barriers. However, this approach is generally difficult to implement and requires correctly designed data structures.
Not all parallelization results in speed-up. Generally, as a task is split up into more and more threads, those threads spend an ever-increasing portion of their time communicating with each other or waiting on each other for access to resources. Once the overhead from resource contention or communication dominates the time spent on other computation, further parallelization (that is, splitting the workload over even more threads) increases rather than decreases the amount of time required to finish. This problem, known as parallel slowdown, can be improved in some cases by software analysis and redesign.
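A toy cost model makes this effect concrete (the model and constants are assumptions for illustration, not measurements): let the runtime be work/n for the computation plus c*n for communication overhead that grows with the thread count n.

work, c = 1000.0, 1.0
for n in (1, 4, 16, 32, 64, 128):
    runtime = work / n + c * n
    print(n, round(runtime, 1))
# Runtime falls, bottoms out (here near n = 32), then rises again:
# the parallel slowdown described above.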
Fine-grained, coarse-grained, and embarrassing parallelism
Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second, and it exhibits embarrassing parallelism if they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize.
Flynn's taxonomy
Michael J. Flynn created one of the earliest classification systems for parallel (and sequential) computers and programs, now known as Flynn's taxonomy. Flynn classified programs and computers by whether they were operating using a single set or multiple sets of instructions, and whether or not those instructions were using a single set or multiple sets of data.
The single-instruction-single-data (SISD) classification is equivalent to an entirely sequential program. The single-instruction-multiple-data (SIMD) classification is analogous to doing the same operation repeatedly over a large data set. This is commonly done in signal processing applications. Multiple-instruction-single-data (MISD) is a rarely used classification. While computer architectures to deal with this were devised (such as systolic arrays), few applications that fit this class materialized. Multiple-instruction-multiple-data (MIMD) programs are by far the most common type of parallel programs.
According to David A. Patterson and John L. Hennessy, "Some machines are hybrids of these categories, of course, but this classic model has survived because it is simple, easy to understand, and gives a good first approximation. It is also—perhaps because of its understandability—the most widely used scheme."
Disadvantages
Parallel computing can incur significant overhead in practice, primarily due to the costs associated with merging data from multiple processes. Specifically, inter-process communication and synchronization can lead to overheads that are substantially higher—often by two or more orders of magnitude—compared to processing the same data on a single thread. Therefore, the overall improvement should be carefully evaluated.
Granularity
Bit-level parallelism
From the advent of very-large-scale integration (VLSI) computer-chip fabrication technology in the 1970s until about 1986, speed-up in computer architecture was driven by doubling computer word size—the amount of information the processor can manipulate per cycle. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an 8-bit processor must add two 16-bit integers, the processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor would be able to complete the operation with a single instruction.
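A sketch of that two-step addition, simulating 8-bit registers in Python (the operand values are illustrative):

    def add16_on_8bit(a, b):
        lo = (a & 0xFF) + (b & 0xFF)       # standard add on the low bytes
        carry = lo >> 8                    # carry bit out of the low byte
        hi = (a >> 8) + (b >> 8) + carry   # add-with-carry on the high bytes
        return ((hi & 0xFF) << 8) | (lo & 0xFF)

    print(hex(add16_on_8bit(0x12FF, 0x0001)))  # 0x1300: the carry propagates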
Historically, 4-bit microprocessors were replaced with 8-bit, then 16-bit, then 32-bit microprocessors. This trend generally came to an end with the introduction of 32-bit processors, which were a standard in general-purpose computing for two decades. Not until the early 2000s, with the advent of x86-64 architectures, did 64-bit processors become commonplace.
Instruction-level parallelism
A computer program is, in essence, a stream of instructions executed by a processor. Without instruction-level parallelism, a processor can only issue less than one instruction per clock cycle (IPC < 1). These processors are known as subscalar processors. These instructions can be re-ordered and combined into groups which are then executed in parallel without changing the result of the program. This is known as instruction-level parallelism. Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s.
All modern processors have multi-stage instruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with an N-stage pipeline can have up to N different instructions at different stages of completion and thus can issue one instruction per clock cycle (IPC = 1). These processors are known as scalar processors. The canonical example of a pipelined processor is a RISC processor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB). The Pentium 4 processor had a 35-stage pipeline.
Most modern processors also have multiple execution units. They usually combine this feature with pipelining and thus can issue more than one instruction per clock cycle (IPC > 1). These processors are known as superscalar processors. Superscalar processors differ from multi-core processors in that the several execution units are not entire processors (i.e. processing units). Instructions can be grouped together only if there is no data dependency between them. Scoreboarding and the Tomasulo algorithm (which is similar to scoreboarding but makes use of register renaming) are two of the most common techniques for implementing out-of-order execution and instruction-level parallelism.
Task parallelism
Task parallelism is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves the decomposition of a task into sub-tasks and then allocating each sub-task to a processor for execution. The processors would then execute these sub-tasks concurrently and often cooperatively. Task parallelism does not usually scale with the size of a problem.
Superword level parallelism
Superword level parallelism is a vectorization technique based on loop unrolling and basic block vectorization. It is distinct from loop vectorization algorithms in that it can exploit parallelism of inline code, such as manipulating coordinates, color channels or in loops unrolled by hand.
Hardware
Memory and communication
Main memory in a parallel computer is either shared memory (shared between all processing elements in a single address space), or distributed memory (in which each processing element has its own local address space). Distributed memory refers to the fact that the memory is logically distributed, but often implies that it is physically distributed as well. Distributed shared memory and memory virtualization combine the two approaches, where the processing element has its own local memory and access to the memory on non-local processors. Accesses to local memory are typically faster than accesses to non-local memory. On supercomputers, a distributed shared memory space can be implemented using a programming model such as PGAS. This model allows processes on one compute node to transparently access the remote memory of another compute node. All compute nodes are also connected to an external shared memory system via a high-speed interconnect such as InfiniBand; this external shared memory system is known as a burst buffer, which is typically built from arrays of non-volatile memory physically distributed across multiple I/O nodes.
Computer architectures in which each element of main memory can be accessed with equal latency and bandwidth are known as uniform memory access (UMA) systems. Typically, that can be achieved only by a shared memory system, in which the memory is not physically distributed. A system that does not have this property is known as a non-uniform memory access (NUMA) architecture. Distributed memory systems have non-uniform memory access.
Computer systems make use of caches—small and fast memories located close to the processor which store temporary copies of memory values (nearby in both the physical and logical sense). Parallel computer systems have difficulties with caches that may store the same value in more than one location, with the possibility of incorrect program execution. These computers require a cache coherency system, which keeps track of cached values and strategically purges them, thus ensuring correct program execution. Bus snooping is one of the most common methods for keeping track of which values are being accessed (and thus should be purged). Designing large, high-performance cache coherence systems is a very difficult problem in computer architecture. As a result, shared memory computer architectures do not scale as well as distributed memory systems do.
Processor–processor and processor–memory communication can be implemented in hardware in several ways, including via shared (either multiported or multiplexed) memory, a crossbar switch, a shared bus or an interconnect network of a myriad of topologies including star, ring, tree, hypercube, fat hypercube (a hypercube with more than one processor at a node), or n-dimensional mesh.
Parallel computers based on interconnected networks need to have some kind of routing to enable the passing of messages between nodes that are not directly connected. The medium used for communication between the processors is likely to be hierarchical in large multiprocessor machines.
Classes of parallel computers
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common.
Multi-core computing
A multi-core processor is a processor that includes multiple processing units (called "cores") on the same chip. This processor differs from a superscalar processor, which includes multiple execution units and can issue multiple instructions per clock cycle from one instruction stream (thread); in contrast, a multi-core processor can issue multiple instructions per clock cycle from multiple instruction streams. IBM's Cell microprocessor, designed for use in the Sony PlayStation 3, is a prominent multi-core processor. Each core in a multi-core processor can potentially be superscalar as well—that is, on every clock cycle, each core can issue multiple instructions from one thread.
Simultaneous multithreading (of which Intel's Hyper-Threading is the best known) was an early form of pseudo-multi-coreism. A processor capable of concurrent multithreading includes multiple execution units in the same processing unit—that is, it has a superscalar architecture—and can issue multiple instructions per clock cycle from multiple threads. Temporal multithreading, on the other hand, includes a single execution unit in the same processing unit and can issue one instruction at a time from multiple threads.
Symmetric multiprocessing
A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors. Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists.
Distributed computing
A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel.
Cluster computing
A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is a cluster implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network. Beowulf technology was originally developed by Thomas Sterling and Donald Becker. 87% of all Top500 supercomputers are clusters. The remainder are massively parallel processors, explained below.
Because grid computing systems (described below) can easily handle embarrassingly parallel problems, modern clusters are typically designed to handle more difficult problems—problems that require nodes to share intermediate results with each other more often. This requires a high bandwidth and, more importantly, a low-latency interconnection network. Many historic and current supercomputers use customized high-performance network hardware specifically designed for cluster computing, such as the Cray Gemini network. As of 2014, most current supercomputers use some off-the-shelf standard network hardware, often Myrinet, InfiniBand, or Gigabit Ethernet.
Massively parallel computing
A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks (whereas clusters use commodity hardware for networking). MPPs also tend to be larger than clusters, typically having "far more" than 100 processors. In an MPP, "each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect."
IBM's Blue Gene/L, the fifth fastest supercomputer in the world according to the June 2009 TOP500 ranking, is an MPP.
Grid computing
Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, grid computing typically deals only with embarrassingly parallel problems.
Most grid computing applications use middleware (software that sits between the operating system and the application to manage network resources and standardize the software interface). The most common grid computing middleware is the Berkeley Open Infrastructure for Network Computing (BOINC). Often volunteer computing software makes use of "spare cycles", performing computations at times when a computer is idling.
Cloud computing
The ubiquity of the Internet brought the possibility of large-scale cloud computing.
Specialized parallel computers
Within parallel computing, there are specialized parallel devices that remain niche areas of interest. While not domain-specific, they tend to be applicable to only a few classes of parallel problems.
Reconfigurable computing with field-programmable gate arrays
Reconfigurable computing is the use of a field-programmable gate array (FPGA) as a co-processor to a general-purpose computer. An FPGA is, in essence, a computer chip that can rewire itself for a given task.
FPGAs can be programmed with hardware description languages such as VHDL or Verilog. Several vendors have created C to HDL languages that attempt to emulate the syntax and semantics of the C programming language, with which most programmers are familiar. The best known C to HDL languages are Mitrion-C, Impulse C, and Handel-C. Specific subsets of SystemC based on C++ can also be used for this purpose.
AMD's decision to open its HyperTransport technology to third-party vendors has become the enabling technology for high-performance reconfigurable computing. According to Michael R. D'Amour, Chief Operating Officer of DRC Computer Corporation, "when we first walked into AMD, they called us 'the socket stealers.' Now they call us their partners."
General-purpose computing on graphics processing units (GPGPU)
General-purpose computing on graphics processing units (GPGPU) is a fairly recent trend in computer engineering research. GPUs are co-processors that have been heavily optimized for computer graphics processing. Computer graphics processing is a field dominated by data parallel operations—particularly linear algebra matrix operations.
In the early days, GPGPU programs used the normal graphics APIs for executing programs. However, several new programming languages and platforms have been built to do general purpose computation on GPUs with both Nvidia and AMD releasing programming environments with CUDA and Stream SDK respectively. Other GPU programming languages include BrookGPU, PeakStream, and RapidMind. Nvidia has also released specific products for computation in their Tesla series. The technology consortium Khronos Group has released the OpenCL specification, which is a framework for writing programs that execute across platforms consisting of CPUs and GPUs. AMD, Apple, Intel, Nvidia and others are supporting OpenCL.
Application-specific integrated circuits
Several application-specific integrated circuit (ASIC) approaches have been devised for dealing with parallel applications.
Because an ASIC is (by definition) specific to a given application, it can be fully optimized for that application. As a result, for a given application, an ASIC tends to outperform a general-purpose computer. However, ASICs are created by UV photolithography. This process requires a mask set, which can be extremely expensive. A mask set can cost over a million US dollars. (The smaller the transistors required for the chip, the more expensive the mask will be.) Meanwhile, performance increases in general-purpose computing over time (as described by Moore's law) tend to wipe out these gains in only one or two chip generations. High initial cost, and the tendency to be overtaken by Moore's-law-driven general-purpose computing, has rendered ASICs unfeasible for most parallel computing applications. However, some have been built. One example is the PFLOPS RIKEN MDGRAPE-3 machine which uses custom ASICs for molecular dynamics simulation.
Vector processors
A vector processor is a CPU or computer system that can execute the same instruction on large sets of data. Vector processors have high-level operations that work on linear arrays of numbers or vectors. An example vector operation is A = B × C, where A, B, and C are each 64-element vectors of 64-bit floating-point numbers. They are closely related to Flynn's SIMD classification.
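The A = B × C example, sketched here with NumPy, which dispatches to SIMD vector instructions where the hardware provides them:

    import numpy as np

    B = np.random.rand(64)   # 64-element vectors of 64-bit floats,
    C = np.random.rand(64)   # as in the example above
    A = B * C                # one high-level operation, no explicit loop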
Cray computers became famous for their vector-processing computers in the 1970s and 1980s. However, vector processors—both as CPUs and as full computer systems—have generally disappeared. Modern processor instruction sets do include some vector processing instructions, such as with Freescale Semiconductor's AltiVec and Intel's Streaming SIMD Extensions (SSE).
Software
Parallel programming languages
Concurrent programming languages, libraries, APIs, and parallel programming models (such as algorithmic skeletons) have been created for programming parallel computers. These can generally be divided into classes based on the assumptions they make about the underlying memory architecture—shared memory, distributed memory, or shared distributed memory. Shared memory programming languages communicate by manipulating shared memory variables. Distributed memory uses message passing. POSIX Threads and OpenMP are two of the most widely used shared memory APIs, whereas Message Passing Interface (MPI) is the most widely used message-passing system API. One concept used in programming parallel programs is the future concept, where one part of a program promises to deliver a required datum to another part of a program at some future time.
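A minimal sketch of the future concept using Python's standard library: submit() returns immediately with a Future, a promise of a datum that another part of the program can collect later with result().

    from concurrent.futures import ThreadPoolExecutor

    def expensive(x):
        return x * x

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(expensive, 21)   # runs in another thread
        # ... unrelated work can proceed here ...
        print(fut.result())                # blocks only if not yet done: 441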
Efforts to standardize parallel programming include an open standard called OpenHMPP for hybrid multi-core parallel programming. The OpenHMPP directive-based programming model offers a syntax to efficiently offload computations on hardware accelerators and to optimize data movement to/from the hardware memory using remote procedure calls.
The rise of consumer GPUs has led to support for compute kernels, either in graphics APIs (referred to as compute shaders), in dedicated APIs (such as OpenCL), or in other language extensions.
Automatic parallelization
Automatic parallelization of a sequential program by a compiler is the "holy grail" of parallel computing, especially with the aforementioned limit of processor frequency. Despite decades of work by compiler researchers, automatic parallelization has had only limited success.
Mainstream parallel programming languages remain either explicitly parallel or (at best) partially implicit, in which a programmer gives the compiler directives for parallelization. A few fully implicit parallel programming languages exist—SISAL, Parallel Haskell, SequenceL, SystemC (for FPGAs), Mitrion-C, VHDL, and Verilog.
Application checkpointing
As a computer system grows in complexity, the mean time between failures usually decreases. Application checkpointing is a technique whereby the computer system takes a "snapshot" of the application—a record of all current resource allocations and variable states, akin to a core dump; this information can be used to restore the program if the computer should fail. Application checkpointing means that the program has to restart from only its last checkpoint rather than the beginning. While checkpointing provides benefits in a variety of situations, it is especially useful in highly parallel systems with a large number of processors used in high performance computing.
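A minimal checkpointing sketch in Python (file name, interval, and state layout are all hypothetical): the loop periodically snapshots its state so a restart resumes from the last checkpoint rather than from the beginning.

    import os, pickle

    CKPT = "state.ckpt"

    def load_state():
        if os.path.exists(CKPT):           # resume from the last snapshot
            with open(CKPT, "rb") as f:
                return pickle.load(f)
        return {"i": 0, "total": 0}        # otherwise start fresh

    state = load_state()
    for i in range(state["i"], 1_000_000):
        state["total"] += i
        if i % 100_000 == 0:               # checkpoint interval
            state["i"] = i + 1
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)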
Algorithmic methods
As parallel computers become larger and faster, we are now able to solve problems that had previously taken too long to run. Fields as varied as bioinformatics (for protein folding and sequence analysis) and economics have taken advantage of parallel computing. Common types of problems in parallel computing applications include:
Dense linear algebra
Sparse linear algebra
Spectral methods (such as Cooley–Tukey fast Fourier transform)
N-body problems (such as Barnes–Hut simulation)
Structured grid problems (such as Lattice Boltzmann methods)
Unstructured grid problems (such as found in finite element analysis)
Monte Carlo method
Combinational logic (such as brute-force cryptographic techniques)
Graph traversal (such as sorting algorithms)
Dynamic programming
Branch and bound methods
Graphical models (such as detecting hidden Markov models and constructing Bayesian networks)
HBJ model, a concise message-passing model
Finite-state machine simulation
Fault tolerance
Parallel computing can also be applied to the design of fault-tolerant computer systems, particularly via lockstep systems performing the same operation in parallel. This provides redundancy in case one component fails, and also allows automatic error detection and error correction if the results differ. These methods can be used to help prevent single-event upsets caused by transient errors. Although additional measures may be required in embedded or specialized systems, this method can provide a cost-effective approach to achieve n-modular redundancy in commercial off-the-shelf systems.
History
The origins of true (MIMD) parallelism go back to Luigi Federico Menabrea and his Sketch of the Analytic Engine Invented by Charles Babbage (Patterson and Hennessy, p. 753).
In 1957, Compagnie des Machines Bull announced the first computer architecture specifically designed for parallelism, the Gamma 60. It utilized a fork-join model and a "Program Distributor" to dispatch and collect data to and from independent processing units connected to a central memory.
In April 1958, Stanley Gill (Ferranti) discussed parallel programming and the need for branching and waiting. Also in 1958, IBM researchers John Cocke and Daniel Slotnick discussed the use of parallelism in numerical calculations for the first time. Burroughs Corporation introduced the D825 in 1962, a four-processor computer that accessed up to 16 memory modules through a crossbar switch. In 1967, Amdahl and Slotnick published a debate about the feasibility of parallel processing at the American Federation of Information Processing Societies Conference. It was during this debate that Amdahl's law was coined to define the limit of speed-up due to parallelism.
In 1969, Honeywell introduced its first Multics system, a symmetric multiprocessor system capable of running up to eight processors in parallel. C.mmp, a multi-processor project at Carnegie Mellon University in the 1970s, was among the first multiprocessors with more than a few processors. The first bus-connected multiprocessor with snooping caches was the Synapse N+1 in 1984.
SIMD parallel computers can be traced back to the 1970s. The motivation behind early SIMD computers was to amortize the gate delay of the processor's control unit over multiple instructions. In 1964, Slotnick had proposed building a massively parallel computer for the Lawrence Livermore National Laboratory. His design, funded by the US Air Force, was the earliest SIMD parallel-computing effort, ILLIAC IV. The key to its design was a fairly high parallelism, with up to 256 processors, which allowed the machine to work on large datasets in what would later be known as vector processing. However, ILLIAC IV was called "the most infamous of supercomputers", because the project was only one-fourth completed, but took 11 years and cost almost four times the original estimate. When it was finally ready to run its first real application in 1976, it was outperformed by existing commercial supercomputers such as the Cray-1.
Biological brain as massively parallel computer
In the early 1970s, at the MIT Computer Science and Artificial Intelligence Laboratory, Marvin Minsky and Seymour Papert started developing the Society of Mind theory, which views the biological brain as a massively parallel computer. In 1986, Minsky published The Society of Mind, which claims that "mind is formed from many little agents, each mindless by itself". The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky says that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.
Similar models (which also view the biological brain as a massively parallel computer, i.e., the brain is made up of a constellation of independent or semi-independent agents) were also described by:
Thomas R. Blakeslee,
Michael S. Gazzaniga,
Robert E. Ornstein,
Ernest Hilgard,
Michio Kaku,
George Ivanovich Gurdjieff,
Neurocluster Brain Model.
| Technology | Computer science | null |
145197 | https://en.wikipedia.org/wiki/Metallic%20hydrogen | Metallic hydrogen | Metallic hydrogen is a phase of hydrogen in which it behaves like an electrical conductor. This phase was predicted in 1935 on theoretical grounds by Eugene Wigner and Hillard Bell Huntington.
At high pressures and temperatures, metallic hydrogen can exist as a partial liquid rather than a solid, and researchers think it might be present in large quantities in the hot and gravitationally compressed interiors of Jupiter and Saturn, as well as in some exoplanets.
Theoretical predictions
Hydrogen under pressure
Though often placed at the top of the alkali metal column in the periodic table, hydrogen does not, under ordinary conditions, exhibit the properties of an alkali metal. Instead, it forms diatomic molecules, similar to halogens and some nonmetals in the second period of the periodic table, such as nitrogen and oxygen. Diatomic hydrogen is a gas that, at atmospheric pressure, liquefies and solidifies only at very low temperature (20 K and 14 K respectively).
In 1935, physicists Eugene Wigner and Hillard Bell Huntington predicted that under an immense pressure of around , hydrogen would display metallic properties: instead of discrete molecules (which consist of two electrons bound between two protons), a bulk phase would form with a solid lattice of protons and the electrons delocalized throughout. Since then, producing metallic hydrogen in the laboratory has been described as "the holy grail of high-pressure physics".
The initial prediction about the amount of pressure needed was eventually shown to be too low. Since the first work by Wigner and Huntington, the more modern theoretical calculations point toward higher but potentially achievable metallization pressures of around .
Liquid metallic hydrogen
Helium-4 is a liquid at normal pressure near absolute zero, a consequence of its high zero-point energy (ZPE). The ZPE of protons in a dense state is also high, and a decline in the ordering energy (relative to the ZPE) is expected at high pressures. Arguments have been advanced by Neil Ashcroft and others that there is a melting point maximum in compressed hydrogen, but also that there might be a range of densities, at pressures around 400 GPa, where hydrogen would be a liquid metal, even at low temperatures.
Geng predicted that the ZPE of protons indeed lowers the melting temperature of hydrogen to a minimum of at pressures of .
Within this flat region there might be an elemental mesophase intermediate between the liquid and solid state, which could be metastably stabilized down to low temperature and enter a supersolid state.
Superconductivity
In 1968, Neil Ashcroft suggested that metallic hydrogen might be a superconductor, up to room temperature (). This hypothesis is based on an expected strong coupling between conduction electrons and lattice vibrations.
As a rocket propellant
Metastable metallic hydrogen may have potential as a highly efficient rocket propellant; the metallic form would be stored, and the energy of its decompression and conversion to the diatomic gaseous form when released through a nozzle used to generate thrust, with a theoretical specific impulse of up to 1700 seconds (for reference, the current most efficient chemical rocket propellants have a specific impulse of less than 500 s), although a metastable form suitable for mass-production and conventional high-volume storage may not exist. Another significant issue is the heat of the reaction, which at over 6000 K is too high for any known engine materials to be used. This would necessitate diluting the metallic hydrogen with water or liquid hydrogen, a mixture that would still provide a significant performance boost over current propellants.
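A back-of-the-envelope comparison in Python, using the specific-impulse figures quoted above and the standard conversion ve = Isp · g0:

    g0 = 9.80665  # standard gravity, m/s^2

    for name, isp_s in [("metallic hydrogen (theoretical)", 1700),
                        ("best chemical propellants", 500)]:
        print(name, round(isp_s * g0), "m/s")
    # ~16700 m/s versus ~4900 m/s effective exhaust velocity: more than
    # a threefold gain, before the engine-temperature problem is addressed.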
Possibility of novel types of quantum fluid
Presently known "super" states of matter are superconductors, superfluid liquids and gases, and supersolids. Egor Babaev predicted that if hydrogen and deuterium have liquid metallic states, they might have quantum ordered states that cannot be classified as superconducting or superfluid in the usual sense. Instead, they might represent two possible novel types of quantum fluids: superconducting superfluids and metallic superfluids. Such fluids were predicted to have highly unusual reactions to external magnetic fields and rotations, which might provide a means for experimental verification of Babaev's predictions. It has also been suggested that, under the influence of a magnetic field, hydrogen might exhibit phase transitions from superconductivity to superfluidity and vice versa.
Lithium alloying reduces requisite pressure
In 2009, Zurek et al. predicted that the alloy would be a stable metal at only one quarter of the pressure required to metallize hydrogen, and that similar effects should hold for alloys of type LiHn and possibly "other alkali high-hydride systems", i.e. alloys of type XHn, where X is an alkali metal. This was later verified in AcH8 and LaH10 with Tc approaching 270 K leading to speculation that other compounds may even be stable at mere MPa pressures with room-temperature superconductivity.
Experimental pursuit
Shock-wave compression, 1996
In March 1996, a group of scientists at Lawrence Livermore National Laboratory reported that they had serendipitously produced the first identifiably metallic hydrogen for about a microsecond at temperatures of thousands of kelvins, pressures of over , and densities of approximately . The team did not expect to produce metallic hydrogen, as it was not using solid hydrogen, thought to be necessary, and was working at temperatures above those specified by metallization theory. Previous studies in which solid hydrogen was compressed inside diamond anvils to pressures of up to , did not confirm detectable metallization. The team had sought simply to measure the less extreme electrical conductivity changes they expected. The researchers used a 1960s-era light-gas gun, originally employed in guided missile studies, to shoot an impactor plate into a sealed container containing a half-millimeter thick sample of liquid hydrogen. The liquid hydrogen was in contact with wires leading to a device measuring electrical resistance. The scientists found that, as pressure rose to , the electronic energy band gap, a measure of electrical resistance, fell to almost zero. The band gap of hydrogen in its uncompressed state is about , making it an insulator, but as the pressure increased significantly, the band gap gradually fell to . Because the thermal energy of the fluid (the temperature became about due to compression of the sample) was above , the hydrogen might be considered metallic.
Other experimental research, 1996–2004
Many experiments are continuing in the production of metallic hydrogen in laboratory conditions at static compression and low temperature. Arthur Ruoff and Chandrabhas Narayana from Cornell University in 1998, and later Paul Loubeyre and René LeToullec from Commissariat à l'Énergie Atomique, France in 2002, have shown that at pressures close to those at the center of the Earth () and temperatures of , hydrogen is still not a true alkali metal, because of the non-zero band gap. The quest to see metallic hydrogen in the laboratory at low temperature and static compression continues. Studies are also ongoing on deuterium. Shahriar Badiei and Leif Holmlid from the University of Gothenburg showed in 2004 that condensed metallic states made of excited hydrogen atoms (Rydberg matter) are effective promoters to metallic hydrogen; however, these results are disputed.
Pulsed laser heating experiment, 2008
The theoretically predicted maximum of the melting curve (the prerequisite for liquid metallic hydrogen) was discovered by Shanti Deemyad and Isaac F. Silvera by using pulsed laser heating. Hydrogen-rich molecular silane () was claimed to be metallized and become superconducting by M. I. Eremets et al. This claim is disputed, and their results have not been repeated.
Observation of liquid metallic hydrogen, 2011
In 2011 Eremets and Troyan reported observing the liquid metallic state of hydrogen and deuterium at static pressures of . This claim was questioned by other researchers in 2012.
Z machine, 2015
In 2015, scientists at the Z Pulsed Power Facility announced the creation of metallic deuterium in dense liquid deuterium, observing an electrical insulator-to-conductor transition associated with an increase in optical reflectivity.
Claimed observation of solid metallic hydrogen, 2016
On 5 October 2016, Ranga Dias and Isaac F. Silvera of Harvard University released claims in a pre-print manuscript of experimental evidence that solid metallic hydrogen had been synthesized in the laboratory at a pressure of around using a diamond anvil cell. A revised version was published in Science in 2017.
In the preprint version of the paper, Dias and Silvera write:
In June 2019, a team at the Commissariat à l'énergie atomique et aux énergies alternatives (French Alternative Energies & Atomic Energy Commission) claimed to have created metallic hydrogen at around 425 GPa.
W. Ferreira et al. (including Dias and Silvera) repeated their experiments multiple times after the Science article was published, finally publishing in 2023 and finding metallisation of hydrogen between . This time, the pressure was released to assess the question of metastability. Metallic hydrogen was not found to be metastable to zero pressure.
Experiments on fluid deuterium at the National Ignition Facility, 2018
In August 2018, scientists announced new observations regarding the rapid transformation of fluid deuterium from an insulating to a metallic form below 2000 K. Remarkable agreement is found between the experimental data and the predictions based on quantum Monte Carlo simulations, which is expected to be the most accurate method to date. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
| Physical sciences | Phase transitions | Physics |
145243 | https://en.wikipedia.org/wiki/Limnology | Limnology | Limnology is the study of inland aquatic ecosystems.
The study of limnology includes aspects of the biological, chemical, physical, and geological characteristics of fresh and saline, natural and man-made bodies of water. This includes the study of lakes, reservoirs, ponds, rivers, springs, streams, wetlands, and groundwater. Water systems are often categorized as either running (lotic) or standing (lentic).
Limnology includes the study of the drainage basin, movement of water through the basin and biogeochemical changes that occur en route. A more recent sub-discipline of limnology, termed landscape limnology, studies, manages, and seeks to conserve these ecosystems using a landscape perspective, by explicitly examining connections between an aquatic ecosystem and its drainage basin. Recently, the need to understand global inland waters as part of the Earth system created a sub-discipline called global limnology. This approach considers processes in inland waters on a global scale, like the role of inland aquatic ecosystems in global biogeochemical cycles.
Limnology is closely related to aquatic ecology and hydrobiology, which study aquatic organisms and their interactions with the abiotic (non-living) environment. While limnology has substantial overlap with freshwater-focused disciplines (e.g., freshwater biology), it also includes the study of inland salt lakes.
History
The term limnology was coined by François-Alphonse Forel (1841–1912) who established the field with his studies of Lake Geneva. Interest in the discipline rapidly expanded, and in 1922 August Thienemann (a German zoologist) and Einar Naumann (a Swedish botanist) co-founded the International Society of Limnology (SIL, from Societas Internationalis Limnologiae). Forel's original definition of limnology, "the oceanography of lakes", was expanded to encompass the study of all inland waters, and influenced Benedykt Dybowski's work on Lake Baikal.
Prominent early American limnologists included G. Evelyn Hutchinson and Ed Deevey. At the University of Wisconsin-Madison, Edward A. Birge, Chancey Juday, Charles R. Goldman, and Arthur D. Hasler contributed to the development of the Center for Limnology.
General limnology
Physical properties
Physical properties of aquatic ecosystems are determined by a combination of heat, currents, waves and other seasonal distributions of environmental conditions. The morphometry of a body of water depends on the type of feature (such as a lake, river, stream, wetland, estuary etc.) and the structure of the earth surrounding the body of water. Lakes, for instance, are classified by their formation, and zones of lakes are defined by water depth. River and stream system morphometry is driven by underlying geology of the area as well as the general velocity of the water. Stream morphometry is also influenced by topography (especially slope) as well as precipitation patterns and other factors such as vegetation and land development. Connectivity between streams and lakes relates to the landscape drainage density, lake surface area and lake shape.
Other types of aquatic systems which fall within the study of limnology are estuaries. Estuaries are bodies of water classified by the interaction of a river and the ocean or sea. Wetlands vary in size, shape, and pattern; however, the most common types, marshes, bogs and swamps, often fluctuate between containing shallow freshwater and being dry depending on the time of year. The volume and quality of water in underground aquifers rely on the vegetation cover, which fosters recharge and aids in maintaining water quality.
Light interactions
Light zonation is the concept of how the amount of sunlight penetration into water influences the structure of a body of water. These zones define various levels of productivity within an aquatic ecosystem such as a lake. For instance, the depth of the water column to which sunlight is able to penetrate and where most plant life is able to grow is known as the photic or euphotic zone. The rest of the water column, which is deeper and does not receive sufficient amounts of sunlight for plant growth, is known as the aphotic zone. The amount of solar energy present underwater and the spectral quality of the light present at various depths have a significant impact on the behavior of many aquatic organisms. For example, zooplankton's vertical migration is influenced by solar energy levels.
Thermal stratification
Similar to light zonation, thermal stratification or thermal zonation is a way of grouping parts of the water body within an aquatic system based on the temperature of different lake layers. The less turbid the water, the more light is able to penetrate, and thus heat is conveyed deeper in the water. Heating declines exponentially with depth in the water column, so the water will be warmest near the surface but progressively cooler moving downwards. There are three main sections that define thermal stratification in a lake. The epilimnion is closest to the water surface and absorbs long- and shortwave radiation to warm the water surface. During cooler months, wind shear can contribute to cooling of the water surface. The thermocline is an area within the water column where water temperatures rapidly decrease. The bottom layer is the hypolimnion, which tends to have the coldest water because its depth restricts sunlight from reaching it. In temperate lakes, fall-season cooling of surface water results in turnover of the water column, where the thermocline is disrupted, and the lake temperature profile becomes more uniform. In cold climates, when water cools below 4 °C (the temperature of maximum density), many lakes can experience an inverse thermal stratification in winter. These lakes are often dimictic, with a brief spring overturn in addition to a longer fall overturn. The relative thermal resistance is the energy needed to mix these strata of different temperatures.
Lake heat budget
An annual heat budget, also shown as θa, is the total amount of heat needed to raise the water from its minimum winter temperature to its maximum summer temperature. It can be calculated by integrating, over depth z from the surface to the maximum depth, the area of the lake at each depth interval (Az) multiplied by the difference between the summer (θsz) and winter (θwz) temperatures: θa = ∫ Az(θsz − θwz) dz.
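A numeric sketch of that integral in Python, with a hypothetical hypsographic curve (area Az at each depth z) and hypothetical summer and winter temperature profiles:

    import numpy as np

    z    = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # depth, m
    A_z  = np.array([1e6, 8e5, 5e5, 2e5, 0.0])      # lake area at depth z, m^2
    th_s = np.array([24.0, 20.0, 12.0, 8.0, 6.0])   # summer temperature, deg C
    th_w = np.array([4.0, 4.0, 4.0, 4.0, 4.0])      # winter temperature, deg C

    f = A_z * (th_s - th_w)                 # integrand Az*(theta_sz - theta_wz)
    theta_a = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoidal rule
    print(theta_a)   # deg C * m^3; multiply by water's density and specific
                     # heat to convert to joules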
Chemical properties
The chemical composition of water in aquatic ecosystems is influenced by natural characteristics and processes including precipitation, underlying soil and bedrock in the drainage basin, erosion, evaporation, and sedimentation. All bodies of water have a certain composition of both organic and inorganic elements and compounds. Biological reactions also affect the chemical properties of water. In addition to natural processes, human activities strongly influence the chemical composition of aquatic systems and their water quality.
Allochthonous sources of carbon or nutrients come from outside the aquatic system (such as plant and soil material). Carbon sources from within the system, such as algae and the microbial breakdown of aquatic particulate organic carbon, are autochthonous. In aquatic food webs, the portion of biomass derived from allochthonous material is then named "allochthony". In streams and small lakes, allochthonous sources of carbon are dominant while in large lakes and the ocean, autochthonous sources dominate.
Oxygen and carbon dioxide
Dissolved oxygen and dissolved carbon dioxide are often discussed together due to their coupled role in respiration and photosynthesis. Dissolved oxygen concentrations can be altered by physical, chemical, and biological processes and reactions. Physical processes including wind mixing can increase dissolved oxygen concentrations, particularly in surface waters of aquatic ecosystems. Because dissolved oxygen solubility is linked to water temperature, changes in temperature affect dissolved oxygen concentrations, as warmer water has a lower capacity to "hold" oxygen than colder water. Biologically, both photosynthesis and aerobic respiration affect dissolved oxygen concentrations. Photosynthesis by autotrophic organisms, such as phytoplankton and aquatic algae, increases dissolved oxygen concentrations while simultaneously reducing carbon dioxide concentrations, since carbon dioxide is taken up during photosynthesis. All aerobic organisms in the aquatic environment take up dissolved oxygen during aerobic respiration, while carbon dioxide is released as a byproduct of this reaction. Because photosynthesis is light-limited, both photosynthesis and respiration occur during the daylight hours, while only respiration occurs during dark hours or in dark portions of an ecosystem. The balance between dissolved oxygen production and consumption is calculated as the aquatic metabolism rate.
Vertical changes in the concentrations of dissolved oxygen are affected by both wind mixing of surface waters and the balance between photosynthesis and respiration of organic matter. These vertical changes, known as profiles, are based on similar principles as thermal stratification and light penetration. As light availability decreases deeper in the water column, photosynthesis rates also decrease, and less dissolved oxygen is produced. This means that dissolved oxygen concentrations generally decrease with depth, because photosynthesis is not replenishing the dissolved oxygen that is being taken up through respiration. During periods of thermal stratification, water density gradients prevent oxygen-rich surface waters from mixing with deeper waters. Prolonged periods of stratification can result in the depletion of bottom-water dissolved oxygen; when dissolved oxygen concentrations are below 2 milligrams per liter, waters are considered hypoxic. When dissolved oxygen concentrations are approximately 0 milligrams per liter, conditions are anoxic. Both hypoxic and anoxic waters reduce available habitat for organisms that respire oxygen, and contribute to changes in other chemical reactions in the water.
Nitrogen and phosphorus
Nitrogen and phosphorus are ecologically significant nutrients in aquatic systems. Nitrogen is generally present as a gas in aquatic ecosystems; however, most water quality studies tend to focus on nitrate, nitrite, and ammonia levels. Most of these dissolved nitrogen compounds follow a seasonal pattern, with greater concentrations in the fall and winter months compared to the spring and summer. Phosphorus has a different role in aquatic ecosystems, as it is a limiting factor in the growth of phytoplankton because of generally low concentrations in the water. Dissolved phosphorus is also crucial to all living things, is often very limiting to primary productivity in freshwater, and has its own distinctive ecosystem cycling.
Biological properties
Role in ecology
Lakes "are relatively easy to sample, because they have clear-cut boundaries (compared to terrestrial ecosystems) and because field experiments are relatively easy to perform.", which make then especially useful for ecologists who try to understand ecological dynamics.
Lake trophic classification
One way to classify lakes (or other bodies of water) is with the trophic state index. An oligotrophic lake is characterized by relatively low levels of primary production and low levels of nutrients. A eutrophic lake has high levels of primary productivity due to very high nutrient levels. Eutrophication of a lake can lead to algal blooms. Dystrophic lakes have high levels of humic matter and typically have yellow-brown, tea-coloured waters. These categories do not have rigid specifications; the classification system can be seen as more of a spectrum encompassing the various levels of aquatic productivity.
Tropical limnology
Tropical limnology is a unique and important subfield of limnology that focuses on the distinct physical, chemical, biological, and cultural aspects of freshwater systems in tropical regions. The physical and chemical properties of tropical aquatic environments are different from those in temperate regions, with warmer and more stable temperatures, higher nutrient levels, and more complex ecological interactions. Moreover, the biodiversity of tropical freshwater systems is typically higher, human impacts are often more severe, and there are important cultural and socioeconomic factors that influence the use and management of these systems.
Professional organizations
People who study limnology are called limnologists. These scientists largely study the characteristics of inland fresh-water systems such as lakes, rivers, streams, ponds and wetlands. They may also study non-oceanic bodies of salt water, such as the Great Salt Lake. There are many professional organizations related to limnology and other aspects of the aquatic science, including the Association for the Sciences of Limnology and Oceanography, the Asociación Ibérica de Limnología, the International Society of Limnology, the Polish Limnological Society, the Society of Canadian Limnologists, and the Freshwater Biological Association.
| Physical sciences | Hydrology | null |
145343 | https://en.wikipedia.org/wiki/Wave%20function | Wave function | In quantum physics, a wave function (or wavefunction) is a mathematical description of the quantum state of an isolated quantum system. The most common symbols for a wave function are the Greek letters ψ and Ψ (lower-case and capital psi, respectively). Wave functions are complex-valued. For example, a wave function might assign a complex number to each point in a region of space. The Born rule provides the means to turn these complex probability amplitudes into actual probabilities. In one common form, it says that the squared modulus of a wave function that depends upon position is the probability density of measuring a particle as being at a given place. The integral of a wavefunction's squared modulus over all the system's degrees of freedom must be equal to 1, a condition called normalization. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured; its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables. One has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function and calculate the statistical distributions for measurable quantities.
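A numeric illustration of the normalization condition in Python (grid and packet width are arbitrary choices): build a one-dimensional complex wave packet, rescale it so the integral of its squared modulus is 1, and read the squared modulus as a position probability density per the Born rule.

    import numpy as np

    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / 2) * np.exp(1j * 5 * x)   # complex-valued amplitude

    norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)
    psi = psi / norm                               # normalization

    print(np.sum(np.abs(psi)**2) * dx)             # ~1.0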
Wave functions can be functions of variables other than position, such as momentum. The information represented by a wave function that is dependent upon position can be converted into a wave function dependent upon momentum and vice versa, by means of a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for each possible value of the discrete degrees of freedom (e.g., z-component of spin). These values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin 1/2).
According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product of two wave functions is a measure of the overlap between the corresponding physical states and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, as of 2023 still open to different interpretations, which fundamentally differs from that of classic mechanical waves.
Historical background
In 1900, Max Planck postulated the proportionality between the frequency f of a photon and its energy E, E = hf, and in 1916 the corresponding relation between a photon's momentum p and wavelength λ, λ = h/p, where h is the Planck constant. In 1923, De Broglie was the first to suggest that the relation λ = h/p, now called the De Broglie relation, holds for massive particles, the chief clue being Lorentz invariance, and this can be viewed as the starting point for the modern development of quantum mechanics. The equations represent wave–particle duality for both massless and massive particles.
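A quick numeric check of the de Broglie relation in Python (the electron speed is chosen only for illustration):

    h = 6.62607015e-34      # Planck constant, J*s
    m_e = 9.1093837015e-31  # electron mass, kg

    v = 1.0e6               # electron speed, m/s
    wavelength = h / (m_e * v)   # p = m*v, lambda = h/p
    print(wavelength)            # ~7.3e-10 m, comparable to atomic spacings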
In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent.
In 1926, Schrödinger published the famous wave equation now named after him, the Schrödinger equation. This equation was based on classical conservation of energy using quantum operators and the de Broglie relations and the solutions of the equation are the wave functions for the quantum system. However, no one was clear on how to interpret it. At first, Schrödinger and others thought that wave functions represent particles that are spread out with most of the particle being where the wave function is large. This was shown to be incompatible with the elastic scattering of a wave packet (representing a particle) off a target; it spreads out in all directions.
While a scattered particle may scatter in any direction, it does not break up and take off in all directions. In 1926, Born provided the perspective of probability amplitude. This relates calculations of quantum mechanics directly to probabilistic experimental observations. It is accepted as part of the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics. In 1927, Hartree and Fock made the first step in an attempt to solve the N-body wave function, and developed the self-consistency cycle: an iterative algorithm to approximate the solution. Now it is also known as the Hartree–Fock method. The Slater determinant and permanent (of a matrix) were part of the method, provided by John C. Slater.
Schrödinger did encounter an equation for the wave function that satisfied relativistic energy conservation before he published the non-relativistic one, but discarded it as it predicted negative probabilities and negative energies. In 1927, Klein, Gordon and Fock also found it, but incorporated the electromagnetic interaction and proved that it was Lorentz invariant. De Broglie also arrived at the same equation in 1928. This relativistic wave equation is now most commonly known as the Klein–Gordon equation.
In 1927, Pauli phenomenologically found a non-relativistic equation to describe spin-1/2 particles in electromagnetic fields, now called the Pauli equation. Pauli found the wave function was not described by a single complex function of space and time, but needed two complex numbers, which respectively correspond to the spin +1/2 and −1/2 states of the fermion. Soon after in 1928, Dirac found an equation from the first successful unification of special relativity and quantum mechanics applied to the electron, now called the Dirac equation. In this, the wave function is a spinor represented by four complex-valued components: two for the electron and two for the electron's antiparticle, the positron. In the non-relativistic limit, the Dirac wave function resembles the Pauli wave function for the electron. Later, other relativistic wave equations were found.
Wave functions and wave equations in modern theories
All these wave equations are of enduring importance. The Schrödinger equation and the Pauli equation are under many circumstances excellent approximations of the relativistic variants. They are considerably easier to solve in practical problems than the relativistic counterparts.
The Klein–Gordon equation and the Dirac equation, while being relativistic, do not represent full reconciliation of quantum mechanics and special relativity. The branch of quantum mechanics in which these equations are studied in the same way as the Schrödinger equation, often called relativistic quantum mechanics, while very successful, has its limitations (see e.g. Lamb shift) and conceptual problems (see e.g. Dirac sea).
Relativity makes it inevitable that the number of particles in a system is not constant. For full reconciliation, quantum field theory is needed.
In this theory, the wave equations and the wave functions have their place, but in a somewhat different guise. The main objects of interest are not the wave functions, but rather operators, so called field operators (or just fields where "operator" is understood) on the Hilbert space of states (to be described in the next section). It turns out that the original relativistic wave equations and their solutions are still needed to build the Hilbert space. Moreover, the free-field operators, i.e. when interactions are assumed not to exist, turn out to (formally) satisfy the same equation as do the fields (wave functions) in many cases.
Thus the Klein–Gordon equation (spin $0$) and the Dirac equation (spin $\tfrac{1}{2}$) in this guise remain in the theory. Higher spin analogues include the Proca equation (spin $1$), Rarita–Schwinger equation (spin $\tfrac{3}{2}$), and, more generally, the Bargmann–Wigner equations. For massless free fields two examples are the free field Maxwell equation (spin $1$) and the free field Einstein equation (spin $2$) for the field operators.
All of them are essentially a direct consequence of the requirement of Lorentz invariance. Their solutions must transform under Lorentz transformations in a prescribed way, i.e. under a particular representation of the Lorentz group, and this, together with a few other reasonable demands (e.g. the cluster decomposition property, with implications for causality), is enough to fix the equations.
This applies to free field equations; interactions are not included. If a Lagrangian density (including interactions) is available, then the Lagrangian formalism will yield an equation of motion at the classical level. This equation may be very complex and not amenable to solution. Any solution would refer to a fixed number of particles and would not account for the term "interaction" as referred to in these theories, which involves the creation and annihilation of particles and not external potentials as in ordinary "first quantized" quantum theory.
In string theory, the situation remains analogous. For instance, a wave function in momentum space has the role of Fourier expansion coefficient in a general state of a particle (string) with momentum that is not sharply defined.
Definition (one spinless particle in one dimension)
For now, consider the simple case of a non-relativistic single particle, without spin, in one spatial dimension. More general cases are discussed below.
According to the postulates of quantum mechanics, the state of a physical system, at fixed time $t$, is given by the wave function $\Psi$ belonging to a separable complex Hilbert space. As such, the inner product of two wave functions $\Psi_1$ and $\Psi_2$ can be defined as the complex number (at time $t$)

$$(\Psi_1, \Psi_2) = \int_{-\infty}^{\infty} \Psi_1^*(x, t)\, \Psi_2(x, t)\, dx.$$
More details are given below. However, the inner product of a wave function $\Psi$ with itself,

$$(\Psi, \Psi) = \|\Psi\|^2,$$

is always a positive real number. The number $\|\Psi\|$ (not $\|\Psi\|^2$) is called the norm of the wave function $\Psi$.
The separable Hilbert space being considered is infinite-dimensional, which means there is no finite set of square integrable functions which can be added together in various combinations to create every possible square integrable function.
Position-space wave functions
The state of such a particle is completely described by its wave function $\Psi(x, t)$, where $x$ is position and $t$ is time. This is a complex-valued function of two real variables $x$ and $t$.
For one spinless particle in one dimension, if the wave function is interpreted as a probability amplitude, the square modulus of the wave function, the positive real number

$$|\Psi(x, t)|^2 = \Psi^*(x, t)\, \Psi(x, t) = \rho(x, t),$$

is interpreted as the probability density for a measurement of the particle's position at a given time $t$. The asterisk indicates the complex conjugate. If the particle's position is measured, its location cannot be determined from the wave function, but is described by a probability distribution.
Normalization condition
The probability that its position $x$ will be in the interval $a \le x \le b$ is the integral of the density over this interval:

$$P_{a \le x \le b}(t) = \int_a^b |\Psi(x, t)|^2\, dx,$$

where $t$ is the time at which the particle was measured. This leads to the normalization condition:

$$\int_{-\infty}^{\infty} |\Psi(x, t)|^2\, dx = 1,$$

because if the particle is measured, there is 100% probability that it will be somewhere.
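For instance, a Gaussian wave packet of width $\sigma$ (an illustrative choice of state, not tied to any particular system) satisfies this condition by the standard Gaussian integral:

$$\Psi(x, 0) = \left(\frac{1}{2\pi\sigma^2}\right)^{1/4} e^{-x^2/(4\sigma^2)}, \qquad \int_{-\infty}^{\infty} |\Psi(x, 0)|^2\, dx = \int_{-\infty}^{\infty} \frac{e^{-x^2/(2\sigma^2)}}{\sqrt{2\pi}\,\sigma}\, dx = 1.$$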
For a given system, the set of all possible normalizable wave functions (at any given time) forms an abstract mathematical vector space, meaning that it is possible to add together different wave functions, and multiply wave functions by complex numbers. Technically, wave functions form a ray in a projective Hilbert space rather than an ordinary vector space.
Quantum states as vectors
At a particular instant of time, all values of the wave function $\Psi(x, t)$ are components of a vector. There are uncountably infinitely many of them and integration is used in place of summation. In Bra–ket notation, this vector is written

$$|\Psi(t)\rangle = \int \Psi(x, t)\, |x\rangle\, dx$$
and is referred to as a "quantum state vector", or simply "quantum state". There are several advantages to understanding wave functions as representing elements of an abstract vector space:
All the powerful tools of linear algebra can be used to manipulate and understand wave functions. For example:
Linear algebra explains how a vector space can be given a basis, and then any vector in the vector space can be expressed in this basis. This explains the relationship between a wave function in position space and a wave function in momentum space and suggests that there are other possibilities too.
Bra–ket notation can be used to manipulate wave functions.
The idea that quantum states are vectors in an abstract vector space is completely general in all aspects of quantum mechanics and quantum field theory, whereas the idea that quantum states are complex-valued "wave" functions of space is only true in certain situations.
The time parameter is often suppressed, and will be in the following. The coordinate $x$ is a continuous index. The $|x\rangle$ are called improper vectors which, unlike proper vectors that are normalizable to unity, can only be normalized to a Dirac delta function:

$$\langle x' | x \rangle = \delta(x' - x),$$

thus

$$\langle x' | \Psi \rangle = \int \Psi(x)\, \langle x' | x \rangle\, dx = \Psi(x'),$$

and

$$|\Psi\rangle = \int |x\rangle\, \langle x | \Psi \rangle\, dx,$$

which illuminates the identity operator

$$I = \int |x\rangle\, \langle x|\, dx,$$

which is analogous to the completeness relation of an orthonormal basis in an N-dimensional Hilbert space.
Finding the identity operator in a basis allows the abstract state to be expressed explicitly in a basis, and more (the inner product between two state vectors, and other operators for observables, can be expressed in the basis).
Momentum-space wave functions
The particle also has a wave function in momentum space:

$$\Phi(p, t),$$

where $p$ is the momentum in one dimension, which can be any value from $-\infty$ to $+\infty$, and $t$ is time.
Analogous to the position case, the inner product of two wave functions $\Phi_1(p, t)$ and $\Phi_2(p, t)$ can be defined as:

$$(\Phi_1, \Phi_2) = \int_{-\infty}^{\infty} \Phi_1^*(p, t)\, \Phi_2(p, t)\, dp.$$
One particular solution to the time-independent Schrödinger equation is

$$\psi_p(x) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i p x/\hbar},$$

a plane wave, which can be used in the description of a particle with momentum exactly $p$, since it is an eigenfunction of the momentum operator. These functions are not normalizable to unity (they are not square-integrable), so they are not really elements of physical Hilbert space. The set

$$\{ \psi_p \mid p \in (-\infty, \infty) \}$$

forms what is called the momentum basis. This "basis" is not a basis in the usual mathematical sense. For one thing, since the functions are not normalizable, they are instead normalized to a delta function,

$$(\psi_{p'}, \psi_p) = \delta(p - p').$$
For another thing, though they are linearly independent, there are too many of them (they form an uncountable set) for a basis for physical Hilbert space. They can still be used to express all functions in it using Fourier transforms as described next.
Relations between position and momentum representations
The $x$ and $p$ representations are

$$|\Psi\rangle = I\, |\Psi\rangle = \int |x\rangle\, \langle x | \Psi \rangle\, dx = \int \Psi(x)\, |x\rangle\, dx,$$

$$|\Psi\rangle = I\, |\Psi\rangle = \int |p\rangle\, \langle p | \Psi \rangle\, dp = \int \Phi(p)\, |p\rangle\, dp.$$

Now take the projection of the state $\Psi$ onto eigenfunctions of momentum using the last expression in the two equations,

$$\langle p | \Psi \rangle = \int \Psi(x)\, \langle p | x \rangle\, dx = \int \Phi(p')\, \langle p | p' \rangle\, dp' = \int \Phi(p')\, \delta(p - p')\, dp' = \Phi(p).$$

Then utilizing the known expression for suitably normalized eigenstates of momentum in the position representation solutions of the free Schrödinger equation

$$\langle x | p \rangle = \psi_p(x) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i p x/\hbar}, \qquad \langle p | x \rangle = \frac{1}{\sqrt{2\pi\hbar}}\, e^{-i p x/\hbar},$$

one obtains

$$\Phi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int \Psi(x)\, e^{-i p x/\hbar}\, dx.$$

Likewise, using eigenfunctions of position,

$$\Psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int \Phi(p)\, e^{i p x/\hbar}\, dp.$$
The position-space and momentum-space wave functions are thus found to be Fourier transforms of each other. They are two representations of the same state, containing the same information, and either one is sufficient to calculate any property of the particle.
In practice, the position-space wave function is used much more often than the momentum-space wave function. The potential entering the relevant equation (Schrödinger, Dirac, etc.) determines in which basis the description is easiest. For the harmonic oscillator, $x$ and $p$ enter symmetrically, so there it does not matter which description one uses. The same equation (modulo constants) results. From this, with a little bit of afterthought, it follows that solutions to the wave equation of the harmonic oscillator are eigenfunctions of the Fourier transform in $L^2$.
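To make this symmetry explicit, the time-independent oscillator equation takes the same form in the two representations (written here with the standard substitutions $p \to -i\hbar\, d/dx$ in position space and $x \to i\hbar\, d/dp$ in momentum space):

$$\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2 x^2}{2}\right)\psi(x) = E\,\psi(x), \qquad \left(\frac{p^2}{2m} - \frac{m\omega^2\hbar^2}{2}\frac{d^2}{dp^2}\right)\phi(p) = E\,\phi(p).$$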
Definitions (other cases)
Following are the general forms of the wave function for systems in higher dimensions and more particles, as well as including other degrees of freedom than position coordinates or momentum components.
Finite dimensional Hilbert space
While Hilbert spaces originally refer to infinite dimensional complete inner product spaces, they, by definition, include finite dimensional complete inner product spaces as well.
In physics, they are often referred to as finite dimensional Hilbert spaces. For every finite dimensional Hilbert space there exist orthonormal basis kets that span the entire Hilbert space.
If the $n$-dimensional set $\{ |\phi_i\rangle \}$ is orthonormal, then the projection operator for the space spanned by these states is given by:

$$P = \sum_{i=1}^{n} |\phi_i\rangle\, \langle\phi_i| = I,$$

where the projection is equivalent to the identity operator since $\{ |\phi_i\rangle \}$ spans the entire Hilbert space, thus leaving any vector from the Hilbert space unchanged. This is also known as the completeness relation of a finite dimensional Hilbert space.
The wavefunction $|\psi\rangle$ is instead given by:

$$|\psi\rangle = I\, |\psi\rangle = \sum_{i=1}^{n} |\phi_i\rangle\, \langle\phi_i | \psi \rangle = \sum_{i=1}^{n} c_i\, |\phi_i\rangle,$$

where $c_i = \langle\phi_i | \psi\rangle$ is a set of complex numbers which can be used to construct a wavefunction using the above formula.
Probability interpretation of inner product
If the set $\{ |\phi_i\rangle \}$ are eigenkets of a non-degenerate observable with eigenvalues $\lambda_i$, by the postulates of quantum mechanics, the probability of measuring the observable to be $\lambda_i$ is given according to the Born rule as:

$$P(\lambda_i) = |\langle \phi_i | \psi \rangle|^2 = |c_i|^2.$$
For a degenerate eigenvalue $\lambda$ of some observable, if the eigenvalue has a subset of eigenvectors labelled as $\{ |\lambda_j\rangle \}$, by the postulates of quantum mechanics, the probability of measuring the observable to be $\lambda$ is given by:

$$P(\lambda) = \sum_j |\langle \lambda_j | \psi \rangle|^2 = \left\| \hat{P}_\lambda |\psi\rangle \right\|^2,$$

where $\hat{P}_\lambda = \sum_j |\lambda_j\rangle \langle \lambda_j|$ is a projection operator of states onto the subspace spanned by $\{ |\lambda_j\rangle \}$. The equality follows due to the orthogonal nature of $\{ |\phi_i\rangle \}$.

Hence, the coefficients $c_i = \langle \phi_i | \psi \rangle$, which specify the state of the quantum mechanical system, have magnitudes whose square gives the probability of measuring the respective state.
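As a worked two-state illustration (with coefficients chosen purely for the example), a state $|\psi\rangle = \tfrac{1}{\sqrt{3}}\, |\phi_1\rangle + \sqrt{\tfrac{2}{3}}\, |\phi_2\rangle$ yields

$$P(\lambda_1) = \left|\tfrac{1}{\sqrt{3}}\right|^2 = \tfrac{1}{3}, \qquad P(\lambda_2) = \left|\sqrt{\tfrac{2}{3}}\right|^2 = \tfrac{2}{3}, \qquad P(\lambda_1) + P(\lambda_2) = 1.$$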
Physical significance of relative phase
While the relative phase has observable effects in experiments, the global phase of the system is experimentally indistinguishable. For example, for a particle in a superposition of two states, the global phase of the particle cannot be distinguished by finding the expectation value of an observable or the probabilities of observing different states, but relative phases can affect the expectation values of observables.
While the overall phase of the system is considered to be arbitrary, the relative phase for each state of a prepared state in superposition can be determined based on the physical meaning of the prepared state and its symmetry. For example, the construction of spin states along the x direction as a superposition of spin states along the z direction can be done by applying an appropriate rotation transformation on the spin-along-z states, which provides the appropriate phase of the states relative to each other.
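Concretely, for a spin-1/2 particle the rotated states are (a standard result, quoted here for illustration)

$$|{+x}\rangle = \frac{1}{\sqrt{2}}\left( |{\uparrow}\rangle + |{\downarrow}\rangle \right), \qquad |{-x}\rangle = \frac{1}{\sqrt{2}}\left( |{\uparrow}\rangle - |{\downarrow}\rangle \right);$$

the two states differ only in the relative sign (a relative phase of $\pi$), which is experimentally distinguishable, whereas multiplying either state by an overall phase $e^{i\theta}$ changes no measurement probabilities.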
Application to include spin
An example of a finite dimensional Hilbert space can be constructed using the spin eigenkets of a spin-$s$ particle, which form a $(2s + 1)$-dimensional Hilbert space. However, the general wavefunction of a particle that fully describes its state is always from an infinite dimensional Hilbert space, since it involves a tensor product with the Hilbert space relating to the position or momentum of the particle. Nonetheless, the techniques developed for finite dimensional Hilbert space are useful since they can either be treated independently or treated in consideration of the linearity of the tensor product.
Since the spin operator for a given spin-$s$ particle can be represented as a finite $(2s + 1) \times (2s + 1)$ matrix which acts on $2s + 1$ independent spin vector components, it is usually preferable to denote spin components using matrix/column/row notation as applicable.
For example, for a spin-1/2 particle, each spin eigenket $|s_z\rangle$ is usually identified as a column vector:

$$\left|{+\tfrac{1}{2}}\right\rangle \leftrightarrow \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \left|{-\tfrac{1}{2}}\right\rangle \leftrightarrow \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

but it is a common abuse of notation, because the kets are not synonymous or equal to the column vectors. Column vectors simply provide a convenient way to express the spin components.
Corresponding to the notation, the z-component spin operator can be written as:

$$S_z = \frac{\hbar}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

since the eigenvectors of the z-component spin operator are the above column vectors, with eigenvalues being the corresponding spin quantum numbers.
Corresponding to the notation, a vector from such a finite dimensional Hilbert space is hence represented as:

$$|\chi\rangle = c_{+}\left|{+\tfrac{1}{2}}\right\rangle + c_{-}\left|{-\tfrac{1}{2}}\right\rangle \leftrightarrow \begin{pmatrix} c_{+} \\ c_{-} \end{pmatrix},$$

where $c_{+}$ and $c_{-}$ are corresponding complex numbers.
In the following discussion involving spin, the complete wavefunction is considered as a tensor product of spin states from finite dimensional Hilbert spaces and the wavefunction which was previously developed. The basis vectors for this Hilbert space are hence considered: $|\mathbf{r}\rangle \otimes |s_z\rangle$.
One-particle states in 3d position space
The position-space wave function of a single particle without spin in three spatial dimensions is similar to the case of one spatial dimension above: $\Psi(\mathbf{r}, t)$, where $\mathbf{r}$ is the position vector in three-dimensional space, and $t$ is time. As always $\Psi(\mathbf{r}, t)$ is a complex-valued function of real variables. As a single vector in Dirac notation

$$|\Psi(t)\rangle = \int d^3\mathbf{r}\; \Psi(\mathbf{r}, t)\; |\mathbf{r}\rangle.$$
All the previous remarks on inner products, momentum space wave functions, Fourier transforms, and so on extend to higher dimensions.
For a particle with spin, ignoring the position degrees of freedom, the wave function is a function of spin only (time is a parameter): $\xi(s_z, t)$, where $s_z$ is the spin projection quantum number along the $z$ axis. (The $z$ axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The $s_z$ parameter, unlike $\mathbf{r}$ and $t$, is a discrete variable. For example, for a spin-1/2 particle, $s_z$ can only be $+1/2$ or $-1/2$, and not any other value. (In general, for spin $s$, $s_z$ can be $s, s-1, \ldots, -s+1, -s$.) Inserting each quantum number gives a complex valued function of space and time, and there are $2s + 1$ of them. These can be arranged into a column vector

$$\xi = \begin{pmatrix} \xi(s, t) \\ \xi(s-1, t) \\ \vdots \\ \xi(-s, t) \end{pmatrix}.$$
In bra–ket notation, these easily arrange into the components of a vector:

$$|\xi(t)\rangle = \sum_{s_z = -s}^{s} \xi(s_z, t)\, |s_z\rangle.$$

The entire vector $\xi$ is a solution of the Schrödinger equation (with a suitable Hamiltonian), which unfolds to a coupled system of $2s + 1$ ordinary differential equations with solutions $\xi(s, t), \xi(s-1, t), \ldots, \xi(-s, t)$. The term "spin function" instead of "wave function" is used by some authors. This contrasts the solutions to position space wave functions, the position coordinates being continuous degrees of freedom, because then the Schrödinger equation does take the form of a wave equation.
More generally, for a particle in 3d with any spin, the wave function can be written in "position–spin space" as:

$$\Psi(\mathbf{r}, s_z, t),$$

and these can also be arranged into a column vector

$$\Psi(\mathbf{r}, t) = \begin{pmatrix} \Psi(\mathbf{r}, s, t) \\ \Psi(\mathbf{r}, s-1, t) \\ \vdots \\ \Psi(\mathbf{r}, -s, t) \end{pmatrix},$$

in which the spin dependence is placed in indexing the entries, and the wave function is a complex vector-valued function of space and time only.
All values of the wave function, not only for discrete but continuous variables also, collect into a single vector

$$|\Psi(t)\rangle = \sum_{s_z} \int d^3\mathbf{r}\; \Psi(\mathbf{r}, s_z, t)\; |\mathbf{r}, s_z\rangle.$$

For a single particle, the tensor product $\otimes$ of its position state vector $|\psi\rangle$ and spin state vector $|\xi\rangle$ gives the composite position-spin state vector

$$|\psi(t)\rangle \otimes |\xi(t)\rangle = \sum_{s_z} \int d^3\mathbf{r}\; \psi(\mathbf{r}, t)\, \xi(s_z, t)\; |\mathbf{r}\rangle \otimes |s_z\rangle,$$

with the identifications

$$|\Psi(t)\rangle = |\psi(t)\rangle \otimes |\xi(t)\rangle, \qquad \Psi(\mathbf{r}, s_z, t) = \psi(\mathbf{r}, t)\, \xi(s_z, t).$$
The tensor product factorization of energy eigenstates is always possible if the orbital and spin angular momenta of the particle are separable in the Hamiltonian operator underlying the system's dynamics (in other words, the Hamiltonian can be split into the sum of orbital and spin terms). The time dependence can be placed in either factor, and time evolution of each can be studied separately. Under such Hamiltonians, any tensor product state evolves into another tensor product state, which essentially means any unentangled state remains unentangled under time evolution. This is said to happen when there is no physical interaction between the states of the tensor products. In the case of non separable Hamiltonians, energy eigenstates are said to be some linear combination of such states, which need not be factorizable; examples include a particle in a magnetic field, and spin–orbit coupling.
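When the Hamiltonian splits in this sense, say $H = H_{\text{orbit}} \otimes I + I \otimes H_{\text{spin}}$ (a schematic splitting with illustrative labels), the two terms commute with each other, so the evolution operator factorizes and an unentangled product state stays a product state:

$$e^{-i H t/\hbar} \left( |\psi\rangle \otimes |\xi\rangle \right) = \left( e^{-i H_{\text{orbit}} t/\hbar}\, |\psi\rangle \right) \otimes \left( e^{-i H_{\text{spin}} t/\hbar}\, |\xi\rangle \right).$$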
The preceding discussion is not limited to spin as a discrete variable, the total angular momentum J may also be used. Other discrete degrees of freedom, like isospin, can be expressed similarly to the case of spin above.
Many-particle states in 3d position space
If there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that one wave function describes many particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for $N$ particles is written:

$$\Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t),$$

where $\mathbf{r}_i$ is the position of the $i$-th particle in three-dimensional space, and $t$ is time. Altogether, this is a complex-valued function of $3N + 1$ real variables.
In quantum mechanics there is a fundamental distinction between identical particles and distinguishable particles. For example, any two electrons are identical and fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it. This translates to a requirement on the wave function for a system of identical particles:
$$\Psi\left(\ldots, \mathbf{r}_a, \ldots, \mathbf{r}_b, \ldots\right) = \pm\, \Psi\left(\ldots, \mathbf{r}_b, \ldots, \mathbf{r}_a, \ldots\right),$$

where the $+$ sign occurs if the particles are all bosons and the $-$ sign if they are all fermions. In other words, the wave function is either totally symmetric in the positions of bosons, or totally antisymmetric in the positions of fermions. The physical interchange of particles corresponds to mathematically switching arguments in the wave function. The antisymmetry feature of fermionic wave functions leads to the Pauli principle. Generally, bosonic and fermionic symmetry requirements are the manifestation of particle statistics and are present in other quantum state formalisms.
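For two fermions in distinct single-particle states $\psi_a$ and $\psi_b$ (an illustrative construction), the antisymmetric combination is

$$\Psi(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{\sqrt{2}}\left[ \psi_a(\mathbf{r}_1)\, \psi_b(\mathbf{r}_2) - \psi_b(\mathbf{r}_1)\, \psi_a(\mathbf{r}_2) \right],$$

which vanishes identically when $a = b$: two fermions cannot occupy the same state, which is the Pauli principle.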
For distinguishable particles (no two being identical, i.e. no two having the same set of quantum numbers), there is no requirement for the wave function to be either symmetric or antisymmetric.
For a collection of particles, some identical with coordinates $\mathbf{r}_1, \mathbf{r}_2, \ldots$ and others distinguishable $\mathbf{x}_1, \mathbf{x}_2, \ldots$ (not identical with each other, and not identical to the aforementioned identical particles), the wave function is symmetric or antisymmetric in the identical particle coordinates only:

$$\Psi\left(\ldots, \mathbf{r}_a, \ldots, \mathbf{r}_b, \ldots, \mathbf{x}_1, \mathbf{x}_2, \ldots\right) = \pm\, \Psi\left(\ldots, \mathbf{r}_b, \ldots, \mathbf{r}_a, \ldots, \mathbf{x}_1, \mathbf{x}_2, \ldots\right).$$

Again, there is no symmetry requirement for the distinguishable particle coordinates $\mathbf{x}_i$.
The wave function for $N$ particles each with spin is the complex-valued function

$$\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t).$$

Accumulating all these components into a single vector,

$$|\Psi(t)\rangle = \sum_{s_{z1}, \ldots, s_{zN}} \int d^3\mathbf{r}_1 \cdots \int d^3\mathbf{r}_N\; \Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t)\; |\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}\rangle.$$
For identical particles, symmetry requirements apply to both position and spin arguments of the wave function so it has the overall correct symmetry.
The formulae for the inner products are integrals over all coordinates or momenta and sums over all spin quantum numbers. For the general case of $N$ particles with spin in 3d,

$$(\Psi_1, \Psi_2) = \sum_{s_{z1}, \ldots, s_{zN}} \int d^3\mathbf{r}_1 \cdots \int d^3\mathbf{r}_N\; \Psi_1^*\, \Psi_2,$$

this is altogether $N$ three-dimensional volume integrals and $N$ sums over the spins. The differential volume elements $d^3\mathbf{r}_i$ are also written "$dV_i$" or "$dx_i\, dy_i\, dz_i$".
The multidimensional Fourier transforms of the position or position–spin space wave functions yield momentum or momentum–spin space wave functions.
Probability interpretation
For the general case of $N$ particles with spin in 3d, if $\Psi$ is interpreted as a probability amplitude, the probability density is

$$\rho(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t) = |\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t)|^2,$$

and the probability that particle 1 is in region $R_1$ with spin $s_{z1} = m_1$ and particle 2 is in region $R_2$ with spin $s_{z2} = m_2$ etc. at time $t$ is the integral of the probability density over these regions and evaluated at these spin numbers:

$$P(t) = \int_{R_1} d^3\mathbf{r}_1 \cdots \int_{R_N} d^3\mathbf{r}_N\; |\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, m_1, \ldots, m_N, t)|^2.$$
Physical significance of phase
In non-relativistic quantum mechanics, it can be shown using Schrödinger's time dependent wave equation that the equation:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0$$

is satisfied, where $\rho = |\Psi|^2$ is the probability density and

$$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\, \nabla \Psi - \Psi\, \nabla \Psi^*\right)$$

is known as the probability flux, in accordance with the continuity equation form of the above equation.
Using the following expression for the wavefunction:

$$\Psi(\mathbf{r}, t) = \sqrt{\rho(\mathbf{r}, t)}\; e^{\,i S(\mathbf{r}, t)/\hbar},$$

where $\rho(\mathbf{r}, t)$ is the probability density and $S(\mathbf{r}, t)$ is the phase of the wavefunction, it can be shown that:

$$\mathbf{j} = \frac{\rho\, \nabla S}{m}.$$

Hence the spatial variation of the phase characterizes the probability flux.
In classical analogy, for $\mathbf{j} = \rho\,\mathbf{v}$, the quantity $\nabla S / m$ is analogous to velocity. Note that this does not imply a literal interpretation of $\nabla S / m$ as velocity, since velocity and position cannot be simultaneously determined as per the uncertainty principle. Substituting this form of the wavefunction into Schrödinger's time dependent wave equation, and taking the classical limit $\hbar \to 0$:

$$\frac{|\nabla S|^2}{2m} + V + \frac{\partial S}{\partial t} = 0,$$

which is analogous to the Hamilton–Jacobi equation from classical mechanics. This interpretation fits with Hamilton–Jacobi theory, in which $\mathbf{p} = \nabla S$, where $S$ is Hamilton's principal function.
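As a simple check, a plane wave of amplitude $A$ and wave number $k$ (values chosen only for illustration) gives a flux equal to density times the classical velocity $\hbar k/m$:

$$\Psi(x, t) = A\, e^{i(kx - \omega t)} \;\Rightarrow\; \rho = |A|^2, \qquad j = \frac{\hbar k}{m}\, |A|^2,$$

consistent with $S = \hbar(kx - \omega t)$ and $j = \rho\, \partial_x S / m$.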
Time dependence
For systems in time-independent potentials, the wave function can always be written as a function of the degrees of freedom multiplied by a time-dependent phase factor, the form of which is given by the Schrödinger equation. For $N$ particles, considering their positions only and suppressing other degrees of freedom,

$$\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, t) = e^{-i E t/\hbar}\, \psi(\mathbf{r}_1, \ldots, \mathbf{r}_N),$$

where $E$ is the energy eigenvalue of the system corresponding to the eigenstate $\psi$. Wave functions of this form are called stationary states.
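The name reflects the fact that all probabilities computed from such a state are time independent, whereas a superposition of two stationary states (equal weights and real-valued $\psi_1$, $\psi_2$ assumed purely for illustration) has an oscillating density:

$$\Psi = \frac{1}{\sqrt{2}}\left( \psi_1\, e^{-i E_1 t/\hbar} + \psi_2\, e^{-i E_2 t/\hbar} \right) \;\Rightarrow\; |\Psi|^2 = \frac{1}{2}\left( \psi_1^2 + \psi_2^2 \right) + \psi_1 \psi_2 \cos\frac{(E_2 - E_1)\, t}{\hbar}.$$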
The time dependence of the quantum state and the operators can be placed according to unitary transformations on the operators and states. For any quantum state $|\Psi(t)\rangle$ and operator $O$, in the Schrödinger picture $|\Psi(t)\rangle$ changes with time according to the Schrödinger equation while $O$ is constant. In the Heisenberg picture it is the other way round: $|\Psi\rangle$ is constant while $O(t)$ evolves with time according to the Heisenberg equation of motion. The Dirac (or interaction) picture is intermediate: time dependence is placed in both operators and states, which evolve according to equations of motion. It is useful primarily in computing S-matrix elements.
Non-relativistic examples
The following are solutions to the Schrödinger equation for one non-relativistic spinless particle.
Finite potential barrier
One of the most prominent features of wave mechanics is the possibility for a particle to reach a location with a prohibitive (in classical mechanics) force potential. A common model is the "potential barrier"; the one-dimensional case has the potential

$$V(x) = \begin{cases} V_0, & |x| < a, \\ 0, & |x| \ge a, \end{cases}$$

and the steady-state solutions to the wave equation have the form (for some constants $k, \kappa$)

$$\Psi(x) = \begin{cases} A_r\, e^{i k x} + A_l\, e^{-i k x}, & x < -a, \\ B_r\, e^{\kappa x} + B_l\, e^{-\kappa x}, & |x| \le a, \\ C_r\, e^{i k x} + C_l\, e^{-i k x}, & x > a. \end{cases}$$
Note that these wave functions are not normalized; see scattering theory for discussion.
The standard interpretation of this is as a stream of particles being fired at the step from the left (the direction of negative $x$): setting $A_r = 1$ corresponds to firing particles singly; the terms containing $A_r$ and $C_r$ signify motion to the right, while $A_l$ and $C_l$ – to the left. Under this beam interpretation, put $C_l = 0$ since no particles are coming from the right. By applying the continuity of wave functions and their derivatives at the boundaries, it is hence possible to determine the constants above.
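In this notation (with the same $k$ on both sides of the barrier, since the potential is equal there), the reflection and transmission coefficients follow from the amplitude ratios:

$$R = \left| \frac{A_l}{A_r} \right|^2, \qquad T = \left| \frac{C_r}{A_r} \right|^2, \qquad R + T = 1.$$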
In a semiconductor crystallite whose radius is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box.
Quantum harmonic oscillator
The wave functions for the quantum harmonic oscillator can be expressed in terms of Hermite polynomials $H_n$; they are

$$\Psi_n(x, t) = \sqrt{\frac{1}{2^n\, n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/(2\hbar)}\; H_n\!\left( \sqrt{\frac{m\omega}{\hbar}}\, x \right) e^{-i E_n t/\hbar},$$

where $n = 0, 1, 2, \ldots$ and $E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega$.
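For instance, the ground state ($n = 0$, using $H_0 = 1$) is a simple Gaussian carrying the zero-point energy:

$$\Psi_0(x, t) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/(2\hbar)}\, e^{-i\omega t/2}, \qquad E_0 = \frac{\hbar\omega}{2}.$$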
Hydrogen atom
The wave functions of an electron in a Hydrogen atom are expressed in terms of spherical harmonics and generalized Laguerre polynomials (these are defined differently by different authors—see main article on them and the hydrogen atom).
It is convenient to use spherical coordinates, and the wave function can be separated into functions of each coordinate,

$$\psi_{n\ell m}(r, \theta, \phi) = R_{n\ell}(r)\; Y_\ell^m(\theta, \phi),$$

where $R_{n\ell}$ are radial functions and $Y_\ell^m(\theta, \phi)$ are spherical harmonics of degree $\ell$ and order $m$. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximative methods. The family of solutions is:

$$\psi_{n\ell m}(r, \theta, \phi) = \sqrt{ \left(\frac{2}{n a_0}\right)^{3} \frac{(n - \ell - 1)!}{2n\, (n + \ell)!} }\; e^{-r/(n a_0)} \left(\frac{2r}{n a_0}\right)^{\ell} L_{n-\ell-1}^{2\ell+1}\!\left(\frac{2r}{n a_0}\right) Y_\ell^m(\theta, \phi),$$

where $a_0 = 4\pi\varepsilon_0\hbar^2/(m_e e^2)$ is the Bohr radius, $L_{n-\ell-1}^{2\ell+1}$ are the generalized Laguerre polynomials of degree $n - \ell - 1$, $n = 1, 2, \ldots$ is the principal quantum number, $\ell = 0, 1, \ldots, n - 1$ the azimuthal quantum number, and $m = -\ell, -\ell + 1, \ldots, \ell$ the magnetic quantum number. Hydrogen-like atoms have very similar solutions.
This solution does not take into account the spin of the electron.
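The simplest member of this family (quoted for illustration) is the 1s ground state, with energy levels scaling as the inverse square of $n$:

$$\psi_{100}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0}, \qquad E_n = -\frac{13.6\ \text{eV}}{n^2}.$$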
In the figure of the hydrogen orbitals, the 19 sub-images are images of wave functions in position space (their norm squared). The wave functions represent the abstract state characterized by the triple of quantum numbers $(n, \ell, m)$, in the lower right of each image. These are the principal quantum number, the orbital angular momentum quantum number, and the magnetic quantum number. Together with one spin-projection quantum number of the electron, this is a complete set of observables.
The figure can serve to illustrate some further properties of the function spaces of wave functions.
In this case, the wave functions are square integrable. One can initially take the function space as the space of square integrable functions, usually denoted $L^2$.

The displayed functions are solutions to the Schrödinger equation. Obviously, not every function in $L^2$ satisfies the Schrödinger equation for the hydrogen atom. The function space is thus a subspace of $L^2$.

The displayed functions form part of a basis for the function space. To each triple $(n, \ell, m)$, there corresponds a basis wave function. If spin is taken into account, there are two basis functions for each triple. The function space thus has a countable basis.
The basis functions are mutually orthonormal.
Wave functions and function spaces
The concept of function spaces enters naturally in the discussion about wave functions. A function space is a set of functions, usually with some defining requirements on the functions (in the present case that they are square integrable), sometimes with an algebraic structure on the set (in the present case a vector space structure with an inner product), together with a topology on the set. The latter will sparsely be used here, it is only needed to obtain a precise definition of what it means for a subset of a function space to be closed. It will be concluded below that the function space of wave functions is a Hilbert space. This observation is the foundation of the predominant mathematical formulation of quantum mechanics.
Vector space structure
A wave function is an element of a function space partly characterized by the following concrete and abstract descriptions.
The Schrödinger equation is linear. This means that the solutions to it, wave functions, can be added and multiplied by scalars to form a new solution. The set of solutions to the Schrödinger equation is a vector space.
The superposition principle of quantum mechanics. If and are two states in the abstract space of states of a quantum mechanical system, and and are any two complex numbers, then is a valid state as well. (Whether the null vector counts as a valid state ("no system present") is a matter of definition. The null vector does not at any rate describe the vacuum state in quantum field theory.) The set of allowable states is a vector space.
This similarity is of course not accidental. There are also distinctions between the spaces to keep in mind.
Representations
Basic states are characterized by a set of quantum numbers. This is a set of eigenvalues of a maximal set of commuting observables. Physical observables are represented by linear operators, also called observables, on the vector space. Maximality means that no further algebraically independent observables that commute with the ones already present can be added to the set. A choice of such a set may be called a choice of representation.
It is a postulate of quantum mechanics that a physically observable quantity of a system, such as position, momentum, or spin, is represented by a linear Hermitian operator on the state space. The possible outcomes of measurement of the quantity are the eigenvalues of the operator. At a deeper level, most observables, perhaps all, arise as generators of symmetries.
The physical interpretation is that such a set represents what can – in theory – simultaneously be measured with arbitrary precision. The Heisenberg uncertainty relation prohibits simultaneous exact measurements of two non-commuting observables.
The set is non-unique. It may, for a one-particle system, for example, be position and spin $z$-projection, $(x, S_z)$, or it may be momentum and spin $y$-projection, $(p, S_y)$. In this case, the operator corresponding to position (a multiplication operator in the position representation) and the operator corresponding to momentum (a differential operator in the position representation) do not commute.
Once a representation is chosen, there is still arbitrariness. It remains to choose a coordinate system. This may, for example, correspond to a choice of coordinate axes, or a choice of curvilinear coordinates as exemplified by the spherical coordinates used for the hydrogen atomic wave functions. This final choice also fixes a basis in abstract Hilbert space. The basic states are labeled by the quantum numbers corresponding to the maximal set of commuting observables and an appropriate coordinate system.
The abstract states are "abstract" only in that an arbitrary choice necessary for a particular explicit description of it is not given. This is the same as saying that no choice of maximal set of commuting observables has been given. This is analogous to a vector space without a specified basis. Wave functions corresponding to a state are accordingly not unique. This non-uniqueness reflects the non-uniqueness in the choice of a maximal set of commuting observables. For one spin-$\tfrac{1}{2}$ particle in one dimension, to a particular state there correspond two wave functions, $\Psi(x, S_z)$ and $\Psi(p, S_y)$, both describing the same state.
For each choice of maximal commuting sets of observables for the abstract state space, there is a corresponding representation that is associated to a function space of wave functions.
Between all these different function spaces and the abstract state space, there are one-to-one correspondences (here disregarding normalization and unobservable phase factors), the common denominator here being a particular abstract state. The relationship between the momentum and position space wave functions, for instance, describing the same state is the Fourier transform.
Each choice of representation should be thought of as specifying a unique function space in which wave functions corresponding to that choice of representation lives. This distinction is best kept, even if one could argue that two such function spaces are mathematically equal, e.g. being the set of square integrable functions. One can then think of the function spaces as two distinct copies of that set.
Inner product
There is an additional algebraic structure on the vector spaces of wave functions and the abstract state space.
Physically, different wave functions are interpreted to overlap to some degree. A system in a state $\Psi$ that does not overlap with a state $\Phi$ cannot be found to be in the state $\Phi$ upon measurement. But if states $\Psi_1, \Psi_2, \ldots$ overlap $\Psi$ to some degree, there is a chance that measurement of a system described by $\Psi$ will find it in the states $\Psi_1, \Psi_2, \ldots$. Also, selection rules are observed to apply. These are usually formulated in the preservation of some quantum numbers. This means that certain processes allowable from some perspectives (e.g. energy and momentum conservation) do not occur because the initial and final total wave functions do not overlap.
Mathematically, it turns out that solutions to the Schrödinger equation for particular potentials are orthogonal in some manner; this is usually described by an integral

$$\int \psi_m^*\, \psi_n\, w\, dV = \delta_{nm},$$

where $m, n$ are (sets of) indices (quantum numbers) labeling different solutions, the strictly positive function $w$ is called a weight function, and $\delta_{nm}$ is the Kronecker delta. The integration is taken over all of the relevant space.
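A familiar instance (quoted as an example) is the Hermite polynomials of the harmonic oscillator problem, which are orthogonal with respect to the weight function $w(x) = e^{-x^2}$:

$$\int_{-\infty}^{\infty} H_m(x)\, H_n(x)\, e^{-x^2}\, dx = \sqrt{\pi}\; 2^n\, n!\; \delta_{nm}.$$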
This motivates the introduction of an inner product on the vector space of abstract quantum states, compatible with the mathematical observations above when passing to a representation. It is denoted $(\Psi, \Phi)$, or in the bra–ket notation $\langle \Psi | \Phi \rangle$. It yields a complex number. With the inner product, the function space is an inner product space. The explicit appearance of the inner product (usually an integral or a sum of integrals) depends on the choice of representation, but the complex number does not. Much of the physical interpretation of quantum mechanics stems from the Born rule. It states that the probability $p$ of finding upon measurement the state $\Phi$ given the system is in the state $\Psi$ is

$$p = |(\Phi, \Psi)|^2,$$

where $\Phi$ and $\Psi$ are assumed normalized. Consider a scattering experiment. In quantum field theory, if $\Phi_{\text{out}}$ describes a state in the "distant future" (an "out state") after interactions between scattering particles have ceased, and $\Psi_{\text{in}}$ an "in state" in the "distant past", then the quantities $(\Phi_{\text{out}}, \Psi_{\text{in}})$, with $\Psi_{\text{in}}$ and $\Phi_{\text{out}}$ varying over complete sets of in states and out states respectively, are called the S-matrix or scattering matrix. Knowledge of it is, effectively, having solved the theory at hand, at least as far as predictions go. Measurable quantities such as decay rates and scattering cross sections are calculable from the S-matrix.
Hilbert space
The above observations encapsulate the essence of the function spaces of which wave functions are elements. However, the description is not yet complete. There is a further technical requirement on the function space, that of completeness, that allows one to take limits of sequences in the function space, and be ensured that, if the limit exists, it is an element of the function space. A complete inner product space is called a Hilbert space. The property of completeness is crucial in advanced treatments and applications of quantum mechanics. For instance, the existence of projection operators or orthogonal projections relies on the completeness of the space. These projection operators, in turn, are essential for the statement and proof of many useful theorems, e.g. the spectral theorem. It is not very important in introductory quantum mechanics, and technical details and links may be found in footnotes like the one that follows.
The space $L^2$ is a Hilbert space, with inner product presented later. The function space of the example of the figure is a subspace of $L^2$. A subspace of a Hilbert space is a Hilbert space if it is closed.
In summary, the set of all possible normalizable wave functions for a system with a particular choice of basis, together with the null vector, constitute a Hilbert space.
Not all functions of interest are elements of some Hilbert space, say $L^2$. The most glaring example is the set of functions $e^{i \mathbf{p} \cdot \mathbf{x}/\hbar}$. These are plane wave solutions of the Schrödinger equation for a free particle that are not normalizable, hence not in $L^2$. But they are nonetheless fundamental for the description. One can, using them, express functions that are normalizable using wave packets. They are, in a sense, a basis (but not a Hilbert space basis, nor a Hamel basis) in which wave functions of interest can be expressed. There is also the artifact "normalization to a delta function" that is frequently employed for notational convenience, see further down. The delta functions themselves are not square integrable either.
The above description of the function space containing the wave functions is mostly mathematically motivated. The function spaces are, due to completeness, very large in a certain sense. Not all functions are realistic descriptions of any physical system. For instance, in the function space $L^2$ one can find the function that takes on the value $0$ for all rational numbers and $-i$ for the irrationals in the interval $[0, 1]$. This is square integrable,
but can hardly represent a physical state.
Common Hilbert spaces
While the space of solutions as a whole is a Hilbert space there are many other Hilbert spaces that commonly occur as ingredients.
Square integrable complex valued functions on the interval $[0, 2\pi]$. The set $\{ e^{i n x}/\sqrt{2\pi},\ n \in \mathbb{Z} \}$ is a Hilbert space basis, i.e. a maximal orthonormal set.
The Fourier transform takes functions in the above space to elements of $\ell^2(\mathbb{Z})$, the space of square summable functions $\mathbb{Z} \to \mathbb{C}$. The latter space is a Hilbert space and the Fourier transform is an isomorphism of Hilbert spaces. Its basis is $\{ e_i,\ i \in \mathbb{Z} \}$ with $e_i(j) = \delta_{ij}$.
The most basic example of spanning polynomials is in the space of square integrable functions on the interval $[-1, 1]$, for which the Legendre polynomials are a Hilbert space basis (complete orthonormal set).
The square integrable functions on the unit sphere $S^2$ form a Hilbert space. The basis functions in this case are the spherical harmonics. The Legendre polynomials are ingredients in the spherical harmonics. Most problems with rotational symmetry will have "the same" (known) solution with respect to that symmetry, so the original problem is reduced to a problem of lower dimensionality.
The associated Laguerre polynomials appear in the hydrogenic wave function problem after factoring out the spherical harmonics. These span the Hilbert space of square integrable functions on the semi-infinite interval $[0, \infty)$.
More generally, one may consider a unified treatment of all second order polynomial solutions to the Sturm–Liouville equations in the setting of Hilbert space. These include the Legendre and Laguerre polynomials as well as Chebyshev polynomials, Jacobi polynomials and Hermite polynomials. All of these actually appear in physical problems, the latter ones in the harmonic oscillator, and what is otherwise a bewildering maze of properties of special functions becomes an organized body of facts.
There also occur finite-dimensional Hilbert spaces. The space $\mathbb{C}^n$ is a Hilbert space of dimension $n$. The inner product is the standard inner product on these spaces. In it, the "spin part" of a single particle wave function resides.
In the non-relativistic description of an electron one has $n = 2$ and the total wave function is a solution of the Pauli equation.

In the corresponding relativistic treatment, $n = 4$ and the wave function solves the Dirac equation.
With more particles, the situation is more complicated. One has to employ tensor products and use representation theory of the symmetry groups involved (the rotation group and the Lorentz group respectively) to extract from the tensor product the spaces in which the (total) spin wave functions reside. (Further problems arise in the relativistic case unless the particles are free. See the Bethe–Salpeter equation.) Corresponding remarks apply to the concept of isospin, for which the symmetry group is SU(2). The models of the nuclear forces of the sixties (still useful today, see nuclear force) used the symmetry group SU(3). In this case as well, the part of the wave functions corresponding to the inner symmetries resides in some $\mathbb{C}^n$ or in subspaces of tensor products of such spaces.
In quantum field theory the underlying Hilbert space is Fock space. It is built from free single-particle states, i.e. wave functions when a representation is chosen, and can accommodate any finite, not necessarily constant in time, number of particles. The interesting (or rather the tractable) dynamics lies not in the wave functions but in the field operators that are operators acting on Fock space. Thus the Heisenberg picture is the most common choice (constant states, time varying operators).
Due to the infinite-dimensional nature of the system, the appropriate mathematical tools are objects of study in functional analysis.
Simplified description
Not all introductory textbooks take the long route and introduce the full Hilbert space machinery; many instead focus on the non-relativistic Schrödinger equation in position representation for certain standard potentials. The following constraints on the wave function are sometimes explicitly formulated for the calculations and physical interpretation to make sense:
The wave function must be square integrable. This is motivated by the Copenhagen interpretation of the wave function as a probability amplitude.
It must be everywhere continuous and everywhere continuously differentiable. This is motivated by the appearance of the Schrödinger equation for most physically reasonable potentials.
It is possible to relax these conditions somewhat for special purposes.
If these requirements are not met, it is not possible to interpret the wave function as a probability amplitude. Note that exceptions to the continuity-of-derivatives rule can arise at points where the potential has an infinite discontinuity. For example, in the particle in a box, the derivative of the wavefunction can be discontinuous at the boundary of the box, where the potential is known to have an infinite discontinuity.
This does not alter the structure of the Hilbert space that these particular wave functions inhabit, but the subspace of the square-integrable functions $L^2$, which is a Hilbert space, satisfying the second requirement is not closed in $L^2$, hence not a Hilbert space in itself.
The functions that do not meet the requirements are still needed for both technical and practical reasons.
More on wave functions and abstract state space
As has been demonstrated, the set of all possible wave functions in some representation for a system constitutes an in general infinite-dimensional Hilbert space. Due to the multiple possible choices of representation basis, these Hilbert spaces are not unique. One therefore talks about an abstract Hilbert space, state space, where the choice of representation and basis is left undetermined. Specifically, each state is represented as an abstract vector in state space. A quantum state $|\Psi\rangle$ in any representation is generally expressed as a vector

$$|\Psi(t)\rangle = \sum_{\boldsymbol{\alpha}} \int d^m\!\boldsymbol{\omega}\;\; \Psi(\boldsymbol{\alpha}, \boldsymbol{\omega}, t)\; |\boldsymbol{\alpha}, \boldsymbol{\omega}\rangle,$$

where

$|\boldsymbol{\alpha}, \boldsymbol{\omega}\rangle$: the basis vectors of the chosen representation

$d^m\!\boldsymbol{\omega} = d\omega_1\, d\omega_2 \cdots d\omega_m$: a differential volume element in the continuous degrees of freedom

$\Psi(\boldsymbol{\alpha}, \boldsymbol{\omega}, t)$: a component of the vector $|\Psi(t)\rangle$, called the wave function of the system

$\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n)$: dimensionless discrete quantum numbers

$\boldsymbol{\omega} = (\omega_1, \omega_2, \ldots, \omega_m)$: continuous variables (not necessarily dimensionless)
These quantum numbers index the components of the state vector. More, all $\boldsymbol{\alpha}$ are in an $n$-dimensional set $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \cdots \times \mathcal{A}_n$ where each $\mathcal{A}_i$ is the set of allowed values for $\alpha_i$; all $\boldsymbol{\omega}$ are in an $m$-dimensional "volume" $\Omega \subseteq \mathbb{R}^m$ where $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_m$ and each $\Omega_j \subseteq \mathbb{R}$ is the set of allowed values for $\omega_j$, a subset of the real numbers $\mathbb{R}$. For generality $n$ and $m$ are not necessarily equal.
Example: for a single particle in 3d with spin, neglecting all other degrees of freedom, the discrete quantum number is the single number $\alpha = s_z$, the continuous variables are $\boldsymbol{\omega} = (x, y, z)$, and the wave function is $\Psi(x, y, z, s_z, t)$.
The probability density of finding the system at time $t$ at state $|\boldsymbol{\alpha}, \boldsymbol{\omega}\rangle$ is

$$\rho_{\boldsymbol{\alpha}, \boldsymbol{\omega}}(t) = |\Psi(\boldsymbol{\alpha}, \boldsymbol{\omega}, t)|^2.$$

The probability of finding the system with $\boldsymbol{\alpha}$ in some or all possible discrete-variable configurations, $D \subseteq \mathcal{A}$, and $\boldsymbol{\omega}$ in some or all possible continuous-variable configurations, $C \subseteq \Omega$, is the sum and integral over the density,

$$P(t) = \sum_{\boldsymbol{\alpha} \in D} \int_C d^m\!\boldsymbol{\omega}\;\; \rho_{\boldsymbol{\alpha}, \boldsymbol{\omega}}(t).$$

Since the sum of all probabilities must be 1, the normalization condition

$$\sum_{\boldsymbol{\alpha} \in \mathcal{A}} \int_\Omega d^m\!\boldsymbol{\omega}\;\; \rho_{\boldsymbol{\alpha}, \boldsymbol{\omega}}(t) = 1$$
must hold at all times during the evolution of the system.
The normalization condition requires $\rho\, d^m\!\boldsymbol{\omega}$ to be dimensionless; by dimensional analysis $\Psi$ must have the same units as $(\omega_1 \omega_2 \cdots \omega_m)^{-1/2}$.
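For example, for a single spinless particle in 3d position space, where $\boldsymbol{\omega} = (x, y, z)$ carries units of length, the wave function carries units of length to the power $-3/2$:

$$\int |\Psi(x, y, z, t)|^2\, dx\, dy\, dz = 1 \;\Rightarrow\; [\Psi] = \text{length}^{-3/2}.$$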
Ontology
Whether the wave function exists in reality, and what it represents, are major questions in the interpretation of quantum mechanics. Many famous physicists of a previous generation puzzled over this problem, such as Erwin Schrödinger, Albert Einstein and Niels Bohr. Some advocate formulations or variants of the Copenhagen interpretation (e.g. Bohr, Eugene Wigner and John von Neumann) while others, such as John Archibald Wheeler or Edwin Thompson Jaynes, take the more classical approach and regard the wave function as representing information in the mind of the observer, i.e. a measure of our knowledge of reality. Some, including Schrödinger, David Bohm and Hugh Everett III and others, argued that the wave function must have an objective, physical existence. Einstein thought that a complete description of physical reality should refer directly to physical space and time, as distinct from the wave function, which refers to an abstract mathematical space.
| Physical sciences | Quantum mechanics | null |
145424 | https://en.wikipedia.org/wiki/Exoskeleton | Exoskeleton | An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is a skeleton that is on the exterior of an animal in the form of hardened integument, which both supports the body's shape and protects the internal organs, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed underneath other soft tissues. Some large, hard and non-flexible protective exoskeletons are known as shell or armour.
Examples of exoskeletons in animals include the cuticle skeletons shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the skeletal cups formed by hardened secretion of stony corals, the test/tunic of sea squirts and sea urchins, and the prominent mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton.
Role
Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in addition to structural support in many animals, including protection, respiration, excretion, sensation, feeding and courtship display, and as an osmotic barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from parasites and predators and in providing attachment points for musculature.
Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite.
Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoderms do not possess an exoskeleton and their test is always contained within a layer of living tissue.
Exoskeletons have evolved independently many times; 18 lineages evolved calcified exoskeletons alone. Further, other lineages have produced tough outer coatings, such as some mammals, that are analogous to an exoskeleton. This coating is constructed from bone in the armadillo, and hair in the pangolin. The armour of reptiles like turtles and dinosaurs like Ankylosaurs is constructed of bone; crocodiles have bony scutes and horny scales.
Growth
Since exoskeletons are rigid, they present some limits to growth. Organisms with open shells can grow by adding new material to the aperture of their shell, as is the case in gastropods, bivalves, and other molluscans. A true exoskeleton, like that found in panarthropods, must be shed via moulting (ecdysis) when the animal starts to outgrow it. A new exoskeleton is produced beneath the old one, and the new skeleton is soft and pliable before shedding the old one. The animal will typically stay in a den or burrow during moulting, as it is quite vulnerable to trauma during this period. Once at least partially set, the organism will plump itself up to try to expand the exoskeleton. The new exoskeleton is still capable of growing to some degree before it is eventually hardened. In contrast, moulting reptiles shed only the outer layer of skin and often exhibit indeterminate growth. These animals produce new skin and integuments throughout their life, replacing them according to growth. Arthropod growth, however, is limited by the space within its current exoskeleton. Failure to shed the exoskeleton once outgrown can result in the animal's death or prevent subadults from reaching maturity, thus preventing them from reproducing. This is the mechanism behind some insect pesticides, such as Azadirachtin.
Paleontological significance
Exoskeletons, as hard parts of organisms, are greatly useful in assisting the preservation of organisms, whose soft parts usually rot before they can be fossilized. Mineralized exoskeletons can be preserved as shell fragments. The possession of an exoskeleton permits a couple of other routes to fossilization. For instance, the strong layer can resist compaction, allowing a mould of the organism to be formed underneath the skeleton, which may later decay. Alternatively, exceptional preservation may result in chitin being mineralised, as in the Burgess Shale, or transformed to the resistant polymer keratin, which can resist decay and be recovered.
However, our dependence on fossilised skeletons also significantly limits our understanding of evolution. Only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. It helps that exoskeletons often contain "muscle scars", marks where muscles have been attached to the exoskeleton, which may allow the reconstruction of much of an organism's internal parts from its exoskeleton alone. The most significant limitation is that, although there are 30-plus phyla of living animals, two-thirds of these phyla have never been found as fossils, because most animal species are soft-bodied and decay before they can become fossilised.
Mineralized skeletons first appear in the fossil record shortly before the base of the Cambrian period, about 550 million years ago. The evolution of a mineralised exoskeleton is considered a possible driving force of the Cambrian explosion of animal life, resulting in a diversification of predatory and defensive tactics. However, some Precambrian (Ediacaran) organisms produced tough outer shells while others, such as Cloudina, had a calcified exoskeleton. Some Cloudina shells even show evidence of predation, in the form of borings.
Evolution
The fossil record primarily contains mineralized exoskeletons, since these are by far the most durable. Since most lineages with exoskeletons are thought to have started with a non-mineralized exoskeleton which they later mineralized, it is difficult to comment on the very early evolution of each lineage's exoskeleton. It is known, however, that in a very short course of time, just before the Cambrian period, exoskeletons made of various materials – silica, calcium phosphate, calcite, aragonite, and even glued-together mineral flakes – sprang up in a range of different environments. Most lineages adopted the form of calcium carbonate which was stable in the ocean at the time they first mineralized, and did not change from this mineral morph - even when it became less favourable.
Some Precambrian (Ediacaran) organisms produced tough but non-mineralized outer shells, while others, such as Cloudina, had a calcified exoskeleton, but mineralized skeletons did not become common until the beginning of the Cambrian period, with the rise of the "small shelly fauna". Just after the base of the Cambrian, these miniature fossils become diverse and abundant – this abruptness may be an illusion since the chemical conditions which preserved the small shells appeared at the same time. Most other shell-forming organisms appeared during the Cambrian period, with the Bryozoans being the only calcifying phylum to appear later, in the Ordovician. The sudden appearance of shells has been linked to a change in ocean chemistry which made the calcium compounds of which the shells are constructed stable enough to be precipitated into a shell. However, this is unlikely to be a sufficient cause, as the main construction cost of shells is in creating the proteins and polysaccharides required for the shell's composite structure, not in the precipitation of the mineral components. Skeletonization also appeared at almost the same time that animals started burrowing to avoid predation, and one of the earliest exoskeletons was made of glued-together mineral flakes, suggesting that skeletonization was likewise a response to increased pressure from predators.
Ocean chemistry may also control which mineral shells are constructed of. Calcium carbonate has two forms, the stable calcite and the metastable aragonite, which is stable within a reasonable range of chemical environments but rapidly becomes unstable outside this range. When the oceans contain a relatively high proportion of magnesium compared to calcium, aragonite is more stable, but as the magnesium concentration drops, it becomes less stable, hence harder to incorporate into an exoskeleton, as it will tend to dissolve.
Except for the molluscs, whose shells often comprise both forms, most lineages use just one form of the mineral. The form used appears to reflect the seawater chemistry – thus which form was more easily precipitated – at the time that the lineage first evolved a calcified skeleton, and does not change thereafter. However, the relative abundance of calcite- and aragonite-using lineages does not reflect subsequent seawater chemistry – the magnesium/calcium ratio of the oceans appears to have a negligible impact on organisms' success, which is instead controlled mainly by how well they recover from mass extinctions. A recently discovered modern gastropod Chrysomallon squamiferum that lives near deep-sea hydrothermal vents illustrates the influence of both ancient and modern local chemical environments: its shell is made of aragonite, which is found in some of the earliest fossil molluscs; but it also has armour plates on the sides of its foot, and these are mineralised with the iron sulfides pyrite and greigite, which had never previously been found in any metazoan but whose ingredients are emitted in large quantities by the vents.
| Biology and health sciences | Skeletal system | Biology |
145440 | https://en.wikipedia.org/wiki/Rare-earth%20element | Rare-earth element | The rare-earth elements (REE), also called the rare-earth metals or rare earths, and sometimes the lanthanides or lanthanoids (although scandium and yttrium, which do not belong to this series, are usually included as rare earths), are a set of 17 nearly indistinguishable lustrous silvery-white soft heavy metals. Compounds containing rare earths have diverse applications in electrical and electronic components, lasers, glass, magnetic materials, and industrial processes.
Scandium and yttrium are considered rare-earth elements because they tend to occur in the same ore deposits as the lanthanides and exhibit similar chemical properties, but have different electrical and magnetic properties. The term 'rare-earth' is a misnomer because they are not actually scarce, although historically it took a long time to isolate these elements.
These metals tarnish slowly in air at room temperature and react slowly with cold water to form hydroxides, liberating hydrogen. They react with steam to form oxides and ignite spontaneously at a temperature of about 400 °C. These elements and their compounds have no biological function other than in several specialized enzymes, such as in lanthanide-dependent methanol dehydrogenases in bacteria. The water-soluble compounds are mildly to moderately toxic, but the insoluble ones are not. All isotopes of promethium are radioactive, and it does not occur naturally in the earth's crust, except for a trace amount generated by spontaneous fission of uranium-238. They are often found in minerals with thorium, and less commonly uranium.
Though rare-earth elements are technically relatively plentiful in the entire Earth's crust (cerium being the 25th-most-abundant element at 68 parts per million, more abundant than copper), in practice this is spread thin across trace impurities, so to obtain rare earths at usable purity requires processing enormous amounts of raw ore at great expense, thus the name "rare" earths.
Because of their geochemical properties, rare-earth elements are typically dispersed and not often found concentrated in rare-earth minerals. Consequently, economically exploitable ore deposits are sparse. The first rare-earth mineral discovered (1787) was gadolinite, a black mineral composed of cerium, yttrium, iron, silicon, and other elements. This mineral was extracted from a mine in the village of Ytterby in Sweden; four of the rare-earth elements bear names derived from this single location.
Elements
A table listing the 17 rare-earth elements, their atomic number and symbol, the etymology of their names, and their main uses (see also Applications of lanthanides) is provided here. Some of the rare-earth elements are named after the scientists who discovered them, or elucidated their elemental properties, and some after the geographical locations where discovered.
A mnemonic for the names of the sixth-row elements in order is "Lately college parties never produce sexy European girls that drink heavily even though you look".
Discovery and early history
Rare earths were mainly discovered as components of minerals. Ytterbium was found in the "ytterbite" (renamed to gadolinite in 1800) discovered by Lieutenant Carl Axel Arrhenius in 1787 at a quarry in the village of Ytterby, Sweden and termed "rare" because it had never yet been seen. Arrhenius's "ytterbite" reached Johan Gadolin, a Royal Academy of Turku professor, and his analysis yielded an unknown oxide ("earth" in the geological parlance of the day), which he called yttria. Anders Gustav Ekeberg isolated beryllium from the gadolinite but failed to recognize other elements in the ore. After this discovery in 1794, a mineral from Bastnäs near Riddarhyttan, Sweden, which was believed to be an iron–tungsten mineral, was re-examined by Jöns Jacob Berzelius and Wilhelm Hisinger. In 1803 they obtained a white oxide and called it ceria. Martin Heinrich Klaproth independently discovered the same oxide and called it ochroia. It took another 30 years for researchers to determine that other elements were contained in the two ores ceria and yttria (the similarity of the rare-earth metals' chemical properties made their separation difficult).
In 1839 Carl Gustav Mosander, an assistant of Berzelius, separated ceria by heating the nitrate and dissolving the product in nitric acid. He called the oxide of the soluble salt lanthana. It took him three more years to separate the lanthana further into didymia and pure lanthana. Didymia, although not further separable by Mosander's techniques, was in fact still a mixture of oxides.
In 1842 Mosander also separated the yttria into three oxides: pure yttria, terbia, and erbia (all the names are derived from the town name "Ytterby"). The earth giving pink salts he called terbium; the one that yielded yellow peroxide he called erbium.
In 1842 the number of known rare-earth elements had reached six: yttrium, cerium, lanthanum, didymium, erbium, and terbium.
Nils Johan Berlin and Marc Delafontaine tried also to separate the crude yttria and found the same substances that Mosander obtained, but Berlin named (1860) the substance giving pink salts erbium, and Delafontaine named the substance with the yellow peroxide terbium. This confusion led to several false claims of new elements, such as the mosandrium of J. Lawrence Smith, or the philippium and decipium of Delafontaine. Due to the difficulty in separating the metals (and determining the separation is complete), the total number of false discoveries was dozens, with some putting the total number of discoveries at over a hundred.
Spectroscopic identification
There were no further discoveries for 30 years, and the element didymium was listed in the periodic table of elements with an atomic mass of 138. In 1879, Delafontaine used the new physical process of optical flame spectroscopy and found several new spectral lines in didymia. Also in 1879, Paul Émile Lecoq de Boisbaudran isolated the new element samarium from the mineral samarskite.
The samaria earth was further separated by Lecoq de Boisbaudran in 1886, and a similar result was obtained by Jean Charles Galissard de Marignac by direct isolation from samarskite. They named the element gadolinium after Johan Gadolin, and its oxide was named "gadolinia".
Further spectroscopic analysis between 1886 and 1901 of samaria, yttria, and samarskite by William Crookes, Lecoq de Boisbaudran and Eugène-Anatole Demarçay yielded several new spectral lines that indicated the existence of an unknown element. The fractional crystallization of the oxides then yielded europium in 1901.
In 1839 a third source for rare earths became available: a mineral similar to gadolinite called uranotantalum (now called "samarskite"), an oxide of a mixture of elements such as yttrium, ytterbium, iron, uranium, thorium, calcium, niobium, and tantalum. This mineral from Miass in the southern Ural Mountains was documented by Gustav Rose. The Russian chemist R. Harmann proposed that a new element, which he called "ilmenium", should be present in this mineral, but later Christian Wilhelm Blomstrand, Galissard de Marignac, and Heinrich Rose found only tantalum and niobium (columbium) in it.
The exact number of rare-earth elements that existed was highly unclear, and a maximum number of 25 was estimated. The use of X-ray spectra (obtained by X-ray crystallography) by Henry Gwyn Jeffreys Moseley made it possible to assign atomic numbers to the elements. Moseley found that the exact number of lanthanides had to be 15, but that element 61 had not yet been discovered. (This is promethium, a radioactive element whose most stable isotope has a half-life of just 17.7 years.)
Using these facts about atomic numbers from X-ray crystallography, Moseley also showed that hafnium (element 72) would not be a rare-earth element. Moseley was killed in World War I in 1915, years before hafnium was discovered. Hence, the claim of Georges Urbain that he had discovered element 72 was untrue. Hafnium is an element that lies in the periodic table immediately below zirconium, and hafnium and zirconium have very similar chemical and physical properties.
Sources and purification
During the 1940s, Frank Spedding and others in the United States (during the Manhattan Project) developed chemical ion-exchange procedures for separating and purifying rare-earth elements. This method was first applied to the actinides for separating plutonium-239 and neptunium from uranium, thorium, actinium, and the other actinides in the materials produced in nuclear reactors. Plutonium-239 was very desirable because it is a fissile material.
The principal sources of rare-earth elements are the minerals bastnäsite (RCO3F, where R is a mixture of rare-earth elements), monazite (XPO4, where X is a mixture of rare-earth elements and sometimes thorium), and loparite ((Ce,Na,Ca)(Ti,Nb)O3), and the lateritic ion-adsorption clays. Despite their high relative abundance, rare-earth minerals are more difficult to mine and extract than equivalent sources of transition metals (due in part to their similar chemical properties), making the rare-earth elements relatively expensive. Their industrial use was very limited until efficient separation techniques were developed, such as ion exchange, fractional crystallization, and liquid–liquid extraction during the late 1950s and early 1960s.
Some ilmenite concentrates contain small amounts of scandium and other rare-earth elements, which could be analysed by X-ray fluorescence (XRF).
Classification
Before the time that ion exchange methods and elution were available, the separation of the rare earths was primarily achieved by repeated precipitation or crystallization. In those days, the first separation was into two main groups, the cerium earths (lanthanum, cerium, praseodymium, neodymium, and samarium) and the yttrium earths (scandium, yttrium, dysprosium, holmium, erbium, thulium, ytterbium, and lutetium). Europium, gadolinium, and terbium were either considered as a separate group of rare-earth elements (the terbium group), or europium was included in the cerium group, and gadolinium and terbium were included in the yttrium group. In the latter case, the f-block elements are split in half: the first half (La–Eu) form the cerium group, and the second half (Gd–Yb) together with group 3 (Sc, Y, Lu) form the yttrium group. The reason for this division arose from the difference in solubility of rare-earth double sulfates with sodium and potassium. The sodium double sulfates of the cerium group are poorly soluble, those of the terbium group slightly soluble, and those of the yttrium group very soluble. Sometimes, the yttrium group was further split into the erbium group (dysprosium, holmium, erbium, and thulium) and the ytterbium group (ytterbium and lutetium), but today the main grouping is between the cerium and the yttrium groups. Today, the rare-earth elements are classified as light or heavy rare-earth elements, rather than in cerium and yttrium groups.
Light versus heavy classification
The classification of rare-earth elements is inconsistent between authors. The most common distinction between rare-earth elements is made by atomic number; those with low atomic numbers are referred to as light rare-earth elements (LREE), those with high atomic numbers are the heavy rare-earth elements (HREE), and those that fall in between are typically referred to as the middle rare-earth elements (MREE). Commonly, rare-earth elements with atomic numbers 57 to 61 (lanthanum to promethium) are classified as light and those with atomic numbers 62 and greater are classified as heavy rare-earth elements. Increasing atomic numbers between light and heavy rare-earth elements and decreasing atomic radii throughout the series cause chemical variations. Europium is exempt from this classification, as it has two valence states: Eu2+ and Eu3+. Yttrium is grouped as a heavy rare-earth element due to chemical similarities. The break between the two groups is sometimes put elsewhere, such as between elements 63 (europium) and 64 (gadolinium). The actual metallic densities of these two groups overlap, with the "light" group having densities from 6.145 (lanthanum) to 7.26 (promethium) or 7.52 (samarium) g/cc, and the "heavy" group from 6.965 (ytterbium) to 9.32 (thulium), as well as including yttrium at 4.47. Europium has a density of 5.24.
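As a minimal sketch of the common convention just described (atomic numbers 57–61 light, 62 and above heavy, yttrium grouped with the heavies; europium's special case and the alternative cutoffs noted above are ignored), the classification can be expressed as:

```python
# Sketch of the common light/heavy REE convention described above.
# Authors differ, so these cutoffs are one common choice, not a standard.

def classify_ree(symbol: str, atomic_number: int) -> str:
    """Classify a rare-earth element as LREE or HREE."""
    if symbol == "Y":                 # yttrium (Z = 39) is grouped with the
        return "HREE"                 # heavies on chemical similarity
    if 57 <= atomic_number <= 61:     # lanthanum through promethium
        return "LREE"
    return "HREE"                     # Z >= 62: samarium onward

for sym, z in [("La", 57), ("Nd", 60), ("Sm", 62), ("Dy", 66), ("Y", 39)]:
    print(sym, classify_ree(sym, z))
```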
Origin
Rare-earth elements, except scandium, are heavier than iron and thus are produced by supernova nucleosynthesis or by the s-process in asymptotic giant branch stars. In nature, spontaneous fission of uranium-238 produces trace amounts of radioactive promethium, but most promethium is synthetically produced in nuclear reactors.
Due to their chemical similarity, the concentrations of rare earths in rocks are only slowly changed by geochemical processes, making their proportions useful for geochronology and dating fossils.
Compounds
Rare-earth elements occur in nature in combination with phosphate (monazite), carbonate-fluoride (bastnäsite), and oxygen anions.
In their oxides, most rare-earth elements only have a valence of 3 and form sesquioxides (cerium forms the dioxide CeO2). Five different crystal structures are known, depending on the element and the temperature. The X-phase and the H-phase are only stable above 2000 K. At lower temperatures, there are the hexagonal A-phase, the monoclinic B-phase, and the cubic C-phase, which is the stable form at room temperature for most of the elements. The C-phase was once thought to be in space group I23 (no. 199), but is now known to be in space group Ia-3 (no. 206). The structure is similar to that of fluorite or cerium dioxide (in which the cations form a face-centred cubic lattice and the anions sit inside the tetrahedra of cations), except that one-quarter of the anions (oxygen) are missing. The unit cell of these sesquioxides corresponds to eight unit cells of fluorite or cerium dioxide, with 32 cations instead of 4. This is called the bixbyite structure, as it occurs in a mineral of that name ((Mn,Fe)2O3).
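The stoichiometry can be checked with a short count (a worked example added for clarity, not taken from the source): eight fluorite-type cells contain 8 × 4 = 32 cations and 8 × 8 = 64 anions; removing one-quarter of the anions leaves 64 - 16 = 48, and the ratio 32 : 48 = 2 : 3 is exactly that of a sesquioxide, R2O3.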
Geological distribution
As seen in the chart, rare-earth elements are found on Earth at similar concentrations to many common transition metals. The most abundant rare-earth element is cerium, which is actually the 25th most abundant element in Earth's crust, at 68 parts per million (about as common as copper). The exception is promethium: highly unstable and radioactive, this "rare earth" is quite scarce. The longest-lived isotope of promethium has a half-life of 17.7 years, so the element exists in nature in only negligible amounts (approximately 572 g in the entire Earth's crust). Promethium is one of the two elements that do not have stable (non-radioactive) isotopes and are followed by (i.e. with higher atomic number) stable elements (the other being technetium).
The rare-earth elements are often found together. During the sequential accretion of the Earth, the dense rare-earth elements were incorporated into the deeper portions of the planet. Early differentiation of molten material largely incorporated the rare earths into mantle rocks. The high field strength and large ionic radii of rare earths make them incompatible with the crystal lattices of most rock-forming minerals, so REE will undergo strong partitioning into a melt phase if one is present. REE are chemically very similar and have always been difficult to separate, but the gradual decrease in ionic radius from light REE (LREE) to heavy REE (HREE), called the lanthanide contraction, can produce a broad separation between light and heavy REE. The larger ionic radii of LREE make them generally more incompatible than HREE in rock-forming minerals, so they will partition more strongly into a melt phase, while HREE may prefer to remain in the crystalline residue, particularly if it contains HREE-compatible minerals like garnet. The result is that all magma formed from partial melting will always have greater concentrations of LREE than HREE, and individual minerals may be dominated by either HREE or LREE, depending on which range of ionic radii best fits the crystal lattice.
Among the anhydrous rare-earth phosphates, it is the tetragonal mineral xenotime that incorporates yttrium and the HREE, whereas the monoclinic monazite phase incorporates cerium and the LREE preferentially. The smaller size of the HREE allows greater solid solubility in the rock-forming minerals that make up Earth's mantle, and thus yttrium and the HREE show less enrichment in Earth's crust relative to chondritic abundance than do cerium and the LREE. This has economic consequences: large ore bodies of LREE are known around the world and are being exploited. Ore bodies for HREE are rarer, smaller, and less concentrated. Most of the current supply of HREE originates in the "ion-absorption clay" ores of Southern China. Some of these ores yield concentrates containing about 65% yttrium oxide, with the HREE being present in ratios reflecting the Oddo–Harkins rule: even-numbered REE at abundances of about 5% each, and odd-numbered REE at abundances of about 1% each. Similar compositions are found in xenotime or gadolinite.
Well-known minerals containing yttrium, and other HREE, include gadolinite, xenotime, samarskite, euxenite, fergusonite, yttrotantalite, yttrotungstite, yttrofluorite (a variety of fluorite), thalenite, and yttrialite. Small amounts occur in zircon, which derives its typical yellow fluorescence from some of the accompanying HREE. The zirconium mineral eudialyte, such as is found in southern Greenland, contains small but potentially useful amounts of yttrium. Of the above yttrium minerals, most played a part in providing research quantities of lanthanides during the discovery days. Xenotime is occasionally recovered as a byproduct of heavy-sand processing, but is not as abundant as the similarly recovered monazite (which typically contains a few percent of yttrium). Uranium ores from Ontario have occasionally yielded yttrium as a byproduct.
Well-known minerals containing cerium, and other LREE, include bastnäsite, monazite, allanite, loparite, ancylite, parisite, lanthanite, chevkinite, cerite, stillwellite, britholite, fluocerite, and cerianite. Monazite (marine sands from Brazil, India, or Australia; rock from South Africa), bastnäsite (from Mountain Pass rare earth mine, or several localities in China), and loparite (Kola Peninsula, Russia) have been the principal ores of cerium and the light lanthanides.
Enriched deposits of rare-earth elements at the surface of the Earth, carbonatites and pegmatites, are related to alkaline plutonism, an uncommon kind of magmatism that occurs in tectonic settings where there is rifting or that are near subduction zones. In a rift setting, the alkaline magma is produced by very small degrees of partial melting (<1%) of garnet peridotite in the upper mantle (200 to 600 km depth). This melt becomes enriched in incompatible elements, like the rare-earth elements, by leaching them out of the crystalline residue. The resultant magma rises as a diapir, or diatreme, along pre-existing fractures, and can be emplaced deep in the crust, or erupted at the surface. Typical REE-enriched deposit types forming in rift settings are carbonatites, and A- and M-Type granitoids. Near subduction zones, partial melting of the subducting plate within the asthenosphere (80 to 200 km depth) produces a volatile-rich magma (high concentrations of CO2 and water) with high concentrations of alkaline elements and high element mobility, into which the rare earths are strongly partitioned. This melt may also rise along pre-existing fractures, and be emplaced in the crust above the subducting slab or erupted at the surface. REE-enriched deposits forming from these melts are typically S-Type granitoids.
Alkaline magmas enriched with rare-earth elements include carbonatites, peralkaline granites (pegmatites), and nepheline syenite. Carbonatites crystallize from CO2-rich fluids, which can be produced by partial melting of hydrous-carbonated lherzolite to produce a CO2-rich primary magma, by fractional crystallization of an alkaline primary magma, or by separation of a CO2-rich immiscible liquid from a silicate melt. These liquids most commonly form in association with very deep Precambrian cratons, like the ones found in Africa and the Canadian Shield. Ferrocarbonatites are the most common type of carbonatite to be enriched in REE, and are often emplaced as late-stage, brecciated pipes at the core of igneous complexes; they consist of fine-grained calcite and hematite, sometimes with significant concentrations of ankerite and minor concentrations of siderite. Large carbonatite deposits enriched in rare-earth elements include Mount Weld in Australia, Thor Lake in Canada, Zandkopsdrift in South Africa, and Mountain Pass in the USA. Peralkaline granites (A-Type granitoids) have very high concentrations of alkaline elements and very low concentrations of phosphorus; they are deposited at moderate depths in extensional zones, often as igneous ring complexes, or as pipes, massive bodies, and lenses. These fluids have very low viscosities and high element mobility, which allows for the crystallization of large grains, despite a relatively short crystallization time upon emplacement; their large grain size is why these deposits are commonly referred to as pegmatites. Economically viable pegmatites are divided into Lithium-Cesium-Tantalum (LCT) and Niobium-Yttrium-Fluorine (NYF) types; NYF types are enriched in rare-earth minerals. Examples of rare-earth pegmatite deposits include Strange Lake in Canada and Khaladean-Buregtey in Mongolia. Nepheline syenite (M-Type granitoids) deposits are 90% feldspar and feldspathoid minerals. They are deposited in small, circular massifs and contain high concentrations of rare-earth-bearing accessory minerals. For the most part, these deposits are small, but important examples include Ilimaussaq-Kvanefjeld in Greenland, and Lovozero in Russia.
Rare-earth elements can also be enriched in deposits by secondary alteration, either by interactions with hydrothermal fluids or meteoric water, or by erosion and transport of resistate REE-bearing minerals. Argillization of primary minerals enriches insoluble elements by leaching out silica and other soluble elements, recrystallizing feldspar into clay minerals such as kaolinite, halloysite, and montmorillonite. In tropical regions where precipitation is high, weathering forms a thick argillized regolith; this process, called supergene enrichment, produces laterite deposits, in which heavy rare-earth elements are incorporated into the residual clay by absorption. This kind of deposit is only mined for REE in Southern China, where the majority of global heavy rare-earth element production occurs. REE-laterites do form elsewhere, including over the carbonatite at Mount Weld in Australia. REE may also be extracted from placer deposits if the sedimentary parent lithology contains REE-bearing, heavy resistate minerals.
In 2011, Yasuhiro Kato, a geologist at the University of Tokyo who led a study of Pacific Ocean seabed mud, published results indicating the mud could hold rich concentrations of rare-earth minerals. The deposits, studied at 78 sites, came from "[h]ot plumes from hydrothermal vents pull[ing] these materials out of seawater and deposit[ing] them on the seafloor, bit by bit, over tens of millions of years. One square patch of metal-rich mud 2.3 kilometers wide might contain enough rare earths to meet most of the global demand for a year, Japanese geologists report in Nature Geoscience." "I believe that rare[-]earth resources undersea are much more promising than on-land resources," said Kato. "[C]oncentrations of rare earths were comparable to those found in clays mined in China. Some deposits contained twice as much heavy rare earths such as dysprosium, a component of magnets in hybrid car motors."
The global demand for rare-earth elements (REEs) is expected to increase more than fivefold by 2030.
Geochemistry
The REE geochemical classification is usually done on the basis of their atomic weight. One of the most common classifications divides REE into three groups: light rare earths (LREE, from 57La to 60Nd), intermediate (MREE, from 62Sm to 67Ho), and heavy (HREE, from 68Er to 71Lu). REE usually appear as trivalent ions, except for Ce and Eu, which can take the form of Ce4+ and Eu2+ depending on the redox conditions of the system. Consequently, REE are characterized by essentially identical chemical reactivity, which results in serial behaviour during geochemical processes rather than behaviour characteristic of any single element of the series. Sc, Y, and Lu can be electronically distinguished from the other rare earths because they do not have f valence electrons, whereas the others do, but their chemical behaviour is almost the same.
A distinguishing factor in the geochemical behaviour of the REE is linked to the so-called "lanthanide contraction" which represents a higher-than-expected decrease in the atomic/ionic radius of the elements along the series. This is determined by the variation of the shielding effect towards the nuclear charge due to the progressive filling of the 4f orbital which acts against the electrons of the 6s and 5d orbitals. The lanthanide contraction has a direct effect on the geochemistry of the lanthanides, which show a different behaviour depending on the systems and processes in which they are involved. The effect of the lanthanide contraction can be observed in the REE behaviour both in a CHARAC-type geochemical system (CHArge-and-RAdius-Controlled) where elements with similar charge and radius should show coherent geochemical behaviour, and in non-CHARAC systems, such as aqueous solutions, where the electron structure is also an important parameter to consider as the lanthanide contraction affects the ionic potential. A direct consequence is that, during the formation of coordination bonds, the REE behaviour gradually changes along the series. Furthermore, the lanthanide contraction causes the ionic radius of Ho3+ (0.901 Å) to be almost identical to that of Y3+ (0.9 Å), justifying the inclusion of the latter among the REE.
Applications
The application of rare-earth elements to geology is important to understanding the petrological processes of igneous, sedimentary and metamorphic rock formation. In geochemistry, rare-earth elements can be used to infer the petrological mechanisms that have affected a rock due to the subtle atomic size differences between the elements, which causes preferential fractionation of some rare earths relative to others depending on the processes at work.
The geochemical study of the REE is not carried out on absolute concentrations – as is usually done with other chemical elements – but on normalized concentrations, in order to observe their serial behaviour. In geochemistry, rare-earth elements are typically presented in normalized "spider" diagrams, in which the concentrations of rare-earth elements are normalized to a reference standard and are then expressed as the logarithm to base 10 of the value.
Commonly, the rare-earth elements are normalized to chondritic meteorites, as these are believed to be the closest representation of unfractionated Solar System material. However, other normalizing standards can be applied depending on the purpose of the study. Normalization to a standard reference value, especially of a material believed to be unfractionated, allows the observed abundances to be compared to the initial abundances of the element. Normalization also removes the pronounced 'zig-zag' pattern caused by the differences in abundance between even and odd atomic numbers. Normalization is carried out by dividing the analytical concentrations of each element of the series by the concentration of the same element in a given standard, according to the equation:

[REE]n = [REE]sample / [REE]standard

where [REE]n indicates the normalized concentration, [REE]sample the analytical concentration of the element measured in the sample, and [REE]standard the concentration of the same element in the reference material.
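A minimal sketch of this normalization in code, assuming illustrative sample concentrations and approximate CI-chondrite reference values (both sets of numbers are placeholders for illustration; a published compilation should be used for real work):

```python
import math

# Approximate CI-chondrite reference values in ppm (illustrative only;
# consult a published compilation such as McDonough & Sun 1995 for real work).
CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Sm": 0.148,
                 "Eu": 0.0563, "Gd": 0.199, "Yb": 0.161, "Lu": 0.0246}

# Hypothetical analytical concentrations measured in a sample, in ppm.
sample_ppm = {"La": 30.0, "Ce": 60.0, "Nd": 27.0, "Sm": 5.5,
              "Eu": 1.1, "Gd": 4.9, "Yb": 2.0, "Lu": 0.3}

# Normalize each element and take log10, as plotted on a spider diagram.
for element, c_sample in sample_ppm.items():
    c_norm = c_sample / CHONDRITE_PPM[element]
    print(f"{element}: [REE]n = {c_norm:6.1f}   log10 = {math.log10(c_norm):.2f}")
```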
It is possible to observe the serial trend of the REE by reporting their normalized concentrations against the atomic number. The trends that are observed in "spider" diagrams are typically referred to as "patterns", which may be diagnostic of petrological processes that have affected the material of interest.
According to the general shape of the patterns, or thanks to the presence (or absence) of so-called "anomalies", information regarding the system under examination and the occurring geochemical processes can be obtained. The anomalies represent enrichment (positive anomalies) or depletion (negative anomalies) of specific elements along the series and are graphically recognizable as positive or negative "peaks" along the REE patterns. The anomalies can be numerically quantified as the ratio between the normalized concentration of the element showing the anomaly and the one predicted from the average of the normalized concentrations of the two elements in the previous and next positions in the series, according to the equation:

[X]anomaly = [X]n / (([X-1]n + [X+1]n) / 2)

where [X]n is the normalized concentration of the element whose anomaly is to be calculated, and [X-1]n and [X+1]n are the normalized concentrations of the elements respectively preceding and following it in the series.
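For instance, the europium anomaly is commonly computed from the normalized concentrations of its neighbours samarium and gadolinium. Below is a minimal sketch under the arithmetic-mean definition given above, with placeholder input values (note that some authors use the geometric mean of the neighbours instead):

```python
def anomaly(x_n: float, prev_n: float, next_n: float) -> float:
    """Ratio of an element's normalized concentration to the value
    predicted from the average of its neighbours in the series."""
    return x_n / ((prev_n + next_n) / 2)

# Placeholder chondrite-normalized values for Sm, Eu, Gd (not measured data):
sm_n, eu_n, gd_n = 37.2, 19.5, 24.6
print(f"Eu/Eu* = {anomaly(eu_n, sm_n, gd_n):.2f}")  # < 1: negative Eu anomaly
```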
The rare-earth element patterns observed in igneous rocks are primarily a function of the chemistry of the source from which the rock came, as well as the fractionation history the rock has undergone. Fractionation is in turn a function of the partition coefficients of each element. Partition coefficients govern the fractionation of trace elements (including rare-earth elements) between the liquid phase (the melt/magma) and the solid phase (the mineral). If an element preferentially remains in the solid phase it is termed 'compatible', and if it preferentially partitions into the melt phase it is described as 'incompatible'. Each element has a different partition coefficient, and therefore fractionates into solid and liquid phases distinctly. These concepts are also applicable to metamorphic and sedimentary petrology.
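In symbols (a standard definition, stated here for concreteness rather than taken from this article's sources), the partition coefficient of a trace element i between a mineral and the coexisting melt is

Di = Ci(mineral) / Ci(melt)

so that Di > 1 marks a compatible element, which preferentially stays in the solid, and Di < 1 an incompatible one, which preferentially enters the melt.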
In igneous rocks, particularly in felsic melts, the following observations apply: anomalies in europium are dominated by the crystallization of feldspars. Hornblende controls the enrichment of MREE compared to LREE and HREE. Depletion of LREE relative to HREE may be due to the crystallization of olivine, orthopyroxene, and clinopyroxene. On the other hand, depletion of HREE relative to LREE may be due to the presence of garnet, as garnet preferentially incorporates HREE into its crystal structure. The presence of zircon may also cause a similar effect.
In sedimentary rocks, rare-earth elements in clastic sediments are a representation of provenance. The rare-earth element concentrations are not typically affected by sea and river waters, as rare-earth elements are insoluble and thus have very low concentrations in these fluids. As a result, when sediment is transported, rare-earth element concentrations are unaffected by the fluid and instead the rock retains the rare-earth element concentration from its source.
Sea and river waters typically have low rare-earth element concentrations. However, aqueous geochemistry is still very important. In oceans, rare-earth elements reflect input from rivers, hydrothermal vents, and aeolian sources; this is important in the investigation of ocean mixing and circulation.
Rare-earth elements are also useful for dating rocks, as some radioactive isotopes display long half-lives. Of particular interest are the La-Ce, Sm-Nd, and Lu-Hf systems.
Production
Until 1948, most of the world's rare earths were sourced from placer sand deposits in India and Brazil. Through the 1950s, South Africa was the world's rare earth source, from a monazite-rich reef at the Steenkampskraal mine in Western Cape province. Through the 1960s until the 1980s, the Mountain Pass rare earth mine in California made the United States the leading producer. Today, the Indian and South African deposits still produce some rare-earth concentrates, but they are dwarfed by the scale of Chinese production. In 2017, China produced 81% of the world's rare-earth supply, mostly in Inner Mongolia, although it had only 36.7% of reserves. Australia was the second and only other major producer with 15% of world production. All of the world's heavy rare earths (such as dysprosium) come from Chinese rare-earth sources such as the polymetallic Bayan Obo deposit. The Browns Range mine, located 160 km south east of Halls Creek in northern Western Australia, was under development in 2018 and is positioned to become the first significant dysprosium producer outside of China.
Demand for REEs is increasing because they are essential to new and innovative technologies. Products that require REEs include high-technology equipment such as smartphones, digital cameras, computer parts, and semiconductors. These elements are also prevalent in the renewable energy, military equipment, glassmaking, and metallurgy industries.
Increased demand has strained supply, and there is growing concern that the world may soon face a shortage of the rare earths. Within several years from 2009, worldwide demand for rare-earth elements was expected to exceed supply by 40,000 metric tons annually unless major new sources were developed. In 2013, it was stated that demand for REEs would increase due to the EU's dependence on these elements, the fact that rare-earth elements cannot be substituted by other elements, and the low recycling rate of REEs. Furthermore, due to increased demand and low supply, future prices are expected to increase, and there is a chance that countries other than China will open REE mines. In addition, there are over a hundred ongoing mining projects, with many options outside of China.
China
These concerns have intensified due to the actions of China, the predominant supplier. Specifically, China has announced regulations on exports and a crackdown on smuggling. On September 1, 2009, China announced plans to reduce its export quota to 35,000 tons per year in 2010–2015 to conserve scarce resources and protect the environment. On October 19, 2010, China Daily, citing an unnamed Ministry of Commerce official, reported that China would "further reduce quotas for rare-earth exports by 30 percent at most next year to protect the precious metals from over-exploitation." The government in Beijing further increased its control by forcing smaller, independent miners to merge into state-owned corporations or face closure. At the end of 2010, China announced that the first round of export quotas in 2011 for rare earths would be 14,446 tons, a 35% decrease from the previous first round of quotas in 2010. China announced further export quotas on 14 July 2011 for the second half of the year, with total allocation at 30,184 tons and total production capped at 93,800 metric tons. In September 2011, China announced a halt in production at three of its eight major rare-earth mines, responsible for almost 40% of China's total rare-earth production. In March 2012, the US, the EU, and Japan confronted China at the WTO about these export and production restrictions. China responded with claims that the restrictions had environmental protection in mind. In August 2012, China announced a further 20% reduction in production.
The United States, Japan, and the European Union filed a joint lawsuit with the World Trade Organization in 2012 against China, arguing that China should not be able to deny such important exports.
In response to the opening of new mines in other countries (Lynas in Australia and Molycorp in the United States), prices of rare earths dropped.
The price of dysprosium oxide was US$994/kg in 2011, but dropped to US$265/kg by 2014.
On August 29, 2014, the WTO ruled that China had broken free-trade agreements, and the WTO said in the summary of key findings that "the overall effect of the foreign and domestic restrictions is to encourage domestic extraction and secure preferential use of those materials by Chinese manufacturers." China declared that it would implement the ruling on September 26, 2014, but would need some time to do so. By January 5, 2015, China had lifted all quotas from the export of rare earths, but export licenses will still be required.
In 2019, China supplied between 85% and 95% of the global demand for the 17 rare-earth powders, half of them sourced from Myanmar. The 2021 military coup in that country raised the possibility that future supplies of critical ores would be constrained. Additionally, it was speculated that the PRC could again reduce rare-earth exports to counteract economic sanctions imposed by the US and EU countries. Rare-earth metals are crucial materials for electric vehicle manufacturing and high-tech military applications.
Myanmar (Burma)
Kachin State in Myanmar is the world's largest source of rare earths. In December 2021 alone, China's imports of rare earths from Myanmar exceeded 20,000 metric tons. Rare earths were discovered near Pang War in Chipwi Township along the China–Myanmar border in the late 2010s. As China has shut down domestic mines due to their detrimental environmental impact, it has largely outsourced rare-earth mining to Kachin State. Chinese companies and miners have set up operations in Kachin State illegally, without government permits, circumventing the central government by working with a Border Guard Force militia under the Tatmadaw, formerly known as the New Democratic Army – Kachin, which has profited from this extractive industry. Some 2,700 mining collection pools scattered across 300 separate locations have been found in Kachin State, together encompassing an area the size of Singapore, an exponential increase from 2016. Land has also been seized from locals to conduct mining operations.
Other countries
As a result of the increased demand and tightening restrictions on exports of the metals from China, some countries are stockpiling rare-earth resources. Searches for alternative sources in Australia, Brazil, Canada, South Africa, Tanzania, Greenland, and the United States are ongoing. Mines in these countries were closed when China undercut world prices in the 1990s, and it will take a few years to restart production as there are many barriers to entry. Significant sites under development outside China include Steenkampskraal in South Africa, the world's highest-grade rare-earth and thorium mine, which closed in 1963 but has been gearing up to go back into production; over 80% of its infrastructure is already complete. Other mines include the Nolans Project in Central Australia, the Bokan Mountain project in Alaska, the remote Hoidas Lake project in northern Canada, and the Mount Weld project in Australia. The Hoidas Lake project has the potential to supply about 10% of the $1 billion of REE consumption that occurs in North America every year. Vietnam signed an agreement in October 2010 to supply Japan with rare earths from its northwestern Lai Châu Province; however, the deal was never realized due to disagreements.
The largest rare-earth deposit in the U.S. is at Mountain Pass, California, sixty miles south of Las Vegas. Originally opened by Molycorp, the deposit has been mined, off and on, since 1951. A second large deposit of REEs at Elk Creek in southeast Nebraska is under consideration by NioCorp Development Ltd, which hopes to open a niobium, scandium, and titanium mine there. That mine may be able to produce as much as 7,200 metric tons of ferroniobium and 95 metric tons of scandium trioxide annually, although, as of 2022, financing is still in the works.
In the UK, Pensana has begun construction of its US$195 million rare-earth processing plant, which secured funding from the UK government's Automotive Transformation Fund. The plant will process ore from the Longonjo mine in Angola and other sources as they become available. The company is targeting production in late 2023, before ramping up to full capacity in 2024. Pensana aims to produce 12,500 metric tons of separated rare earths, including 4,500 metric tons of magnet metal rare earths.
Also under consideration for mining are sites such as Thor Lake in the Northwest Territories, and various locations in Vietnam. Additionally, in 2010, a large deposit of rare-earth minerals was discovered in Kvanefjeld in southern Greenland. Pre-feasibility drilling at this site has confirmed significant quantities of black lujavrite, which contains about 1% rare-earth oxides (REO). The European Union has urged Greenland to restrict Chinese development of rare-earth projects there, but as of early 2013, the government of Greenland has said that it has no plans to impose such restrictions. Many Danish politicians have expressed concerns that other nations, including China, could gain influence in thinly populated Greenland, given the number of foreign workers and investment that could come from Chinese companies in the near future because of the law passed in December 2012.
In central Spain, in Ciudad Real Province, the proposed rare-earth mining project 'Matamulas' may provide, according to its developers, up to 2,100 metric tons per year (33% of annual EU demand). However, this project has been suspended by regional authorities due to social and environmental concerns.
Adding to potential mine sites, ASX-listed Peak Resources announced in February 2012 that its Tanzania-based Ngualla project contained not only the sixth-largest deposit by tonnage outside of China, but also the highest grade of rare-earth elements of the six.
North Korea has been reported to have exported rare-earth ore to China, about US$1.88 million worth during May and June 2014.
In May 2012, researchers from two universities in Japan announced that they had discovered rare earths in Ehime Prefecture, Japan.
On 12 January 2023, Swedish state-owned mining company LKAB announced that it had discovered a deposit of over 1 million metric tons of rare earths in the country's Kiruna area, which would make it the largest such deposit in Europe.
China processes about 90% of the world's REEs and 60% of the world's lithium. As a result, the European Union imports practically all of its rare-earth elements from China. The EU Critical Raw Materials Act of 2023 has set in motion the policy adjustments required for Europe to start producing two-thirds of the lithium-ion batteries needed for electric vehicles and energy storage. In 2024, an EU-backed lithium mining project triggered large-scale protests in Serbia.
In 2024, American Rare Earths Inc. disclosed that its reserves near Wheatland, Wyoming, totaled 2.34 billion metric tons, possibly the world's largest, and larger than a separate 1.2 million metric ton deposit in northeastern Wyoming.
In June 2024, Rare Earths Norway found a rare-earth oxide deposit of 8.8 million metric tons in Telemark, Norway, making it Europe's largest known rare-earth element deposit. The mining firm predicted that it would finish developing the first stage of mining in 2030.
Malaysian refining plans
In early 2011, Australian mining company Lynas was reported to be "hurrying to finish" a US$230 million rare-earth refinery on the eastern coast of Peninsular Malaysia's industrial port of Kuantan. The plant would refine ore (lanthanide concentrate) from the Mount Weld mine in Australia. The ore would be trucked to Fremantle and transported by container ship to Kuantan. Within two years, Lynas was said to expect the refinery to be able to meet nearly a third of the world's demand for rare-earth materials, not counting China. The Kuantan development brought renewed attention to the Malaysian town of Bukit Merah in Perak, where a rare-earth mine operated by a Mitsubishi Chemical subsidiary, Asian Rare Earth, closed in 1994 and left continuing environmental and health concerns. In mid-2011, after protests, Malaysian government restrictions on the Lynas plant were announced. At that time, citing subscription-only Dow Jones Newswire reports, a Barrons report said the Lynas investment was $730 million, and put the projected share of the global market it would fill at "about a sixth". An independent review initiated by the Malaysian Government, and conducted by the International Atomic Energy Agency (IAEA) in 2011 to address concerns of radioactive hazards, found no non-compliance with international radiation safety standards.
However, the Malaysian authorities confirmed that as of October 2011, Lynas was not given any permit to import any rare-earth ore into Malaysia. On February 2, 2012, the Malaysian AELB (Atomic Energy Licensing Board) recommended that Lynas be issued a temporary operating license subject to meeting a number of conditions. On 2 September 2014, Lynas was issued a 2-year full operating stage license by the AELB.
On 17 November 2024, economy minister Rafizi Ramli said he hoped Malaysia would be able to produce rare-earth elements within three years, through discussions with China on providing the technology. In the past, plans to mine rare-earth elements in Kedah raised concerns about destroying forest reserves and harming water catchment areas.
Other sources
Mine tailings
Significant quantities of rare-earth oxides are found in tailings accumulated from 50 years of uranium ore, shale, and loparite mining at Sillamäe, Estonia. Due to the rising prices of rare earths, extraction of these oxides has become economically viable. The country currently exports around 3,000 metric tons per year, representing around 2% of world production. Similar resources are suspected in the western United States, where gold rush-era mines are believed to have discarded large amounts of rare earths, because they had no value at the time.
Ocean mining
In January 2013 a Japanese deep-sea research vessel obtained seven deep-sea mud core samples from the Pacific Ocean seafloor at 5,600 to 5,800 meters depth, approximately south of the island of Minami-Tori-Shima. The research team found a mud layer 2 to 4 meters beneath the seabed with concentrations of up to 0.66% rare-earth oxides. A potential deposit might compare in grade with the ion-absorption-type deposits in southern China that provide the bulk of Chinese REO mine production, which grade in the range of 0.05% to 0.5% REO.
Waste and recycling
Another recently developed source of rare earths is electronic waste and other wastes that have significant rare-earth components. Advances in recycling technology have made the extraction of rare earths from these materials less expensive. Recycling plants operate in Japan, where an estimated 300,000 tons of rare earths are found in unused electronics. In France, the Rhodia group is setting up two factories, in La Rochelle and Saint-Fons, that will produce 200 tons of rare earths a year from used fluorescent lamps, magnets, and batteries. Coal and coal by-products, such as ash and sludge, are a potential source of critical elements including rare-earth elements (REE) with estimated amounts in the range of 50 million metric tons.
Methods
One study mixed fly ash with carbon black and then sent a 1-second current pulse through the mixture, heating it to . The fly ash contains microscopic bits of glass that encapsulate the metals. The heat shatters the glass, exposing the rare earths. Flash heating also converts phosphates into oxides, which are more soluble and extractable. Using hydrochloric acid at concentrations less than 1% of conventional methods, the process extracted twice as much material.
Properties
According to chemistry professor Andrea Sella, rare-earth elements differ from other elements in that, when looked at analytically, they are virtually inseparable, having almost the same chemical properties. In terms of their electronic and magnetic properties, however, each one occupies a unique technological niche that nothing else can fill. For example, "the rare-earth elements praseodymium (Pr) and neodymium (Nd) can both be embedded inside glass and they completely cut out the glare from the flame when one is doing glass-blowing."
Uses
The uses, applications, and demand for rare-earth elements have expanded over the years. Globally, most REEs are used for catalysts and magnets. In the US, more than half of REEs are used for catalysts; ceramics, glass, and polishing are also main uses.
Other important uses of rare-earth elements are applicable to the production of high-performance magnets, alloys, glasses, and electronics. Ce and La are important as catalysts, and are used for petroleum refining and as diesel additives. Nd is important in magnet production in traditional and low-carbon technologies. Rare-earth elements in this category are used in the electric motors of hybrid and electric vehicles, generators in some wind turbines, hard disc drives, portable electronics, microphones, and speakers.
Ce, La, and Nd are important in alloy making, and in the production of fuel cells and nickel-metal hydride batteries. Ce, Ga, and Nd are important in electronics and are used in the production of LCD and plasma screens, fiber optics, and lasers, and in medical imaging. Additional uses for rare-earth elements are as tracers in medical applications, fertilizers, and in water treatment.
REEs have been used in agriculture to increase plant growth, productivity, and stress resistance, seemingly without negative effects for human and animal consumption. REEs are applied through REE-enriched fertilizers, a widely used practice in China. In addition, REEs are used as feed additives for livestock, which has resulted in increased production, such as larger animals and higher output of eggs and dairy products. However, this practice has resulted in REE bioaccumulation within livestock and has impacted vegetation and algae growth in these agricultural areas. Additionally, while no ill effects have been observed at current low concentrations, the effects of long-term exposure and accumulation over time are unknown, prompting some calls for more research into their possible effects.
Environmental considerations
REEs are naturally found in very low concentrations in the environment. Mines are often in countries where environmental and social standards are very low, leading to human rights violations, deforestation, and contamination of land and water. Generally, it is estimated that extracting 1 metric ton of rare-earth elements creates around 2,000 metric tons of waste, partly toxic, including 1 ton of radioactive waste. The largest REE mining site, Bayan Obo in China, has produced more than 70,000 tons of radioactive waste that has contaminated groundwater.
Near mining and industrial sites, the concentrations of REEs can rise to many times the normal background levels. Once in the environment, REEs can leach into the soil, where their transport is determined by numerous factors such as erosion, weathering, pH, precipitation, and groundwater. Acting much like metals, they can speciate depending on the soil conditions, being either mobile or adsorbed to soil particles. Depending on their bioavailability, REEs can be absorbed into plants and later consumed by humans and animals. The mining of REEs, the use of REE-enriched fertilizers, and the production of phosphorus fertilizers all contribute to REE contamination. Furthermore, strong acids are used during the extraction of REEs; these can leach out into the environment, be transported through water bodies, and result in the acidification of aquatic environments. Another REE-related contaminant is cerium oxide (CeO2), which is produced during the combustion of diesel and released as exhaust, contributing heavily to soil and water contamination.
Mining, refining, and recycling of rare earths have serious environmental consequences if not properly managed. Low-level radioactive tailings resulting from the occurrence of thorium and uranium in rare-earth ores present a potential hazard, and improper handling of these substances can result in extensive environmental damage. In May 2010, China announced a major, five-month crackdown on illegal mining in order to protect the environment and its resources. This campaign is expected to be concentrated in the South, where mines – commonly small, rural, and illegal operations – are particularly prone to releasing toxic waste into the general water supply. However, even the major operation in Baotou, in Inner Mongolia, where much of the world's rare-earth supply is refined, has caused major environmental damage. China's Ministry of Industry and Information Technology estimated cleanup costs in Jiangxi province at $5.5 billion.
It is, however, possible to filter out and recover rare-earth elements that flow out with the wastewater from mining facilities, although such filtering and recovery equipment is not always installed on the outlets carrying the wastewater.
Recycling and reusing REEs
REEs are amongst the most critical elements to modern technologies and society. Despite this, typically only around 1% of REEs are recycled from end-products. Recycling and reusing REEs is not easy: these elements are mostly present in tiny amounts in small electronic parts and they are difficult to separate chemically. For example, recovery of neodymium requires manual disassembly of hard disk drives because shredding the drives only recovers 10% of the REE.
REE recycling and reuse have been increasingly focused on in recent years. The main concerns include environmental pollution during REE recycling and increasing recycling efficiency. Literature published in 2004 suggests that, along with previously established pollution mitigation, a more circular supply chain would help mitigate some of the pollution at the extraction point. This means recycling and reusing REEs that are already in use or reaching the end of their life cycle. A study published in 2014 suggests a method to recycle REEs from waste nickel-metal hydride batteries, demonstrating a recovery rate of 95.16%. Rare-earth elements could also be recovered from industrial wastes with practical potential to reduce environmental and health impacts from mining, waste generation, and imports if known and experimental processes are scaled up. A study suggests that "fulfillment of the circular economy approach could reduce up to 200 times the impact in the climate change category and up to 70 times the cost due to the REE mining." In most of the reported studies reviewed by a scientific review, "secondary waste is subjected to chemical and or bioleaching followed by solvent extraction processes for clean separation of REEs."
Two kinds of resources are currently considered essential for the secure supply of REEs. One is extraction from primary resources such as mines harboring REE-bearing ores, regolith-hosted clay deposits, ocean-bed sediments, and coal fly ash; one study developed a green system for recovering REEs from coal fly ash using citrate and oxalate, strong organic ligands capable of complexing or precipitating with REEs. The other is secondary resources such as electronic waste, industrial waste, and municipal waste. E-waste contains a significant concentration of REEs and is thus the primary option for REE recycling at present. According to one study, approximately 50 million metric tons of electronic waste are dumped in landfills worldwide each year. Despite the fact that e-waste contains a significant amount of rare-earth elements (REE), only 12.5% of e-waste is currently being recycled for metals of any kind.
Impact of REE contamination
On vegetation
The mining of REEs has caused the contamination of soil and water around production areas, which has impacted vegetation in these areas by decreasing chlorophyll production, which affects photosynthesis and inhibits the growth of the plants. However, the impact of REE contamination on vegetation is dependent on the plants present in the contaminated environment: not all plants retain and absorb REEs. Also, the ability of the vegetation to intake the REE is dependent on the type of REE present in the soil, hence there are a multitude of factors that influence this process. Agricultural plants are the main type of vegetation affected by REE contamination in the environment, the two plants with a higher chance of absorbing and storing REEs being apples and beets. Furthermore, there is a possibility that REEs can leach out into aquatic environments and be absorbed by aquatic vegetation, which can then bio-accumulate and potentially enter the human food chain if livestock or humans choose to eat the vegetation. An example of this situation was the case of the water hyacinth (Eichhornia crassipes) in China, where the water was contaminated due to a REE-enriched fertilizer being used in a nearby agricultural area. The aquatic environment became contaminated with cerium and resulted in the water hyacinth becoming three times more concentrated in cerium than its surrounding water.
On human health
The chemical properties of the REEs are so similar that they are expected to show similar toxicity in humans.
Mortality studies show REEs are not highly toxic. Long-term (18 months) inhalation of dust containing high levels (60%) of REEs has been shown to cause pneumoconiosis, but the mechanism is unknown.
While REEs are not major pollutants, the increased application of REEs in new technologies has increased the need to understand their safe levels of exposure for humans. One side effect of mining REEs can be exposure to harmful radioactive thorium, as has been demonstrated at the large mine in Baotou (Inner Mongolia, China).
The rare-earth mining and smelting process can release airborne fluoride, which associates with total suspended particles (TSP) to form aerosols that can enter human respiratory systems. Research from Baotou, China, shows that the fluoride concentration in the air near REE mines is higher than the WHO limit value, but the health effects of this exposure are unknown.
Analyses of people living near mines in China found many times the levels of REEs in their blood, urine, bone, and hair compared to controls far from mining sites, suggesting possible bioaccumulation of REEs. These higher levels were related to the high levels of REEs present in the vegetables the residents cultivated, in the soil, and in the water from their wells, indicating that the high levels were caused by the nearby mine. However, the levels found were not high enough to cause health effects.
Analysis of REEs in street dust in China suggests "no augmented health hazard".
Similarly, analysis of cereal crops in mining areas in China found levels too low for health risks.
On animal health
Experiments exposing rats to various cerium compounds have found accumulation primarily in the lungs and liver, resulting in various negative health outcomes associated with those organs. REEs have been added to livestock feed to increase body mass and milk production. They are most commonly used to increase the body mass of pigs, and it was discovered that REEs increase the digestibility and nutrient use of pigs' digestive systems. Studies point to a dose-response relationship when considering toxicity versus positive effects: while small doses from the environment or with proper administration seem to have no ill effects, larger doses have been shown to have negative effects, specifically in the organs where the REEs accumulate. The process of mining REEs in China has resulted in soil and water contamination in certain areas, which, when transported into aquatic bodies, could potentially bioaccumulate within aquatic biota. Furthermore, in some cases, animals that live in REE-contaminated areas have been diagnosed with organ or system problems. REEs have been used in freshwater fish farming because they protect fish from possible diseases. One main reason they have been avidly used in livestock feeding is that they have had better results than inorganic feed enhancers.
Remediation after pollution
After the 1982 Bukit Merah radioactive pollution, the mine in Malaysia became the focus of a US$100 million cleanup that was still proceeding in 2011. After the hilltop entombment of 11,000 truckloads of radioactively contaminated material was accomplished, the project was expected to entail, in the summer of 2011, the removal of "more than 80,000 steel barrels of radioactive waste to the hilltop repository."
In May 2011, after the Fukushima nuclear disaster, widespread protests took place in Kuantan over the Lynas refinery and radioactive waste from it. The ore to be processed has very low levels of thorium, and Lynas founder and chief executive Nicholas Curtis said "There is absolutely no risk to public health." T. Jayabalan, a doctor who says he has been monitoring and treating patients affected by the Mitsubishi plant, "is wary of Lynas's assurances. The argument that low levels of thorium in the ore make it safer doesn't make sense, he says, because radiation exposure is cumulative." Construction of the facility has been halted until an independent United Nations IAEA panel investigation is completed, which is expected by the end of June 2011. New restrictions were announced by the Malaysian government in late June.
The IAEA panel investigation was completed, and construction was not halted. Lynas remained on budget and on schedule to start producing in 2011. In a report issued in June 2011, the IAEA concluded that it did not find any instance of "any non-compliance with international radiation safety standards" in the project.
If proper safety standards are followed, REE mining is relatively low-impact. Molycorp (before going bankrupt) often went beyond the requirements of environmental regulations to improve its public image.
In Greenland, there is a significant dispute on whether to start a new rare-earth mine in Kvanefjeld due to environmental concerns.
Geopolitical considerations
China has officially cited resource depletion and environmental concerns as the reasons for a nationwide crackdown on its rare-earth mineral production sector. However, non-environmental motives have also been imputed to China's rare-earth policy. According to The Economist, "Slashing their exports of rare-earth metals ... is all about moving Chinese manufacturers up the supply chain, so they can sell valuable finished goods to the world rather than lowly raw materials." Furthermore, China currently has an effective monopoly on the world's REE value chain (all of the refineries and processing plants that transform the raw ore into valuable elements). In the words of Deng Xiaoping, the Chinese leader from the late 1970s to the late 1980s, "The Middle East has oil; we have rare earths ... it is of extremely important strategic significance; we must be sure to handle the rare earth issue properly and make the fullest use of our country's advantage in rare-earth resources."
One possible example of market control is the division of General Motors that deals with miniaturized magnet research, which shut down its US office and moved its entire staff to China in 2006 (China's export quota applies only to the metal, not to products made from these metals, such as magnets).
It was reported, but officially denied, that China instituted an export ban on shipments of rare-earth oxides (but not alloys) to Japan on 22 September 2010, in response to the detainment of a Chinese fishing boat captain by the Japanese Coast Guard. On September 2, 2010, a few days before the fishing boat incident, The Economist reported that "China ... in July announced the latest in a series of annual export reductions, this time by 40% to precisely 30,258 tonnes."
The United States Department of Energy in its 2010 Critical Materials Strategy report identified dysprosium as the element that was most critical in terms of import reliance.
A 2011 report "China's Rare-Earth Industry", issued by the US Geological Survey and US Department of the Interior, outlines industry trends within China and examines national policies that may guide the future of the country's production. The report notes that China's lead in the production of rare-earth minerals has accelerated over the past two decades. In 1990, China accounted for only 27% of such minerals. In 2009, world production was 132,000 metric tons; China produced 129,000 of those tons. According to the report, recent patterns suggest that China will slow the export of such materials to the world: "Owing to the increase in domestic demand, the Government has gradually reduced the export quota during the past several years." In 2006, China allowed 47 domestic rare-earth producers and traders and 12 Sino-foreign rare-earth producers to export. Controls have since tightened annually; by 2011, only 22 domestic rare-earth producers and traders and 9 Sino-foreign rare-earth producers were authorized. The government's future policies will likely keep in place strict controls: "According to China's draft rare-earth development plan, annual rare-earth production may be limited to between 130,000 and 140,000 [metric tons] during the period from 2009 to 2015. The export quota for rare-earth products may be about 35,000 [metric tons] and the Government may allow 20 domestic rare-earth producers and traders to export rare earths."
The United States Geological Survey is actively surveying southern Afghanistan for rare-earth deposits under the protection of United States military forces. Since 2009 the USGS has conducted remote sensing surveys as well as fieldwork to verify Soviet claims that volcanic rocks containing rare-earth metals exist in Helmand Province near the village of Khanashin. The USGS study team has located a sizable area of rocks in the center of an extinct volcano containing light rare-earth elements including cerium and neodymium. It has mapped 1.3 million metric tons of desirable rock, or about ten years of supply at current demand levels. The Pentagon has estimated its value at about $7.4 billion.
It has been argued that the geopolitical importance of rare earths has been exaggerated in the literature on the geopolitics of renewable energy, underestimating the power of economic incentives for expanded production. This especially concerns neodymium. Due to its role in permanent magnets used for wind turbines, it has been argued that neodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. But this perspective has been criticized for failing to recognize that most wind turbines have gears and do not use permanent magnets.
In popular culture
The plot of Eric Ambler's now-classic 1967 international crime-thriller Dirty Story (aka This Gun for Hire, but not to be confused with the movie This Gun for Hire (1942)) features a struggle between two rival mining cartels to control a plot of land in a fictional African country, which contains rich minable rare-earth ore deposits.
| Physical sciences | Lanthanides | Chemistry |
145478 | https://en.wikipedia.org/wiki/TIFF | TIFF | Tag Image File Format or Tagged Image File Format, commonly known by the abbreviations TIFF or TIF, is an image file format for storing raster graphics images, popular among graphic artists, the publishing industry, and photographers. TIFF is widely supported by scanning, faxing, word processing, optical character recognition, image manipulation, desktop publishing, and page-layout applications. The format was created by the Aldus Corporation for use in desktop publishing. It published the latest version 6.0 in 1992, subsequently updated with an Adobe Systems copyright after the latter acquired Aldus in 1994. Several Aldus or Adobe technical notes have been published with minor extensions to the format, and several specifications have been based on TIFF 6.0, including TIFF/EP (ISO 12234-2), TIFF/IT (ISO 12639), TIFF-F (RFC 2306) and TIFF-FX (RFC 3949).
History
TIFF was created as an attempt to get desktop scanner vendors of the mid-1980s to agree on a common scanned image file format, in place of a multitude of proprietary formats. In the beginning, TIFF was only a binary image format (only two possible values for each pixel), because that was all that desktop scanners could handle. As scanners became more powerful, and as desktop computer disk space became more plentiful, TIFF grew to accommodate grayscale images, then color images. Today, TIFF, along with JPEG and PNG, is a popular format for deep-color images.
The first version of the TIFF specification was published by the Aldus Corporation in the autumn of 1986 after two major earlier draft releases. It can be labeled as Revision 3.0. It was published after a series of meetings with various scanner manufacturers and software developers. In April 1987 Revision 4.0 was released and it contained mostly minor enhancements. In October 1988 Revision 5.0 was released and it added support for palette color images and LZW compression.
TIFF is a complex format, defining many tags of which typically only a few are used in each file. This led to implementations supporting many varying subsets of the format, a situation that gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats. This problem was addressed in revision 6.0 of the TIFF specification (June 1992) by introducing a distinction between Baseline TIFF (which all implementations were required to support) and TIFF Extensions (which are optional). Additional extensions are defined in two supplements to the specification which were published in September 1995 and March 2002 respectively.
Overview
A TIFF file contains one or several images, termed subfiles in the specification. The basic use case for having multiple subfiles is to encode a multipage telefax in a single file, but it is also allowed to have different subfiles be different variants of the same image, for example scanned at different resolutions. Rather than being a continuous range of bytes in the file, each subfile is a data structure whose top-level entity is called an image file directory (IFD). Baseline TIFF readers are only required to make use of the first subfile, but each IFD has a field for linking to a next IFD.
The IFDs are where the tags for which TIFF is named are located. Each IFD contains one or several entries, each of which is identified by its tag. The tags are arbitrary 16-bit numbers; their symbolic names such as ImageWidth often used in discussions of TIFF data do not appear explicitly in the file itself. Each IFD entry has an associated value, which may be decoded based on general rules of the format, but it depends on the tag what that value then means. There may within a single IFD be no more than one entry with any particular tag. Some tags are for linking to the actual image data, other tags specify how the image data should be interpreted, and still other tags are used for image metadata.
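To make the IFD structure concrete, the following is a minimal sketch of how the 8-byte header and the entries of the first IFD of a classic (non-BigTIFF) file could be walked in Python. The function name read_first_ifd is illustrative; note also that an entry's 4-byte value field holds the value itself only when it fits, and otherwise holds an offset to where the value is stored, a distinction this sketch does not resolve.

    import struct

    def read_first_ifd(path):
        with open(path, "rb") as f:
            header = f.read(8)
            if header[:2] == b"II":      # little-endian byte order
                fmt = "<"
            elif header[:2] == b"MM":    # big-endian byte order
                fmt = ">"
            else:
                raise ValueError("not a TIFF file")
            magic, ifd_offset = struct.unpack(fmt + "HI", header[2:8])
            if magic != 42:
                raise ValueError("bad TIFF magic number")
            f.seek(ifd_offset)
            (entry_count,) = struct.unpack(fmt + "H", f.read(2))
            entries = {}
            for _ in range(entry_count):
                # each 12-byte entry: tag (2 bytes), type (2), count (4), value/offset (4)
                tag, field_type, count, value = struct.unpack(fmt + "HHII", f.read(12))
                entries[tag] = (field_type, count, value)
            (next_ifd_offset,) = struct.unpack(fmt + "I", f.read(4))  # 0 if no next IFD
        return entries, next_ifd_offset

Looking up entries[256] and entries[257] would then give the ImageWidth and ImageLength entries, if present.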
TIFF images are made up of rectangular grids of pixels. The two axes of this geometry are termed horizontal (or X, or width) and vertical (or Y, or length). Horizontal and vertical resolution need not be equal (since in a telefax they typically would not be equal). A baseline TIFF image divides the vertical range of the image into one or several strips, which are encoded (in particular: compressed) separately. Historically this served to facilitate TIFF readers (such as fax machines) with limited capacity to store uncompressed data — one strip would be decoded and then immediately printed — but the present specification motivates it by "increased editing flexibility and efficient I/O buffering". A TIFF extension provides the alternative of tiled images, in which case both the horizontal and the vertical ranges of the image are decomposed into smaller units.
An example of these things, which also serves to give a flavor of how tags are used in the TIFF encoding of images, is that a striped TIFF image would use tags 273 (StripOffsets), 278 (RowsPerStrip), and 279 (StripByteCounts). The StripOffsets point to the blocks of image data, the StripByteCounts say how long each of these blocks are (as stored in the file), and RowsPerStrip says how many rows of pixels there are in a strip; the latter is required even in the case of having just one strip, in which case it merely duplicates the value of tag 257 (ImageLength). A tiled TIFF image instead uses tags 322 (TileWidth), 323 (TileLength), 324 (TileOffsets), and 325 (TileByteCounts). The pixels within each strip or tile appear in row-major order, left to right and top to bottom.
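The strip count implied by these tags follows from a single ceiling division, as in this small sketch (the function name is illustrative):

    def strips_per_image(image_length, rows_per_strip):
        # ceiling division: the final strip may hold fewer than rows_per_strip rows
        return (image_length + rows_per_strip - 1) // rows_per_strip

    # a 300-row image stored 64 rows per strip needs 5 strips, so tags 273 and 279
    # (StripOffsets and StripByteCounts) must then each carry 5 values
    assert strips_per_image(300, 64) == 5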
The data for one pixel is made up of one or several samples; for example an RGB image would have one Red sample, one Green sample, and one Blue sample per pixel, whereas a greyscale or palette color image only has one sample per pixel. TIFF allows for both additive (e.g. RGB, RGBA) and subtractive (e.g. CMYK) color models. TIFF does not constrain the number of samples per pixel (except that there must be enough samples for the chosen color model), nor does it constrain how many bits are encoded for each sample, but baseline TIFF only requires that readers support a few combinations of color model and bit-depth of images. Support for custom sets of samples is very useful for scientific applications; 3 samples per pixel is at the low end of multispectral imaging, and hyperspectral imaging may require hundreds of samples per pixel. TIFF supports having all samples for a pixel next to each other within a single strip/tile (PlanarConfiguration = 1) but also different samples in different strips/tiles (PlanarConfiguration = 2). The default format for a sample value is as an unsigned integer, but a TIFF extension allows declaring them as alternatively being signed integers or IEEE-754 floats, as well as specifying a custom range for valid sample values.
TIFF images may be uncompressed, compressed using a lossless compression scheme, or compressed using a lossy compression scheme. The lossless LZW compression scheme has at times been regarded as the standard compression for TIFF, but this is technically a TIFF extension, and the TIFF6 specification notes the patent situation regarding LZW. Compression schemes vary significantly in at what level they process the data: LZW acts on the stream of bytes encoding a strip or tile (without regard to sample structure, bit depth, or row width), whereas the JPEG compression scheme both transforms the sample structure of pixels (switching to a different color model) and encodes pixels in 8×8 blocks rather than row by row.
Most data in TIFF files are numerical, but the format also supports declaring data as textual where appropriate for a particular tag. Tags that take textual values include Artist, Copyright, DateTime, DocumentName, InkNames, and Model.
MIME type
The MIME type image/tiff (defined in RFC 3302) without an application parameter is used for Baseline TIFF 6.0 files or to indicate that it is not necessary to identify a specific subset of TIFF or TIFF extensions. The optional "application" parameter (Example: Content-type: image/tiff; application=foo) is defined for image/tiff to identify a particular subset of TIFF and TIFF extensions for the encoded image data, if it is known. According to RFC 3302, specific TIFF subsets or TIFF extensions used in the application parameter must be published as an RFC.
MIME type image/tiff-fx (defined in RFC 3949 and RFC 3950) is based on TIFF 6.0 with TIFF Technical | Technology | File formats | null |
8507183 | https://en.wikipedia.org/wiki/Vomiting | Vomiting | Vomiting (also known as emesis, puking and throwing up) is the involuntary, forceful expulsion of the contents of one's stomach through the mouth and sometimes the nose.
Vomiting can be the result of ailments like food poisoning, gastroenteritis, pregnancy, motion sickness, or hangover; or it can be an after effect of diseases such as brain tumors, elevated intracranial pressure, or overexposure to ionizing radiation. The feeling that one is about to vomit is called nausea; it often precedes, but does not always lead to vomiting. Impairment due to alcohol or anesthesia can cause inhalation of vomit. In severe cases, where dehydration develops, intravenous fluid may be required. Antiemetics are sometimes necessary to suppress nausea and vomiting. Self-induced vomiting can be a component of an eating disorder such as bulimia nervosa, and is itself now classified as an eating disorder on its own, purging disorder.
Complications
Aspiration
Vomiting is dangerous if gastric content enters the respiratory tract. Under normal circumstances, the gag reflex and coughing prevent this from occurring; however, these protective reflexes are compromised in persons who are under the influence of certain substances (including alcohol) or even mildly anesthetized. The individual may choke and asphyxiate or develop aspiration pneumonia.
Dehydration and electrolyte imbalance
Prolonged and excessive vomiting depletes the body of water (dehydration), and may alter the electrolyte status. Gastric vomiting leads to the loss of acid (protons) and chloride directly. Combined with the resulting alkaline tide, this leads to hypochloremic metabolic alkalosis (low chloride levels together with high HCO3− and increased blood pH) and often hypokalemia (potassium depletion). The hypokalemia is an indirect result of the kidney compensating for the loss of acid. With the loss of food intake, the individual may eventually become cachectic. A less frequent occurrence results from vomiting of intestinal contents, including bile acids and HCO3−.
Mallory–Weiss tear
Repeated or profuse vomiting may cause erosions to the esophagus or small tears in the esophageal mucosa (Mallory–Weiss tear). This may become apparent if fresh red blood is mixed with vomit after several episodes.
Dentistry
Recurrent vomiting, such as observed in bulimia nervosa or more rarely anorexia nervosa, may lead to the destruction of the tooth enamel due to the acidity of the vomit. Digestive enzymes can also have a negative effect on oral health, by degrading the tissue of the gums.
Pathophysiology
Receptors on the floor of the fourth ventricle of the brain represent a chemoreceptor trigger zone, known as the area postrema, stimulation of which can lead to vomiting. The area postrema is a circumventricular organ and as such lies outside the blood–brain barrier; it can therefore be stimulated by blood-borne drugs that can stimulate vomiting or inhibit it.
There are various sources of input to the vomiting center:
The chemoreceptor trigger zone at the base of the fourth ventricle has numerous dopamine D2 receptors, serotonin 5-HT3 receptors, opioid receptors, acetylcholine receptors, and receptors for substance P. Stimulation of different receptors is involved in different pathways leading to emesis; in the final common pathway, substance P appears to be involved.
The vestibular system, which sends information to the brain via cranial nerve VIII (vestibulocochlear nerve), plays a major role in motion sickness, and is rich in muscarinic receptors and histamine H1 receptors.
The cranial nerve X (vagus nerve) is activated when the pharynx is irritated, leading to a gag reflex.
The vagal and enteric nervous system inputs transmit information regarding the state of the gastrointestinal system. Irritation of the GI mucosa by chemotherapy, radiation, distention, or acute infectious gastroenteritis activates the 5-HT3 receptors of these inputs.
The CNS mediates vomiting that arises from psychiatric disorders and stress from higher brain centers.
The medulla plays an important role for triggering the vomiting act.
The vomiting act encompasses three types of outputs initiated by the chemoreceptor trigger zone: Motor, parasympathetic nervous system (PNS), and sympathetic nervous system (SNS). They are as follows:
Increased salivation to protect tooth enamel from stomach acids. (Excessive vomiting leads to dental erosion.) This is part of the PNS output.
The body takes a deep breath to avoid aspirating vomit.
Retroperistalsis starts from the middle of the small intestine and sweeps up digestive tract contents into the stomach, through the relaxed pyloric sphincter.
Lowered intrathoracic pressure (produced by inspiration against a closed glottis), coupled with an increase in abdominal pressure as the abdominal muscles contract, propels stomach contents into the esophagus as the lower esophageal sphincter relaxes. The stomach itself does not contract in the process of vomiting, except at the angular notch, nor is there any retroperistalsis in the esophagus.
Vomiting is ordinarily preceded by retching.
Vomiting also initiates an SNS response causing both sweating and increased heart rate.
Phases
The vomiting act has two phases. In the retching phase, the abdominal muscles undergo a few rounds of coordinated contractions together with the diaphragm and the muscles used in respiratory inspiration. For this reason, an individual may confuse this phase with an episode of violent hiccups. In this retching phase, nothing has yet been expelled. In the next phase, also termed the expulsive phase, intense pressure is formed in the stomach brought about by enormous shifts in both the diaphragm and the abdomen. These shifts are, in essence, vigorous contractions of these muscles that last for extended periods of time—much longer than a normal period of muscular contraction. The pressure is then suddenly released when the upper esophageal sphincter relaxes resulting in the expulsion of gastric contents. As the mouth and nasal cavity are connected via the back of the throat, particularly forceful vomiting, or producing large quantities of vomit may result in material being ejected through the nostrils in addition to the mouth. Individuals who do not regularly exercise their abdominal muscles may experience pain in those muscles for a few days. The decrease in pressure and the release of endorphins into the bloodstream after the expulsion causes the vomiter to feel relief almost immediately after vomiting.
Contents
Gastric secretions and likewise vomit are highly acidic. Recent food intake appears in the gastric vomit. Irrespective of the content, vomit tends to be malodorous.
The content of the vomitus (vomit) may be of medical interest. Fresh blood in the vomit is termed hematemesis ("blood vomiting"). Altered blood bears resemblance to coffee grounds (as the iron in the blood is oxidized) and, when this matter is identified, the term coffee-ground vomiting is used. Bile can enter the vomit during subsequent heaves due to duodenal contraction if the vomiting is severe. Fecal vomiting is often a consequence of intestinal obstruction or a gastrocolic fistula and is treated as a warning sign of this potentially serious problem (signum mali ominis).
If the vomiting reflex continues for an extended period with no appreciable vomitus, the condition is known as non-productive emesis or "dry heaves", which can be painful and debilitating.
Color of vomit
Bright red in the vomit suggests bleeding from the esophagus
Dark red vomit with liver-like clots suggests profuse bleeding in the stomach, such as from a perforated ulcer
Coffee-ground-like vomit suggests less severe bleeding in the stomach because the gastric acid has had time to change the composition of the blood
Yellow or green vomit suggests bile, indicating that the pyloric valve is open and bile is flowing into the stomach from the duodenum. This may occur during successive episodes of vomiting after the stomach contents have been completely expelled.
Causes
Vomiting may be due to a large number of causes, and protracted vomiting has a long differential diagnosis.
Digestive tract
Causes in the digestive tract
Gastritis (inflammation of the gastric wall)
Gastroenteritis
Gastroesophageal reflux disease
Celiac disease
Non-celiac gluten sensitivity
Pyloric stenosis (in babies, this typically causes a very forceful "projectile vomiting" and is an indication for urgent surgery)
Bowel obstruction
Overeating (stomach too full)
Acute abdomen and/or peritonitis
Ileus
Food allergies (often in conjunction with hives or swelling)
Cholecystitis, pancreatitis, appendicitis, hepatitis
Food poisoning
In children, it can be caused by an allergic reaction to cow's milk proteins (milk allergy or lactose intolerance)
Sensory system and brain
Causes in the sensory system:
Movement leading to motion sickness (which is caused by overstimulation of the labyrinthine canals of the ear)
Ménière's disease
Vertigo
Causes in the brain:
Concussion
Cerebral hemorrhage
Cerebral aneurysm
Migraine
Brain tumors, which can cause the chemoreceptors to malfunction
Benign intracranial hypertension and hydrocephalus
Metabolic disturbances (these may irritate both the stomach and the parts of the brain that coordinate vomiting):
Hypercalcemia (high calcium levels)
Uremia (urea accumulation, usually due to kidney failure)
Adrenal insufficiency
Hypoglycemia
Hyperglycemia
Pregnancy:
Hyperemesis, morning sickness
Drug reaction (vomiting may occur as an acute somatic response to):
Alcohol, which can be partially oxidized into acetaldehyde that causes the symptoms of hangover, including nausea, vomiting, shortness of breath, and fast heart rate.
Opioids
Selective serotonin reuptake inhibitors
Many chemotherapy drugs
Some entheogens (such as peyote or ayahuasca)
High altitude:
Altitude sickness
Illness (sometimes colloquially known as "stomach flu"—a broad name that refers to gastric inflammation caused by a range of viruses and bacteria):
Norovirus (formerly Norwalk virus or Norwalk agent)
Swine influenza
Psychiatric/behavioral:
Bulimia nervosa
Food neophobia
Purging disorder
Emetics
An emetic, such as syrup of ipecac, is a substance that induces vomiting when administered orally or by injection. An emetic is used medically when a substance has been ingested and must be expelled from the body immediately. For this reason, many toxic and easily digestible products such as rat poison contain an emetic. This presents no problem for the effectiveness of the rodenticide as rodents are unable to vomit. Inducing vomiting can remove the substance before it is absorbed into the body.
Emetics can be divided into two categories, those which produce their effect by acting on the vomiting center in the medulla, and those which act directly on the stomach itself. Some emetics, such as ipecac, fall into both categories; they initially act directly on the stomach, while their further and more vigorous effect occurs by stimulation of the medullary center.
Salt water and mustard water, which act directly on the stomach, have been used since ancient times as emetics. Care must be taken with salt, as excessive intake can potentially be harmful.
Copper sulfate was also used in the past as an emetic. It is now considered too toxic for this use.
Hydrogen peroxide is used as an emetic in veterinary practice.
Self-induced
Eating disorders (anorexia nervosa or bulimia nervosa)
To eliminate an ingested poison (some poisons should not be vomited as they may be more toxic when inhaled or aspirated; it is better to ask for help before inducing vomiting)
Some people who engage in binge drinking induce vomiting to make room in their stomachs for more alcohol consumption.
Participants in milk chugging typically end up vomiting most of the milk they consume, as proteins in the ingested milk (such as casein) rapidly denature and unravel on contact with gastric acid and protease enzymes, rapidly filling the stomach. Once the stomach becomes full, stretch receptors in the stomach wall trigger signals to vomit to expel any further liquid the participant ingests.
People suffering from nausea may induce vomiting in hopes of feeling better.
Miscellaneous
After surgery (postoperative nausea and vomiting)
Disagreeable sights or disgust, smells, tastes, sounds or thoughts (such as decayed matter, others' vomit, thinking of vomiting), etc.
Extreme pain, such as an intense headache or myocardial infarction (heart attack)
Extreme emotions
Cyclic vomiting syndrome (a poorly understood condition with attacks of vomiting)
Cannabinoid hyperemesis syndrome (similar to cyclic vomiting syndrome, but has cannabis use as its underlying cause).
High doses of ionizing radiation sometimes trigger a vomit reflex.
Violent fits of coughing, hiccups, or asthma
Anxiety
Depression
Overexertion (doing too much strenuous exercise can lead to vomiting shortly afterwards).
Other types
Projectile vomiting is vomiting that ejects the gastric contents with great force. It is a classic symptom of infantile hypertrophic pyloric stenosis, in which it typically follows feeding and can be so forceful that some material exits through the nose.
Treatment
An antiemetic is a drug that is effective against vomiting and nausea. Antiemetics are typically used to treat motion sickness and the side effects of medications such as opioids and chemotherapy.
Antiemetics act by inhibiting the receptor sites associated with emesis. Hence, anticholinergics, antihistamines, dopamine antagonists, serotonin antagonists, and cannabinoids are used as antiemetics.
Evidence to support the use of antiemetics for nausea and vomiting among adults in the emergency department is poor. It is unclear if any medication is better than another or better than no active treatment.
Epidemiology
Nausea and/or vomiting are the main complaints in 1.6% of visits to family physicians in Australia.
Society and culture
Herodotus, writing on the culture of the ancient Persians and highlighting the differences with those of the Greeks, notes that to vomit in the presence of others is prohibited among Persians.
Social cues
It is quite common that, when one person vomits, others nearby become nauseated, particularly when smelling the vomit of others, and often to the point of vomiting themselves. It is believed that this is an evolved trait among primates. Many primates in the wild tend to browse for food in small groups. Should one member of the party react adversely to some ingested food, it may be advantageous (in a survival sense) for other members of the party to also vomit. This tendency in human populations has been observed at drinking parties, where excessive consumption of alcoholic beverages may cause a number of party members to vomit nearly simultaneously, this being triggered by the initial vomiting of a single member of the party. This phenomenon has been touched on in popular culture: notorious instances appear in the films Monty Python's The Meaning of Life (1983) and Stand by Me (1986).
Intense vomiting in ayahuasca ceremonies is a common phenomenon. However, people who experience "la purga" after drinking ayahuasca, in general, regard the practise as both a physical and spiritual cleanse and often come to welcome it. It has been suggested that the consistent emetic effects of ayahuasca—in addition to its many other therapeutic properties—was of medicinal benefit to indigenous peoples of the Amazon, in helping to clear parasites from the gastrointestinal system.
There have also been documented cases of a single ill and vomiting individual inadvertently causing others to vomit, when they are especially fearful of also becoming ill, through a form of mass hysteria.
Most people try to contain their vomit by vomiting into a sink, toilet, or trash can, as vomit is difficult and unpleasant to clean. On airplanes and boats, special bags are supplied for sick passengers to vomit into. A special disposable bag (leakproof, puncture-resistant, odorless) containing absorbent material that solidifies the vomit quickly is also available, making it convenient and safe to store until there is an opportunity to dispose of it conveniently.
People who vomit chronically (e.g., as part of an eating disorder such as bulimia nervosa) may devise various ways to hide this disorder.
An online study of people's responses to "horrible sounds" found vomiting "the most disgusting". Professor Trevor Cox of the University of Salford's Acoustic Research Centre said, "We are pre-programmed to be repulsed by horrible things such as vomiting, as it is fundamental to staying alive to avoid nasty stuff." It is thought that disgust is triggered by the sound of vomiting to protect those nearby from possibly diseased food.
Psychology
Emetophilia is sexual arousal from vomiting, or watching others vomit. Emetophobia is a phobia that causes overwhelming, intense anxiety pertaining to vomiting.
| Biology and health sciences | Symptoms and signs | Health |
2711029 | https://en.wikipedia.org/wiki/Physics%20education | Physics education | Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
History
In Ancient Greece, Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Teaching strategies
Teaching strategies are the various techniques used to facilitate the education of students with different learning styles.
The different teaching strategies are intended to help students develop critical thinking and engage with the material. The choice of teaching strategy depends on the concept being taught, and indeed on the interest of the students.
Methods/Approaches for teaching physics
Lecture: Lecturing is one of the more traditional ways of teaching science. Owing to the convenience of this method, and the fact that most teachers are taught by it, it remains popular in spite of certain limitations (compared to other methods, it does little to develop critical thinking and scientific attitude among students). This method is teacher-centric.
Recitation: Also known as the Socratic method. In this method, the student plays a greater role than they would in a lecture. The teacher asks questions with the aim of prompting the thoughts of the students. This method can be very effective in developing higher order thinking in pupils. To apply this strategy, the students should be partially informed about the content. The efficacy of the recitation method depends largely on the quality of the questions. This method is student-centric.
Demonstration: In this method, the teacher performs certain experiments, which students observe and ask questions about. After the demonstration, the teacher can explain the experiment further and test the students' understanding via questions. This method is an important one, as science is not an entirely theoretical subject.
Lecture-cum-Demonstration: As its name suggests, this is a combination of two of the above methods: lecture and demonstration. The teacher performs the experiment and explains it simultaneously. By this method, the teacher can provide more information in less time. As with the demonstration method, the students only observe; they do not get any practical experience of their own. It is not possible to teach all topics by this method.
Laboratory Activities: Laboratories have students conduct physics experiments and collect data by interacting with physics equipment. Generally, students follow instructions in a lab manual. These instructions often take students through an experiment step-by-step. Typical learning objectives include reinforcing the course content through real-world interaction (similar to demonstrations) and thinking like experimental physicists. Lately, there has been some effort to shift lab activities toward the latter objective by separating from the course content, having students make their own decisions, and calling to question the notion of a "correct" experimental result. Unlike the demonstration method, the laboratory method gives students practical experience performing experiments like professional scientists. However, it often requires a significant amount of time and resources to work properly.
Problem-based learning: A group of 8-10 students and a tutor meet together to study a "case" or trigger problem. One student acts as a chair and one as a scribe to record the session. Students interact to understand the terminology and issues of the problem, discussing possible solutions and a set of learning objectives. The group breaks up for private study and then returns to share results. The approach has been used in many UK medical schools. The technique fosters independence, engagement, development of communication skill, and integration of new knowledge with real-world issues. However, the technique requires more staff per student, staff willing to facilitate rather than lecture, and well-designed and documented trigger scenarios. The technique has been shown to be effective in teaching physics.
Research
Physics education research is the study of how physics is taught and how students learn physics. It is a subfield of educational research.
Worldwide
Physics education in Hong Kong
Physics education in the United Kingdom
| Physical sciences | Physics basics: General | Physics |
2714149 | https://en.wikipedia.org/wiki/Symmetry%20in%20mathematics | Symmetry in mathematics | Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations.
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry).
In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are illustrated in the sections below.
Symmetry in geometry
The types of symmetry considered in basic geometry include reflectional symmetry, rotational symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry).
Symmetry in calculus
Even and odd functions
Even functions
Let f(x) be a real-valued function of a real variable; then f is even if the following equation holds for all x and −x in the domain of f: f(x) = f(−x).
Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include |x|, x2, x4, cos(x), and cosh(x).
Odd functions
Again, let f be a real-valued function of a real variable; then f is odd if the following equation holds for all x and −x in the domain of f: −f(x) = f(−x).
That is, f(−x) + f(x) = 0.
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x3, sin(x), sinh(x), and erf(x).
Integrating
The integral of an odd function from −A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between −A and A).
The integral of an even function from −A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between −A and A). This also holds true when A is infinite, but only if the integral converges.
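These two identities are easy to illustrate numerically. The sketch below approximates the integrals with plain Riemann sums on a symmetric grid; the tolerances are loose because the sums are only approximations.

    import numpy as np

    # sample an odd and an even function on a symmetric grid over [-A, A], A = 3
    x, dx = np.linspace(-3.0, 3.0, 60001, retstep=True)

    odd_integral = np.sum(x**3) * dx                 # integrand is odd
    even_integral = np.sum(np.cos(x)) * dx           # integrand is even
    half_integral = np.sum(np.cos(x[x >= 0])) * dx   # integral from 0 to A

    assert abs(odd_integral) < 1e-6                        # odd: integral vanishes
    assert abs(even_integral - 2 * half_integral) < 1e-3   # even: twice the half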
Series
The Maclaurin series of an even function includes only even powers.
The Maclaurin series of an odd function includes only odd powers.
The Fourier series of a periodic even function includes only cosine terms.
The Fourier series of a periodic odd function includes only sine terms.
Symmetry in linear algebra
Symmetry in matrices
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if A = AT, where AT denotes the transpose of A.
By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (aij), then aij = aji, for all indices i and j.
For example, the following 3×3 matrix is symmetric:

    1  7  3
    7  4  5
    3  5  6
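Checking the A = AT condition is a one-line test in numerical code; a minimal numpy sketch using the matrix above:

    import numpy as np

    A = np.array([[1, 7, 3],
                  [7, 4, 5],
                  [3, 5, 6]])

    # symmetric means equal to its own transpose, i.e. a_ij == a_ji entrywise
    assert np.array_equal(A, A.T)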
Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
Symmetry in abstract algebra
Symmetric groups
The symmetric group Sn (on a finite set of n symbols) is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (i.e., the number of elements) of the symmetric group Sn is n!.
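The order n! and closure under composition can be checked directly with the standard library; a small sketch for n = 4:

    import math
    from itertools import permutations

    n = 4
    perms = set(permutations(range(n)))        # all bijections of {0, ..., n-1}
    assert len(perms) == math.factorial(n)     # |S_n| = n!, here 24

    # composing two permutations (as tuples) yields another permutation
    p, q = (1, 0, 2, 3), (0, 2, 1, 3)
    composed = tuple(p[q[i]] for i in range(n))
    assert composed in perms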
Symmetric polynomials
A symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n, one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn).
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
Examples
In two variables X1 and X2, one has symmetric polynomials such as X1 + X2 and X1X2;
and in three variables X1, X2 and X3, one has, for example, the symmetric polynomial X1X2X3 + X1 + X2 + X3.
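The defining invariance can be verified mechanically by substituting every permutation of the variables; a sketch using sympy on the three-variable example above:

    from itertools import permutations
    from sympy import symbols, simplify

    X1, X2, X3 = symbols("X1 X2 X3")
    P = X1*X2*X3 + X1 + X2 + X3          # the three-variable example above

    variables = (X1, X2, X3)
    for sigma in permutations(variables):
        # rename all variables simultaneously according to the permutation sigma
        Q = P.subs(dict(zip(variables, sigma)), simultaneous=True)
        assert simplify(P - Q) == 0      # P is unchanged by every permutation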
Symmetric tensors
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments: T(v1, v2, ..., vr) = T(vσ(1), vσ(2), ..., vσ(r))
for every permutation σ of the symbols {1, 2, ..., r}.
Alternatively, an rth-order symmetric tensor represented in coordinates as a quantity with r indices satisfies Ti1i2...ir = Tiσ(1)iσ(2)...iσ(r).
The space of symmetric tensors of rank r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
Galois theory
Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, A2 + 5B3 = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations.
Automorphisms of algebraic objects
In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V).
A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension.
Symmetry in representation theory
Symmetry in quantum mechanics: bosons and fermions
In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives.
This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x⟩ and the other in state |y⟩:

|ψ⟩ = Σx,y A(x,y) |x,y⟩

and antisymmetry under exchange means that A(x,y) = −A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor.

Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component

A(x,y) = ⟨ψ|x,y⟩ = ⟨ψ|(|x⟩ ⊗ |y⟩)

is necessarily antisymmetric. To prove it, consider the matrix element

⟨ψ|((|x⟩ + |y⟩) ⊗ (|x⟩ + |y⟩)).

This is zero, because the two particles have zero probability to both be in the superposition state |x⟩ + |y⟩. But this is equal to

⟨ψ|x,x⟩ + ⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ + ⟨ψ|y,y⟩.

The first and last terms on the right-hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey

⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ = 0,

or

A(x,y) = −A(y,x).
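These facts about antisymmetric arrays are easy to illustrate numerically; a numpy sketch follows (a random antisymmetric A has zero diagonal, and an orthogonal change of basis keeps it antisymmetric; it is illustrative only, not tied to any particular physical system):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M - M.T                           # antisymmetric: A(x, y) = -A(y, x)

    assert np.allclose(np.diag(A), 0.0)   # A(x, x) = 0: the exclusion statement

    # an orthogonal change of basis preserves antisymmetry of the rank-two tensor
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    B = Q @ A @ Q.T
    assert np.allclose(B, -B.T)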
Symmetry in set theory
Symmetric relation
We call a relation symmetric if, whenever the relation holds from A to B, it also holds from B to A; that is, A R B implies B R A.
Note that symmetry is not the exact opposite of antisymmetry.
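For a finite relation stored as a set of ordered pairs, the definition is directly checkable; a minimal sketch:

    def is_symmetric(relation):
        # relation: a set of (a, b) pairs; symmetric iff (b, a) is present for every (a, b)
        return all((b, a) in relation for (a, b) in relation)

    assert is_symmetric({(1, 2), (2, 1), (3, 3)})
    assert not is_symmetric({(1, 2)})      # (2, 1) is missing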
Symmetry in metric spaces
Isometries of a space
An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: related by either a rigid motion, or a composition of a rigid motion and a reflection. Up to a relation by a rigid motion, they are equal if related by a direct isometry.
Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc.
Symmetries of differential equations
A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation.
A Lie symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order.
For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration.
Symmetries may be found by solving a related set of ordinary differential equations. Solving these equations is often much simpler than solving the original differential equations.
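As a toy illustration only (not the general Lie machinery), the sketch below verifies symbolically that the scaling x → λx is a symmetry of the first-order equation dy/dx = y/x: applying it to a solution yields another solution.

    from sympy import symbols, diff, simplify

    x, lam, C = symbols("x lam C", positive=True)

    y = C * x                        # general solution of dy/dx = y/x
    assert simplify(diff(y, x) - y / x) == 0

    y_scaled = y.subs(x, lam * x)    # transform the solution by x -> lam*x
    assert simplify(diff(y_scaled, x) - y_scaled / x) == 0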
Symmetry in probability
In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution.
In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution.
In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry.
There is one type of isometry in one dimension that may leave the probability distribution unchanged, that is reflection in a point, for example zero.
A possible symmetry for randomness with positive outcomes is that the preceding reflection symmetry applies to the logarithm of the outcome, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely.
For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.
| Mathematics | Geometry | null |
2715469 | https://en.wikipedia.org/wiki/Symmetry%20%28physics%29 | Symmetry (physics) | The symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation.
A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group).
These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems.
Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity.
As a kind of invariance
Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room.
Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve the shape of its surface from any given vantage point.
Invariance in force
The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well.
For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges.
In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2, the total kinetic energy of the system (as calculated from an observer at the origin) is (1/2)m(v1² + v2²), and it remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.
The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged.
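The interchange invariance is immediate to confirm numerically; a one-off sketch (the function name is illustrative):

    def total_kinetic_energy(m, v1, v2):
        # two bodies of equal mass m with speeds v1 and v2
        return 0.5 * m * (v1**2 + v2**2)

    # swapping the speeds (the reflection described above) leaves the energy unchanged
    assert total_kinetic_energy(2.0, 3.0, 5.0) == total_kinetic_energy(2.0, 5.0, 3.0)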
Local and global
Symmetries may be broadly classified as global or local. A global symmetry is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a local symmetry is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically a local symmetry transformation is parameterised by the spacetime coordinates, whereas a global symmetry is not. This implies that a global symmetry is also a local symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories.
Continuous
The two examples of rotational symmetry described above – spherical and cylindrical – are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries.
Spacetime
Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.
Time translation: A physical system may have the same features over a certain interval of time Δt; this is expressed mathematically as invariance under the transformation t → t + a for any real parameters t and t + a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t and also at t + a, the particle's total gravitational potential energy will be preserved.
Spatial translation: These spatial symmetries are represented by transformations of the form r → r + a and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity.
Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.
Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system.
Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.
Discrete
A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges.
Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation t → −t. For example, Newton's second law of motion still holds if, in the equation F = m(d²x/dt²), t is replaced by −t. This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height, as the sketch after this list verifies numerically.
Spatial inversion: These are represented by transformations of the form r → −r and indicate an invariance property of a system when the coordinates are 'inverted'. Stated another way, these are symmetries between a certain object and its mirror image.
Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries.
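The time-reversal item above lends itself to a quick numeric check. The following sketch (the value of g and the launch speed are illustrative assumptions) verifies that the height of a vertically thrown object is symmetric about the instant of maximum height:

    # Vertical throw under gravity: height h(t) = v0*t - 0.5*g*t**2.
    # The peak occurs at t_peak = v0/g; time reversal about the peak
    # maps t_peak + s -> t_peak - s and should leave the height unchanged.
    g, v0 = 9.81, 20.0
    t_peak = v0 / g

    def height(t):
        return v0 * t - 0.5 * g * t**2

    for s in [0.1, 0.5, 1.0, 1.5]:
        assert abs(height(t_peak + s) - height(t_peak - s)) < 1e-9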
C, P, and T
The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced.
C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle.
P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes. The weak interaction violates this symmetry, as demonstrated by Chien-Shiung Wu.
T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future". Since we perceive the "past" ("future") as having lower (higher) entropy than the present, the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past, and vice versa.
These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.
Supersymmetry
A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Searches at the Large Hadron Collider (LHC) continue to test supersymmetry.
Generalized symmetries
Generalized symmetries encompass a number of recently recognized generalizations of the concept of a global symmetry. These include higher form symmetries, higher group symmetries, non-invertible symmetries, and subsystem symmetries.
Mathematics of physical symmetry
The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists.
Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group).
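A short sketch can make the SO(3) statement concrete; the rotation axis and angle below are arbitrary choices, and NumPy is assumed to be available:

    import numpy as np

    # A rotation about the z-axis by an arbitrary angle: an element of SO(3).
    theta = 0.73
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])

    assert np.allclose(R.T @ R, np.eye(3))    # orthogonal: R^T R = I
    assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation: det R = +1

    # Rotations preserve distances between points on the sphere.
    p = np.array([1.0, 0.0, 0.0])
    q = np.array([0.0, 0.0, 1.0])
    assert np.isclose(np.linalg.norm(R @ p - R @ q), np.linalg.norm(p - q))

The same checks pass for any product of such matrices, reflecting the group structure.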
Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3.
A type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)
Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).
Conservation laws and symmetry
The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy.
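Both conservation laws can be observed numerically in a projectile setting: the potential m*g*y is independent of x (spatial translation symmetry along x) and of t (time translation symmetry), so the x-momentum and the total energy stay constant along the trajectory. A minimal sketch, with the mass, g, and initial conditions chosen purely for illustration:

    import numpy as np

    m, g = 1.2, 9.81
    x0, y0, vx0, vy0 = 0.0, 0.0, 4.0, 10.0

    for t in np.linspace(0.0, 2.0, 50):
        vx, vy = vx0, vy0 - g * t
        y = y0 + vy0 * t - 0.5 * g * t**2
        p_x = m * vx                               # conserved by x-translation symmetry
        E = 0.5 * m * (vx**2 + vy**2) + m * g * y  # conserved by time-translation symmetry
        assert np.isclose(p_x, m * vx0)
        assert np.isclose(E, 0.5 * m * (vx0**2 + vy0**2) + m * g * y0)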
The following table summarizes some fundamental symmetries and the associated conserved quantities:
Translation in time: energy
Translation in space: linear momentum
Rotation in space: angular momentum
U(1) gauge transformation: electric charge
Mathematics
Continuous symmetries in physics are described by continuous transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra.
A general coordinate transformation described by a general field hμ(x) (also known as a diffeomorphism) has an infinitesimal effect on a scalar, spinor or vector field that can be expressed (using the Einstein summation convention); for a scalar field φ, for example: δφ(x) = hμ(x) ∂μφ(x).
Without gravity only the Poincaré symmetries are preserved, which restricts hμ(x) to be of the form hμ(x) = Mμν xν + Pμ, where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and spinor field: δψ(x) = λ(x) τ ψ(x) and δAμ(x) = ∂μλ(x),
where τ are the generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.
Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind: δφ(x) = Ω(x) φ(x).
If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity hμ(x) would be restricted to the form hμ(x) = Mμν xν + Pμ + D xμ + Kμ x² − 2 Kν xν xμ,
with D generating scale transformations and K generating special conformal transformations. For example, N = 4 supersymmetric Yang–Mills theory has this symmetry while general relativity does not, although other theories of gravity, such as conformal gravity, do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics involves speculating on the various symmetries the Universe may have and finding the invariants with which to construct field theories as models.
In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields.
Organosulfur chemistry
Organosulfur chemistry is the study of the properties and synthesis of organosulfur compounds, which are organic compounds that contain sulfur. They are often associated with foul odors, but many of the sweetest compounds known are organosulfur derivatives, e.g., saccharin. Nature abounds with organosulfur compounds—sulfur is vital for life. Of the 20 common amino acids, two (cysteine and methionine) are organosulfur compounds, and the antibiotics penicillin and sulfa drugs both contain sulfur. While sulfur-containing antibiotics save many lives, sulfur mustard is a deadly chemical warfare agent. Fossil fuels, coal, petroleum, and natural gas, which are derived from ancient organisms, necessarily contain organosulfur compounds, the removal of which is a major focus of oil refineries.
Sulfur shares the chalcogen group with oxygen, selenium, and tellurium, and it is expected that organosulfur compounds have similarities with carbon–oxygen, carbon–selenium, and carbon–tellurium compounds.
A classical chemical test for the detection of sulfur compounds is the Carius halogen method.
Structural classes
Organosulfur compounds can be classified according to the sulfur-containing functional groups, which are listed (approximately) in decreasing order of their occurrence.
Sulfides
Sulfides, formerly known as thioethers, are characterized by C−S−C bonds. Relative to C−C bonds, C−S bonds are both longer, because sulfur atoms are larger than carbon atoms, and about 10% weaker. Representative bond lengths in sulfur compounds are 183 pm for the S−C single bond in methanethiol and 173 pm in thiophene. The C−S bond dissociation energy for thiomethane is 89 kcal/mol (370 kJ/mol) compared to methane's 100 kcal/mol (420 kJ/mol), and when hydrogen is replaced by a methyl group the energy decreases to 73 kcal/mol (305 kJ/mol). The single carbon-to-oxygen bond is shorter than the C−C bond. The bond dissociation energies for dimethyl sulfide and dimethyl ether are respectively 73 and 77 kcal/mol (305 and 322 kJ/mol).
Sulfides are typically prepared by alkylation of thiols. Alkylating agents include not only alkyl halides, but also epoxides, aziridines, and Michael acceptors.
They can also be prepared via the Pummerer rearrangement.
In the Ferrario reaction, phenyl ether is converted to phenoxathiin by action of elemental sulfur and aluminium chloride.
Thioacetals and thioketals feature a C−S−C−S−C bond sequence. They represent a subclass of sulfides. The thioacetals are useful in the "umpolung" (polarity reversal) of carbonyl groups. Thioacetals and thioketals can also be used to protect a carbonyl group in organic syntheses.
The above classes of sulfur compounds also exist in saturated and unsaturated heterocyclic structures, often in combination with other heteroatoms, as illustrated by thiiranes, thiirenes, thietanes, thietes, dithietanes, thiolanes, thianes, dithianes, thiepanes, thiepines, thiazoles, isothiazoles, and thiophenes, among others. The latter three compounds represent a special class of sulfur-containing heterocycles that are aromatic. The resonance stabilization of thiophene is 29 kcal/mol (121 kJ/mol) compared to 20 kcal/mol (84 kJ/mol) for the oxygen analogue furan. The reason for this difference is the higher electronegativity of oxygen, which draws electrons toward itself at the expense of the aromatic ring current. Yet as an aromatic substituent the thio group is less electron-releasing than the alkoxy group. Dibenzothiophenes, tricyclic heterocycles consisting of two benzene rings fused to a central thiophene ring, occur widely in heavier fractions of petroleum.
Thiols, disulfides, polysulfides
Thiol groups contain the functionality R−SH. Thiols are structurally similar to the alcohol group, but these functionalities are very different in their chemical properties. Thiols are more nucleophilic, more acidic, and more readily oxidized; their acidity can differ from that of the corresponding alcohols by about 5 pKa units.
The difference in electronegativity between sulfur (2.58) and hydrogen (2.20) is small and therefore hydrogen bonding in thiols is not prominent. Aliphatic thiols form monolayers on gold, which are topical in nanotechnology.
Certain aromatic thiols can be accessed through a Herz reaction.
Disulfides R−S−S−R with a covalent sulfur to sulfur bond are important for crosslinking: in biochemistry for the folding and stability of some proteins and in polymer chemistry for the crosslinking of rubber.
Longer sulfur chains are also known, such as in the natural product varacin which contains an unusual pentathiepin ring (5-sulfur chain cyclised onto a benzene ring).
Thioesters
Thioesters have general structure R−C(O)−S−R. They are related to regular esters (R−C(O)−O−R) but are more susceptible to hydrolysis and related reactions. Thioesters formed from coenzyme A are prominent in biochemistry, especially in fatty acid synthesis.
Sulfoxides, sulfones and thiosulfinates
A sulfoxide, R−S(O)−R, is the S-oxide of a sulfide ("sulfide oxide"), a sulfone, R−S(O)2−R, is the S,S-dioxide of a sulfide, a thiosulfinate, R−S(O)−S−R, is the S-oxide of a disulfide, and a thiosulfonate, R−S(O)2−S−R, is the S,S-dioxide of a disulfide. All of these compounds are well known with extensive chemistry, e.g., dimethyl sulfoxide, dimethyl sulfone, and allicin.
Sulfimides, sulfoximides, sulfonediimines
Sulfimides (also called a sulfilimines) are sulfur–nitrogen compounds of structure R2S=NR′, the nitrogen analog of sulfoxides. They are of interest in part due to their pharmacological properties. When two different R groups are attached to sulfur, sulfimides are chiral. Sulfimides form stable α-carbanions.
Sulfoximides (also called sulfoximines) are tetracoordinate sulfur–nitrogen compounds, isoelectronic with sulfones, in which one oxygen atom of the sulfone is replaced by a substituted nitrogen atom, e.g., R2S(O)=NR′. When two different R groups are attached to sulfur, sulfoximides are chiral. Much of the interest in this class of compounds is derived from the discovery that methionine sulfoximide (methionine sulfoximine) is an inhibitor of glutamine synthetase.
Sulfonediimines (also called sulfodiimines, sulfodiimides or sulfonediimides) are tetracoordinate sulfur–nitrogen compounds, isoelectronic with sulfones, in which both oxygen atoms of the sulfone are replaced by a substituted nitrogen atom, e.g., R2S(=NR′)2. They are of interest because of their biological activity and as building blocks for heterocycle synthesis.
S-Nitrosothiols
S-Nitrosothiols, also known as thionitrites, are compounds containing a nitroso group attached to the sulfur atom of a thiol, e.g. R−S−N=O. They have received considerable attention in biochemistry because they serve as donors of the nitrosonium ion, NO+, and nitric oxide, NO, which may serve as signaling molecules in living systems, especially related to vasodilation.
Sulfur halides
A wide range of organosulfur compounds are known which contain one or more halogen atom ("X" in the chemical formulas that follow) bonded to a single sulfur atom, e.g.: sulfenyl halides, RSX; sulfinyl halides, RS(O)X; sulfonyl halides, RSO2X; alkyl and arylsulfur trichlorides, RSCl3 and trifluorides, RSF3; and alkyl and arylsulfur pentafluorides, RSF5. Less well known are dialkylsulfur tetrahalides, mainly represented by the tetrafluorides, e.g., R2SF4.
Thioketones, thioaldehydes, and related compounds
Compounds with double bonds between carbon and sulfur are relatively uncommon, but include the important compounds carbon disulfide, carbonyl sulfide, and thiophosgene. Thioketones (RC(=S)R′) are uncommon with alkyl substituents, but one example is thiobenzophenone. Thioaldehydes are rarer still, reflecting their lack of steric protection ("thioformaldehyde" exists as a cyclic trimer). Thioamides, with the formula R1C(=S)N(R2)R3 are more common. They are typically prepared by the reaction of amides with Lawesson's reagent. Isothiocyanates, with formula R−N=C=S, are found naturally. Vegetable foods with characteristic flavors due to isothiocyanates include wasabi, horseradish, mustard, radish, Brussels sprouts, watercress, nasturtiums, and capers.
S-Oxides and S,S-dioxides of thiocarbonyl compounds
The S-oxides of thiocarbonyl compounds are known as thiocarbonyl S-oxides (R2C=S=O), and the S,S-dioxides as thiocarbonyl S,S-dioxides or sulfenes (R2C=SO2). The thione S-oxides have also been known as sulfines, and while IUPAC considers this term obsolete, the name persists in the literature. These compounds are well known with extensive chemistry. Examples include syn-propanethial-S-oxide and sulfene.
Triple bonds between carbon and sulfur
Triple bonds between sulfur and carbon in sulfaalkynes are rare and can be found in carbon monosulfide (CS) and have been suggested for the compounds F3CCSF3 and F5SCSF3. The compound HCSOH is also represented as having a formal triple bond.
Thiocarboxylic acids and thioamides
Thiocarboxylic acids (RC(O)SH) and dithiocarboxylic acids (RC(S)SH) are well known. They are structurally similar to carboxylic acids but more acidic. Thioamides are analogous to amides.
Sulfonic, sulfinic and sulfenic acids, esters, amides, and related compounds
Sulfonic acids have the functionality R−S(=O)2−OH. They are strong acids that are typically soluble in organic solvents. Sulfonic acids such as trifluoromethanesulfonic acid are frequently used reagents in organic chemistry. Sulfinic acids have the functionality R−S(O)−OH, while sulfenic acids have the functionality R−S−OH. In the series sulfonic–sulfinic–sulfenic acids, both the acid strength and the stability diminish in that order. Sulfonamides, sulfinamides and sulfenamides, with formulas R−SO2NR′2, R−S(O)NR′2, and R−SNR′2, respectively, each have a rich chemistry. For example, sulfa drugs are sulfonamides derived from aromatic sulfonation. Chiral sulfinamides are used in asymmetric synthesis, while sulfenamides are used extensively in the vulcanization process to assist cross-linking. Thiocyanates, R−S−CN, are related to sulfenyl halides and esters in terms of reactivity.
Sulfonium, oxosulfonium and related salts
A sulfonium ion is a positively charged ion featuring three organic substituents attached to sulfur, with the formula [R3S]+. Together with their negatively charged counterpart, the anion, the compounds are called sulfonium salts. An oxosulfonium ion is a positively charged ion featuring three organic substituents and an oxygen attached to sulfur, with the formula [R3S=O]+. Together with their negatively charged counterpart, the anion, the compounds are called oxosulfonium salts. Related species include alkoxysulfonium and chlorosulfonium ions, [R2SOR]+ and [R2SCl]+, respectively.
Sulfonium, oxosulfonium and thiocarbonyl ylides
Deprotonation of sulfonium and oxosulfonium salts affords ylides, of structure R2S+−C−−R′2 and R2S(O)+−C−−R′2. While sulfonium ylides, for instance in the Johnson–Corey–Chaykovsky reaction used to synthesize epoxides, are sometimes drawn with a C=S double bond, e.g., R2S=CR′2, the ylidic carbon–sulfur bond is highly polarized and is better described as being ionic. Sulfonium ylides are key intermediates in the synthetically useful Stevens rearrangement. Thiocarbonyl ylides (RR′C=S+−C−−RR′) can form by ring-opening of thiiranes, photocyclization of aryl vinyl sulfides, as well as by other processes.
Sulfuranes and persulfuranes
Sulfuranes are a relatively specialized functional group featuring tetravalent sulfur, with the formula SR4. Likewise, persulfuranes feature hexavalent sulfur, SR6.
One of the few all-carbon persulfuranes has two methyl and two biphenylene ligands:
It is prepared by treating the corresponding sulfurane 1 with xenon difluoride/boron trifluoride in acetonitrile to give the sulfuranyl dication 2, followed by reaction with methyllithium in tetrahydrofuran to give the (stable) persulfurane 3 as the cis isomer. X-ray diffraction shows C−S bond lengths ranging between 189 and 193 pm (longer than the standard bond length), with the central sulfur atom in a distorted octahedral molecular geometry.
Organosulfur compounds in nature
A variety of organosulfur compounds occur in nature. Most abundant are the amino acids methionine, cysteine, and cystine. The vitamins biotin and thiamine, as well as lipoic acid contain sulfur heterocycles. Glutathione is the primary intracellular antioxidant. Penicillin and cephalosporin are life-saving antibiotics, derived from fungi. Gliotoxin is a sulfur-containing mycotoxin produced by several species of fungi under investigation as an antiviral agent.
In fossil fuels
Organosulfur compounds are commonly present in petroleum fractions at levels of 200–500 ppm. Common compounds are thiophenes, especially dibenzothiophenes. These compounds are removed in refineries by the process of hydrodesulfurization (HDS), as illustrated by the hydrogenolysis of thiophene: C4H4S + 4 H2 → C4H10 + H2S.
Flavor and odor
Compounds like allicin and ajoene are responsible for the odor of garlic. Lenthionine contributes to the flavor of shiitake mushrooms. Volatile organosulfur compounds also contribute subtle flavor characteristics to wine, nuts, cheddar cheese, chocolate, coffee, and tropical fruit flavors. Many of these natural products also have important medicinal properties such as preventing platelet aggregation or fighting cancer.
Humans and other animals have an exquisitely sensitive sense of smell toward the odor of low-valent organosulfur compounds such as thiols, sulfides, and disulfides. Malodorous volatile thiols are protein-degradation products found in putrid food, so sensitive identification of these compounds is crucial to avoiding intoxication. Low-valent volatile sulfur compounds are also found in areas where oxygen levels in the air are low, posing a risk of suffocation.
Copper is required for the highly sensitive detection of certain volatile thiols and related organosulfur compounds by olfactory receptors in mice. Whether humans, too, require copper for sensitive detection of thiols is not yet known.
Abies procera
Abies procera, the noble fir, also called red fir and Christmas tree, is a species of fir native to the Cascade Range and Pacific Coast Ranges of the northwestern Pacific Coast of the United States. It occurs at altitudes of .
Description
A. procera is a large evergreen conifer with a narrow conic crown, growing up to tall and in trunk diameter, rarely to tall and thick. The bark on young trees is smooth and gray with resin blisters, becoming red-brown, rough and fissured on old trees, usually less than thick; the inner bark is reddish. The leaves are needle-like, long, glaucous blue-green above and below with strong stomal bands, and a blunt to notched tip. They are arranged spirally on the shoot, but twisted slightly S-shaped to be upcurved above the shoot. The cones are upright, long and thick, with the purple scales almost completely hidden by the long exserted yellow-green bract scales; they ripen brown and disintegrate to release the winged seeds in fall. Viable seeds are only produced every few years.
The species can grow for up to 200 years.
Taxonomy
David Douglas discovered the species in the Cascade Range in the early 19th century, calling it "noble fir".
The specific epithet procera means "tall". It is the world's tallest true fir.
Distribution
The species is native to the Cascade Range and Pacific Coast Ranges of western Washington and Oregon, as well as the extreme northwest of California. It is a high-altitude tree, typically occurring at altitudes of , often above , and only rarely reaching the tree line.
Ecology
The species is closely related to Abies magnifica (red fir), which replaces it further southeast in southernmost Oregon and California, being best distinguished by the leaves having a groove along the midrib on the upper side; red fir does not show this. Red fir also tends to have the leaves less closely packed, with the shoot bark visible between the leaves, whereas the shoot is largely hidden in noble fir. Red fir cones also mostly have shorter bracts, except in A. magnifica var. shastensis (Shasta red fir); this variety hybridizes with noble fir and may itself be a hybrid between noble fir and red fir. As opposed to Shasta red fir, noble fir is shade-intolerant, leaving its lower trunk branchless.
Noble fir occurs with Douglas-fir and western hemlock at middle elevations, and with Pacific silver fir and mountain hemlock at higher elevations. It occurs in cool, humid areas similar to those occupied by Pacific silver fir. While it benefits from occasional disturbances (e.g. the 1980 eruption of Mount St. Helens), it is very susceptible to fire but is usually protected by its moist environment. It is relatively resistant to damage from wind, insects or diseases. Although the roots grow slowly, it can survive in rocky soil as long as it is moist.
Uses
The Paiute used the foliage to treat coughs and colds.
The superior light and strong wood was recognized early by loggers, who called it "larch" to avoid conflating it with inferior firs. The wood is used for specialized applications such as ladders, general structural purposes and paper manufacture. It may have been used for the frames of the Royal Air Force's Mosquito bombers during World War II.
David Douglas sent noble fir seeds to Britain in 1830, introducing it to horticulturalists. It is a popular and favored Christmas tree. The prostrate grey cultivar A. procera (Glauca Group) 'Glauca Prostrata' has gained the Royal Horticultural Society's Award of Garden Merit.
Spring green
Spring green is a color that was traditionally considered to be on the yellow side of green, but in modern computer systems based on the RGB color model is halfway between cyan and green on the color wheel.
The modern spring green, when plotted on the CIE chromaticity diagram, corresponds to a visual stimulus of about 505 nanometers on the visible spectrum. In HSV color space, the expression of which is known as the RGB color wheel, spring green has a hue of 150°. Spring green is one of the tertiary colors on the RGB color wheel, where it is the complementary color of rose.
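The 150° hue can be checked with Python's standard colorsys module (which takes hue as a fraction of a full turn); full saturation and value at that hue reproduce the web color spring green, #00FF7F:

    import colorsys

    # Hue 150 degrees at full saturation and value, converted to RGB.
    r, g, b = colorsys.hsv_to_rgb(150 / 360, 1.0, 1.0)
    print(int(r * 255), int(g * 255), int(b * 255))  # 0 255 127 -> #00FF7F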
The first recorded use of spring green as a color name in English was in 1766, referring to roughly the color now called spring bud.
Spring green (computer)
Spring green (HTML)
Spring green is a web color, common to X11 and HTML.
Medium spring green
Displayed at right is the color medium spring green.
Medium spring green is a web color. It lies close to, but not exactly on, the spring green point of the color wheel, slightly closer to cyan than to green.
Dark spring green
At right is displayed the web color dark spring green.
Additional variations of web spring green
Mint cream
Displayed at right is the web color mint cream, a pale pastel tint of spring green.
The color mint cream is a representation of the color of the interior of an after dinner mint (which is disc shaped with mint flavored buttercream on the inside and a chocolate coating on the outside).
Sea green
Sea green is a shade of cyan color that resembles the hue of shallow seawater as seen from the surface.
Sea green is notable for being the emblematic color of the Levellers party in the politics of 1640s England. Leveller supporters would wear a sea-green ribbon, in a similar manner to the present-day red AIDS awareness ribbon.
Medium sea green
At right is displayed the web color medium sea green, a medium shade of spring green.
Aquamarine
Aquamarine is a color that is a pale bright tint of spring green toned toward cyan. It represents the color of the aquamarine gemstone. Aquamarine is the birthstone for those born from January 21 to February 20 in the tropical zodiac, and from February 14 to March 15 in the sidereal zodiac.
Spring green (traditional)
Spring bud
Spring bud is the color that used to be called spring green before the X11 web color spring green was formulated in 1987 when the X11 colors were first promulgated. This color is now called spring bud to avoid confusion with the web color.
The color is also called soft spring green, spring green (traditional), or spring green (M&P).
The first recorded use of spring green as a color name in English (meaning the color that is now called spring bud) was in 1766.
Additional variations of traditional spring green
Emerald
Emerald, also called emerald green, is a tone of green that is particularly light and bright, with a faint bluish cast. The name derives from the typical appearance of the emerald gemstone.
The first recorded use of emerald as a color name in English was in 1598.
Ireland is sometimes referred to as the Emerald Isle due to its lush greenery. The May birthstone is emerald. Seattle is sometimes referred to as the Emerald City, because its abundant rainfall creates lush vegetation. In the Middle Ages, The Emerald Tablet of Hermes Trismegistus was believed to contain the secrets of alchemy. "Emerald City", from the story of The Wonderful Wizard of Oz by L. Frank Baum, is a city where everything from food to people is emerald green. However, it is revealed at the end of the story that everything in the city is normally colored, and the glasses everyone wears are emerald tinted. The Green Zone in Baghdad is sometimes ironically and cynically referred to as the Emerald City. The Emerald Buddha is a figurine of the sitting Buddha, made of green jade (rather than emerald), clothed in gold, and about 45 cm tall. It is kept in the Chapel of the Emerald Buddha (Wat Phra Kaew) on the grounds of the Grand Palace in Bangkok. The Emerald Triangle refers to the three counties of Mendocino, Humboldt, and Trinity in Northern California, United States, because these three counties are the biggest marijuana-producing counties in California and in the US as a whole. A county-commissioned study reports that marijuana accounts for up to two-thirds of the economy of Mendocino. Emerald Cities: Urban Sustainability and Economic Development is a book published in 2010 by Joan Fitzgerald, director of the law, policy and society program at Northeastern University, about ecologically sustainable city planning.
The emerald pigment was invented in Germany in 1814, produced by mixing and boiling copper acetate (itself made with vinegar) with arsenic, which formed a bright blue-green hue. During the 19th century, the arsenic-containing dye Paris green was marketed as emerald green. It was notorious for causing deaths because it was a popular color for wallpaper. Victorian women used this bright color for dresses, and florists used it on fake flowers.
Viridian
At right is displayed the color viridian, a medium tone of spring green.
The first recorded use of viridian as a color name in English was in the 1860s (exact year uncertain).
Other variations of spring green
Green (CMYK) (pigment green)
The color defined as green in the CMYK color system used in printing, also known as pigment green, is the tone of green that is achieved by mixing process (printer's) cyan and process (printer's) yellow in equal proportions. It is displayed at adjacent.
The purpose of the CMYK color system is to provide the maximum possible gamut of color reproducible in printing.
The color indicated is only approximate as the colors of printing inks may vary.
Green (NCS) (psychological primary green)
The color defined as green in the NCS or Natural Color System is shown at adjacent (NCS 2060-G). The natural color system is a color system based on the four unique hues or psychological primary colors red, yellow, green, and blue. The NCS is based on the opponent process theory of vision.
The Natural Color System is widely used in Scandinavia.
Green (Munsell)
The color defined as green in the Munsell color system (Munsell 5G) is shown adjacent. The Munsell color system is a color space that specifies colors based on three color dimensions: hue, value (lightness), and chroma (color purity), spaced uniformly in three dimensions within the irregularly shaped Munsell color solid according to the logarithmic scale which governs human perception. In order for all the colors to be spaced uniformly, it was found necessary to use a color wheel with five primary colors—red, yellow, green, blue, and purple.
The Munsell colors displayed are only approximate as they have been adjusted to fit into the sRGB gamut.
Green (Pantone)
Green (Pantone) is the color that is called green in Pantone.
The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color # green C, EC, HC, PC, U, or UP—green.
Green (Crayola)
Green (Crayola) is the color called green in Crayola crayons.
Green was one of the original Crayola crayons introduced in 1903.
Erin
Adjacent is displayed the color erin. The first recorded use of erin as a color name was in 1922.
Bright mint
Displayed adjacent is the color bright mint.
Dark green
Dark green is a dark shade of green. A different shade of green has been designated as "dark green (X11)" for certain computer uses.
Dark pastel green
Adjacent is the color dark pastel green.
Screamin' green
The color screamin' green is shown adjacent.
This color was renamed from ultra green by Crayola in 1990.
This color is a fluorescent color.
Cambridge blue
Cambridge blue is the color commonly used by sports teams from Cambridge University.
This color is actually a medium tone of spring green. Spring green colors are those with an h code (hue code) between 135 and 165; this color has an h code of 140, putting it within the range of spring green colors on the RGB color wheel, as the sketch below confirms.
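The h code of 140 can be reproduced from an RGB value; the hex value #A3C1AD used below is a commonly quoted figure for Cambridge blue and is an assumption here, not taken from the source:

    import colorsys

    r, g, b = 0xA3 / 255, 0xC1 / 255, 0xAD / 255  # assumed RGB for Cambridge blue
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(h * 360)  # 140.0, within the 135-165 spring-green hue range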
Caribbean green
Adjacent is displayed the color Caribbean green. This is a Crayola color formulated in 1997.
Magic mint
Adjacent is displayed the color magic mint, a light tint of spring green.
The color magic mint is a light tint of the color mint.
Ceramic tiles in a similar color, often with a contrasting black border, were a popular choice for bathroom, kitchen and upmarket hotel swimming pool décor during the 1930s.
This is a Crayola color formulated in 1990 (later retired in 2003).
Mint
The color mint, also known as mint leaf, is a representation of the color of mint.
The first recorded use of mint as a color name in English was in 1920.
Mountain meadow
Displayed adjacent is the color mountain meadow.
Mountain meadow is a Crayola crayon color formulated in 1998.
Persian green
Persian green is a color used in pottery and Persian carpets in Iran.
Other colors associated with Persia include Persian red and Persian blue. The color Persian green is named from the green color of some Persian pottery and is a representation of the color of the mineral malachite. It is a popular color in Iran because the color green symbolizes gardens, nature, heaven, and sanctity. The first recorded use of Persian green as a color name in English was in 1892.
Sea foam green
This is the Crayola version of the above color, a much brighter and lighter shade. It was introduced in 2001.
Shamrock green (Irish green)
Shamrock green is a tone of green that represents the color of shamrocks, a symbol of Ireland.
The first recorded use of shamrock as a color name in English was in the 1820s (exact year uncertain).
This green is also defined as Irish green Pantone 347.
This green is used as the green on the national flag of Ireland.
It is customary in Ireland, Australia, New Zealand, Canada, and the United States to wear this or any other tone of green on St. Patrick's Day, 17 March.
The State of California uses this shade of green of the grass under the bear on their state flag.
The Boston Celtics of the National Basketball Association use this shade for their uniforms, logos, and other memorabilia.
Sap green
Sap green is a green pigment that was traditionally made of ripe buckthorn berries. However, modern colors marketed under this name are usually a blend of other pigments, commonly with a basis of Phthalocyanine Green G. Sap green paint was frequently used on Bob Ross's TV show, The Joy of Painting.
Jade
Jade, also called jade green, is a representation of the color of the gemstone called jade, although the stone itself varies widely in hue.
The color name jade green was first used in Spanish in the form piedra de ijada in 1569.
The first recorded use of jade green as a color name in English was in 1892.
Malachite
Malachite, also called malachite green, is a color that is a representation of the color of the mineral malachite.
The first recorded use of malachite green as a color name in English was in the 1200s (exact year uncertain).
Opal
Displayed adjacent is the color opal.
It is a pale shade of cyan that is reminiscent of the color of an opal gemstone, although as with many gemstones, opals come in a wide variety of colors.
Brunswick green
Brunswick green is a common name for green pigments made from copper compounds, although the name has also been used for other formulations that produce a similar hue, such as mixtures of chrome yellow and Prussian blue. The pigment is named after Braunschweig, Germany (also known as Brunswick in English) where it was first manufactured. It is a deep, dark green, which may vary from intense to very dark, almost black.
The first recorded use of Brunswick green as a color name in English was in 1764. Another name for this color is English green. The first use of English green as a synonym for Brunswick green was in 1923.
Deep Brunswick green is commonly recognized as part of the British racing green spectrum, the national auto racing color of the United Kingdom.
A different color, also called Brunswick green, was the color for passenger locomotives of the grouping and then the nationalized British Railways. There were three shades of these colors, defined under British Standard BS381C – 225, BS381C – 226, and BS381C – 227 (ordered from lightest to darkest). The Brunswick green used by the nationalised British Railways – Western Region for passenger locomotives was BS381C – 227 (RGB 30, 62, 46). RAL 6005 is a close substitute for BS381C – 227. A characteristic of these colors was that they could easily be mixed at different railway locations from whole pots of primary colors, which made it possible to obtain reasonably consistent colors by manual mixing half a century and more ago.
The color used by the Pennsylvania Railroad for locomotives was often called Brunswick green, but officially was termed dark green locomotive enamel (DGLE). This was a shade of green so dark as to be almost black, but which turned greener with age and weathering as the copper compounds further oxidized.
Castleton green
Castleton green is one of the two official colors of Castleton University in Vermont. The official college colors are green (PMS 343) and white. The Castleton University Office of Marketing and Communications created the Castleton colors for web and logo development and maintains technical guidelines, copyright and privacy protections, and the logos and images that developers are asked to follow when using official Castleton logos. If web developers are using green on a university website, they are encouraged to use Castleton green. It is prominently used for representing Castleton's athletic teams, the Castleton Spartans.
Bottle green
Bottle green is a dark shade of green, similar to pine green. It is a representation of the color of green glass bottles.
The first recorded use of bottle green as a color name in English was in 1816.
Bottle green is a color in Prismacolor marker and pencil sets. It is also the color of the uniform of the Police Service of Northern Ireland replacing the Royal Ulster Constabulary's "rifle green" colored uniforms in 2001. It is also the green used in uniforms for South Sydney High School in Sydney.
Bottle green is also the color most associated with guide signs and street name signs in the United States.
Bottle green is also the background color of the Flag of Bangladesh, as defined by the government of Bangladesh. Another name for this color is Bangladesh green.
Dartmouth green
Dartmouth green is the official color of Dartmouth College, adopted in 1866. It was chosen for being the only decent primary color that had not been taken already. It is prominently used as the name of the Dartmouth College athletic team, the Dartmouth Big Green. The Dartmouth athletic teams adopted this new name after the college officially discontinued the use of its unofficial mascot, the Dartmouth Indian, in 1974.
Dartmouth green and white are the main colors of Lithuanian basketball club Žalgiris Kaunas.
GO Transit green
GO green was the color used for the brand of GO Transit, the regional commuter service in the Greater Toronto Area.
Between 1967 and 2013, the brand and color that has adorned each of its trains, buses, and other property generally remained unchanged. It also matched the shade of green used on signs for highways in Ontario. In July 2013, GO Transit updated its look to a two-tone color scheme.
Gotham green
Gotham green is the official color of the New York Jets as of 4 April 2019. The name is a reference to one of the Nicknames of New York City.
Pakistan green
Pakistan green is a shade of dark green, used in web development and graphic design. It is also the background color of the national flag of Pakistan. It is almost identical to the HTML/X11 dark green in sRGB and HSV values.
Sacramento State green
In 2004, California State University, Sacramento rebranded itself as Sacramento State, while keeping the official name as the long form. In the process of rebranding a new logo was selected, and in 2005 it formalized the colors which it would use.
Paris green
Paris green is a color that ranges from pale and vivid blue green to deeper true green. It comes from the inorganic compound copper (II) acetoarsenite and was once a popular pigment in artists' paints.
Spanish green
Spanish green is the color that is called "verde" (the Spanish word for "green") in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm.
UNT green
UNT green is one of three official colors used by the University of North Texas. It is the primary color that appears on branding and promotional material produced by and on behalf of the university.
UP forest green
Adjacent is one of the official colors used by the University of the Philippines, designated as "UP forest green". It is based on the approved color specifications to be used for the seal of the university.
Hooker's green
Hooker's green is a dark green color created by mixing Prussian blue and gamboge. It is displayed adjacent. Hooker's green takes its name from botanical artist William Hooker (1779–1832) who first created it particularly for illustrating leaves.
Aero blue
Aero blue is a fluorescent greenish-cyan color. Aero blue was used, under the name 'rainshower', as the color of one of the Sharpie permanent markers, although the marker color is not as bright. In any case, there is no mechanism for showing fluorescence on a computer screen.
Morning sky
Morning sky, also known as morning blue, is a representation of the color of the morning sky.
The year of the first recorded use of morning blue as a color name in English is unknown.
Feldgrau green
Feldgrau (field grey) was the color of the field uniform of the German Army from 1937 to 1945, and of the East German NVA (National People's Army). Metaphorically, feldgrau was used to refer to the armies of Germany (the Imperial German Army and the Heer (army) component of the Reichswehr and the Wehrmacht).
Ferrocerium
Ferrocerium (also known in Europe as Auermetall) is a synthetic pyrophoric alloy of mischmetal (cerium, lanthanum, neodymium, other trace lanthanides and some iron – about 95% lanthanides and 5% iron) hardened by blending in oxides of iron and/or magnesium. When struck with a harder material, friction produces hot fragments that oxidize rapidly when exposed to the oxygen in the air, producing sparks that can reach temperatures of . The effect is due to the low ignition temperature of cerium, between .
Ferrocerium has many commercial applications, such as the ignition source for lighters, strikers for gas welding and cutting torches, deoxidization in metallurgy, and ferrocerium rods. Because of ferrocerium's ability to ignite in adverse conditions, rods of ferrocerium (also called ferro rods, spark rods, and flint-spark-lighters) are commonly used as an emergency firelighting device in survival kits. The ferrocerium is referred to as a "flint" in this case, as both are used in fire lighting; however, ferrocerium and natural flint operate in mechanically opposite ways.
Discovery
Ferrocerium alloy was invented in 1903 by the Austrian chemist Carl Auer von Welsbach. It takes its name from its two primary components: iron (from the Latin ferrum), and the rare-earth element cerium, which is the most prevalent of the lanthanides in the mixture. Except for the extra iron and magnesium oxides added to harden it, the mixture is approximately the combination found naturally in tailings from thorium mining, which Auer von Welsbach was investigating. The pyrophoric effect is dependent on the brittleness of the alloy and its low autoignition temperature.
Composition
In Auer von Welsbach's first alloy, 30% iron (ferrum) was added to purified cerium, hence the name "ferro-cerium". Two subsequent Auermetalls were developed: the second also included lanthanum to produce brighter sparks, and the third added other heavy metals.
A modern ferrocerium firesteel product is composed of an alloy of rare-earth metals called mischmetal, containing approximately 20.8% iron, 41.8% cerium, about 4.4% each of praseodymium, neodymium, and magnesium, plus 24.2% lanthanum. A variety of other components are added to modify the spark and processing characteristics. Most contemporary flints are hardened with iron oxide and magnesium oxide.
Uses
Ferrocerium is used in fire lighting in conjunction with a striker, similarly to natural flint-and-steel, though ferrocerium takes on the opposite role to the traditional system; instead of a natural flint rock striking tiny iron particles from a firesteel, a striker (which may take the form of a hardened steel wheel) strikes particles of ferrocerium off the "flint". This manual rubbing action creates a spark due to cerium's low ignition temperature between . Any material that is harder than the rod itself may be used to produce sparks; the striker must have a sharp corner, sharp edge, or a knurled surface, but carbon steel is not required. The idea that carbon steel is needed to produce sparks from a ferrocerium rod is an oft-repeated myth, although carbon steel does make the sparks more prominent.
Ferrocerium is most commonly used to start Bunsen burners and oxyacetylene welding torches.
About 700 tons were produced in 2000.
Vector-valued function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range.
Example: Helix
A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as r(t) = f(t) i + g(t) j + h(t) k,
where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation: r(t) = ⟨f(t), g(t), h(t)⟩.
The vector has its tail at the origin and its head at the coordinates evaluated by the function.
The vector shown in the graph to the right is the evaluation of the function near (between and ; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through .
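A sketch of the helix as code, assuming the classic parameterization r(t) = cos(t) i + sin(t) j + t k (the specific coordinate functions are an assumption, since the original figure is not reproduced here):

    import numpy as np

    def r(t):
        # Position vector with tail at the origin and head at (cos t, sin t, t).
        return np.array([np.cos(t), np.sin(t), t])

    for t in np.linspace(0.0, 6.5 * np.pi, 5):
        print(t, r(t))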
In 2D, we can analogously speak about vector-valued functions as r(t) = f(t) i + g(t) j or r(t) = ⟨f(t), g(t)⟩.
Linear case
In the linear case the function can be expressed in terms of matrices: y = A x, where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation), where the function takes the form y = A x + b,
where in addition b is an n × 1 vector of parameters.
The linear case arises often, for example in multiple regression, where for instance the vector ŷ of predicted values of a dependent variable is expressed linearly in terms of a vector β̂ of estimated values of model parameters: ŷ = X β̂,
in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers.
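A minimal sketch of the linear, affine, and regression forms, using NumPy with illustrative shapes and data (none of the values come from the source):

    import numpy as np

    # Affine vector-valued function y = A x + b.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [3.0, 1.0]])      # n x k parameter matrix (here 3 x 2)
    b = np.array([0.5, -1.0, 2.0])  # n x 1 parameter vector
    x = np.array([2.0, 1.0])        # k x 1 input vector
    y = A @ x + b                   # n x 1 output vector

    # Multiple regression: predicted values are X @ beta_hat, where beta_hat
    # is estimated here by ordinary least squares.
    X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y_obs = np.array([0.9, 3.1, 5.0, 7.2])
    beta_hat, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    y_hat = X @ beta_hat            # vector of predicted values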
Parametric representation of a surface
A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters u and v determine the three Cartesian coordinates of any point on the surface: (x, y, z) = (x(u, v), y(u, v), z(u, v)) = r(u, v).
Here r is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation (x1, x2, …, xn) = r(u, v).
Derivative of a three-dimensional vector function
Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if r(t) = f(t) i + g(t) j + h(t) k
is a vector-valued function, then r′(t) = f′(t) i + g′(t) j + h′(t) k.
The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle: v(t) = dr/dt.
Likewise, the derivative of the velocity is the acceleration: a(t) = dv/dt = d²r/dt².
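Componentwise differentiation is easy to verify numerically. The sketch below differentiates the helix r(t) = (cos t, sin t, t) (the same assumed parameterization as earlier) and checks the velocity and acceleration against central finite differences:

    import numpy as np

    def r(t): return np.array([np.cos(t), np.sin(t), t])
    def v(t): return np.array([-np.sin(t), np.cos(t), 1.0])   # dr/dt, the velocity
    def a(t): return np.array([-np.cos(t), -np.sin(t), 0.0])  # d2r/dt2, the acceleration

    t, h = 1.3, 1e-6
    assert np.allclose((r(t + h) - r(t - h)) / (2 * h), v(t), atol=1e-8)
    assert np.allclose((v(t + h) - v(t - h)) / (2 * h), a(t), atol=1e-8)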
Partial derivative
The partial derivative of a vector function a with respect to a scalar variable q is defined as ∂a/∂q = Σi (∂ai/∂q) ei,
where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken.
Ordinary derivative
If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t: da/dt = Σi (dai/dt) ei.
Total derivative
If the vector a is a function of a number of scalar variables qr, and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as da/dt = Σr (∂a/∂qr)(dqr/dt) + ∂a/∂t.
Some authors prefer to use a capital D to indicate the total derivative operator, as in Da/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr.
Reference frames
Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship.
Derivative of a vector function with nonfixed bases
The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is
where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the cross product of the relative angular velocity of the two reference frames and the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is
N da/dt = E da/dt + NωE × a
where NωE is the angular velocity of the reference frame E relative to the reference frame N.
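A minimal numerical check of this two-frame relation in Python with NumPy (the spin rate and the fixed components are arbitrary): the vector below has constant components in a frame E rotating about the z-axis, so its E-frame derivative is zero and its N-frame derivative should equal ω × a:

import numpy as np

w = 0.3                              # arbitrary spin rate of frame E
omega = np.array([0.0, 0.0, w])      # angular velocity of E relative to N

def a_in_N(t):
    # components (1, 2, 0) are fixed in the rotating frame E
    c, s = np.cos(w * t), np.sin(w * t)
    e1 = np.array([c, s, 0.0])       # basis vectors of E expressed in N
    e2 = np.array([-s, c, 0.0])
    return 1.0 * e1 + 2.0 * e2

t, h = 1.7, 1e-6
n_deriv = (a_in_N(t + h) - a_in_N(t - h)) / (2 * h)      # N-frame derivative
print(np.allclose(n_deriv, np.cross(omega, a_in_N(t))))  # True: E-derivative is 0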
One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula
N drR/dt = E drR/dt + NωE × rR
where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,
NvR = EvR + NωE × rR
where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
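A back-of-the-envelope sketch of that substitution in Python (the equatorial position, Earth's rotation rate, and the measured ground velocity are illustrative numbers, not data from the text):

import numpy as np

omega_earth = np.array([0.0, 0.0, 7.292e-5])   # rad/s, Earth's spin in frame N
r_rocket = np.array([6.378e6, 0.0, 0.0])       # m, a point on the equator
v_ground = np.array([0.0, 0.0, 100.0])         # m/s, velocity measured in frame E

v_inertial = v_ground + np.cross(omega_earth, r_rocket)
print(v_inertial)  # the eastward component of ~465 m/s comes from Earth's rotation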
Derivative and vector multiplication
The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q, then
∂(pa)/∂q = (∂p/∂q)a + p(∂a/∂q).
In the case of dot multiplication, for two vectors a and b that are both functions of q,
∂(a·b)/∂q = (∂a/∂q)·b + a·(∂b/∂q).
Similarly, the derivative of the cross product of two vector functions is
∂(a×b)/∂q = (∂a/∂q)×b + a×(∂b/∂q).
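These product rules can be spot-checked numerically; a small Python sketch for the dot-product rule (the two curves and the evaluation point are arbitrary):

import numpy as np

a = lambda t: np.array([t, t**2, 1.0])
b = lambda t: np.array([np.sin(t), np.cos(t), t])

t, h = 0.8, 1e-6
lhs = (a(t + h) @ b(t + h) - a(t - h) @ b(t - h)) / (2 * h)  # d(a·b)/dt
da = (a(t + h) - a(t - h)) / (2 * h)
db = (b(t + h) - b(t - h)) / (2 * h)
print(np.isclose(lhs, da @ b(t) + a(t) @ db))                # True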
Derivative of an n-dimensional vector function
A function f of a real number t with values in the space R^n can be written as f(t) = (f1(t), f2(t), ..., fn(t)). Its derivative equals
f′(t) = (f1′(t), f2′(t), ..., fn′(t)).
If f is a function of several variables, say of t ∈ R^m, then the partial derivatives of the components of f form an n × m matrix, called the Jacobian matrix of f.
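A minimal sketch of the Jacobian as the matrix of componentwise partial derivatives, approximated by central differences in Python (the map from R^2 to R^3 and the helper name are made up):

import numpy as np

def f(t):                        # hypothetical map from R^2 to R^3
    x, y = t
    return np.array([x * y, x + y, x**2])

def jacobian(f, t, h=1e-6):
    t = np.asarray(t, dtype=float)
    J = np.empty((f(t).size, t.size))
    for j in range(t.size):      # one column of partials per input variable
        dt = np.zeros(t.size)
        dt[j] = h
        J[:, j] = (f(t + dt) - f(t - dt)) / (2 * h)
    return J

print(jacobian(f, [1.0, 2.0]))   # rows: components of f; columns: variables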
Infinite-dimensional vector functions
If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function.
Functions with values in a Hilbert space
If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case:
f′(t) = lim_{h→0} (f(t + h) − f(t))/h.
Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ R^n, or even t ∈ Y, where Y is an infinite-dimensional vector space).
N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if
f = (f1, f2, f3, ...)
(i.e., f = f1e1 + f2e2 + f3e3 + ..., where e1, e2, e3, ... is an orthonormal basis of the space X), and f′(t) exists, then
f′(t) = (f1′(t), f2′(t), f3′(t), ...).
However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.
Other infinite-dimensional vector spaces
Most of the above results also hold for other topological vector spaces. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases.
Vector field
| Mathematics | Functions: General | null |
27376057 | https://en.wikipedia.org/wiki/Trapeziidae | Trapeziidae | Trapeziidae is a family of crabs, commonly known as coral crabs. All the species in the family are found in a close symbiosis with cnidarians. They are found across the Indo-Pacific, and can best be identified to the species level by the colour patterns they display. Members of the family Tetraliidae were previously included in the Trapeziidae, but the similarities between the taxa are the result of convergent evolution.
Subfamilies and genera
The World Register of Marine Species lists the following subfamilies and genera:
Calocarcininae Stevcic, 2005
Calocarcinus Calman, 1909
Philippicarcinus Garth & Kim, 1983
Sphenomerides Rathbun, 1897
Quadrellinae Stevcic, 2005
Hexagonalia Galil, 1986
Hexagonaloides Komai, Higashiji & Castro, 2010
Quadrella Dana, 1851
Trapeziinae Miers, 1886
Trapezia Latreille, 1828
| Biology and health sciences | Crabs and hermit crabs | Animals |
4984440 | https://en.wikipedia.org/wiki/Orange%20%28fruit%29 | Orange (fruit) | The orange, also called sweet orange to distinguish it from the bitter orange (Citrus × aurantium), is the fruit of a tree in the family Rutaceae. Botanically, this is the hybrid Citrus × sinensis, between the pomelo (Citrus maxima) and the mandarin orange (Citrus reticulata). The chloroplast genome, and therefore the maternal line, is that of pomelo. There are many related hybrids, including those of mandarins and the sweet orange. The sweet orange has had its full genome sequenced.
The orange originated in a region encompassing Southern China, Northeast India, and Myanmar; the earliest mention of the sweet orange was in Chinese literature in 314 BC. Orange trees are widely grown in tropical and subtropical areas for their sweet fruit. The fruit of the orange tree can be eaten fresh or processed for its juice or fragrant peel. In 2022, 76 million tonnes of oranges were grown worldwide, with Brazil producing 22% of the total, followed by India and China.
Oranges, variously understood, have featured in human culture since ancient times. They first appear in Western art in the Arnolfini Portrait by Jan van Eyck, but they had been depicted in Chinese art centuries earlier, as in Zhao Lingrang's Song dynasty fan painting Yellow Oranges and Green Tangerines. By the 17th century, an orangery had become an item of prestige in Europe, as seen at the Versailles Orangerie. More recently, artists such as Vincent van Gogh, John Sloan, and Henri Matisse included oranges in their paintings.
Description
The orange tree is a relatively small evergreen, flowering tree, with an average height of , although some very old specimens can reach . Its oval leaves, which are alternately arranged, are long and have crenulate margins. Sweet oranges grow in a range of different sizes, and shapes varying from spherical to oblong. Inside and attached to the rind is a porous white tissue, the white, bitter mesocarp or albedo (pith). The orange contains a number of distinct carpels (segments or pigs, botanically the fruits) inside, typically about ten, each delimited by a membrane and containing many juice-filled vesicles and usually a few pips. When unripe, the fruit is green. The grainy irregular rind of the ripe fruit can range from bright orange to yellow-orange, but frequently retains green patches or, under warm climate conditions, remains entirely green. Like all other citrus fruits, the sweet orange is non-climacteric, not ripening off the tree. The Citrus sinensis group is subdivided into four classes with distinct characteristics: common oranges, blood or pigmented oranges, navel oranges, and acidless oranges. The fruit is a hesperidium, a modified berry; it is covered by a rind formed by a rugged thickening of the ovary wall.
History
Hybrid origins
Citrus trees are angiosperms, and most species are almost entirely interfertile. This includes grapefruits, lemons, limes, oranges, and many citrus hybrids. As the interfertility of oranges and other citrus has produced numerous hybrids and cultivars, and bud mutations have also been selected, citrus taxonomy has proven difficult.
The sweet orange, Citrus x sinensis, is not a wild fruit, but arose in domestication in East Asia. It originated in a region encompassing Southern China, Northeast India, and Myanmar.
The fruit was created as a cross between a non-pure mandarin orange and a hybrid pomelo that had a substantial mandarin component. Since its chloroplast DNA is that of pomelo, it was likely the hybrid pomelo, perhaps a pomelo BC1 backcross, that was the maternal parent of the first orange. Based on genomic analysis, the relative proportions of the ancestral species in the sweet orange are approximately 42% pomelo and 58% mandarin. All varieties of the sweet orange descend from this prototype cross, differing only by mutations selected for during agricultural propagation. Sweet oranges have a distinct origin from the bitter orange, which arose independently, perhaps in the wild, from a cross between pure mandarin and pomelo parents.
Sweet oranges have in turn given rise to many further hybrids including the grapefruit, which arose from a sweet orange x pomelo backcross. Spontaneous and engineered backcrosses between the sweet orange and mandarin oranges or tangerines have produced the clementine and murcott. The ambersweet is a complex sweet orange x (Orlando tangelo x clementine) hybrid. The citranges are a group of sweet orange x trifoliate orange (Citrus trifoliata) hybrids.
Arab Agricultural Revolution
In Europe, the Moors introduced citrus fruits including the bitter orange, lemon, and lime to Al-Andalus in the Iberian Peninsula during the Arab Agricultural Revolution. Large-scale cultivation started in the 10th century, as evidenced by complex irrigation techniques specifically adapted to support orange orchards. Citrus fruits—among them the bitter orange—were introduced to Sicily in the 9th century during the period of the Emirate of Sicily, but the sweet orange was unknown there until the late 15th century or the beginnings of the 16th century, when Italian and Portuguese merchants brought orange trees into the Mediterranean area.
Spread across Europe
Shortly afterward, the sweet orange was adopted as an edible fruit. It was considered a luxury food grown by wealthy people in private conservatories, called orangeries. By 1646, the sweet orange was well known throughout Europe; it went on to become the most often cultivated of all fruit trees. Louis XIV of France had a great love of orange trees and built the grandest of all royal Orangeries at the Palace of Versailles. At Versailles, potted orange trees in solid silver tubs were placed throughout the rooms of the palace, while the Orangerie allowed year-round cultivation of the fruit to supply the court. When Louis condemned his finance minister, Nicolas Fouquet, in 1664, part of the treasures that he confiscated were over 1,000 orange trees from Fouquet's estate at Vaux-le-Vicomte.
To the Americas
Spanish travelers introduced the sweet orange to the American continent. On his second voyage in 1493, Christopher Columbus may have planted the fruit on Hispaniola. Subsequent expeditions in the mid-1500s brought sweet oranges to South America and Mexico, and to Florida in 1565, when Pedro Menéndez de Avilés founded St Augustine. Spanish missionaries brought orange trees to Arizona between 1707 and 1710, while the Franciscans did the same in San Diego, California, in 1769. Archibald Menzies, the botanist on the Vancouver Expedition, collected orange seeds in South Africa, raised the seedlings on board, and gave them to several Hawaiian chiefs in 1792. The sweet orange came to be grown across the Hawaiian Islands, but its cultivation stopped after the arrival of the Mediterranean fruit fly in the early 1900s. Florida farmers obtained seeds from New Orleans around 1872, after which orange groves were established by grafting the sweet orange on to sour orange rootstocks.
Etymology
The word "orange" derives ultimately from Proto-Dravidian or Tamil (). From there the word entered Sanskrit (), meaning 'orange tree'. The Sanskrit word reached European languages through Persian () and its Arabic derivative ().
The word entered Late Middle English in the 14th century via Old French . Other forms include Old Provençal , Italian arancia, formerly narancia. In several languages, the initial n present in earlier forms of the word dropped off because it may have been mistaken as part of an indefinite article ending in an n sound. In French, for example, may have been heard as . This linguistic change is called juncture loss. The color was named after the fruit, with the first recorded use of orange as a color name in English in 1512.
Composition
Nutrition
Orange flesh is 87% water, 12% carbohydrates, 1% protein, and contains negligible fat (see table). As a 100-gram reference amount, orange flesh provides 47 calories, and is a rich source of vitamin C, providing 64% of the Daily Value. No other micronutrients are present in significant amounts (see table).
Phytochemicals
Oranges contain diverse phytochemicals, including carotenoids (beta-carotene, lutein and beta-cryptoxanthin), flavonoids (e.g. naringenin) and numerous volatile organic compounds producing orange aroma, including aldehydes, esters, terpenes, alcohols, and ketones. Orange juice contains only about one-fifth the citric acid of lime or lemon juice (which contain about 47 g/L).
Taste
The taste of oranges is determined mainly by the ratio of sugars to acids, whereas orange aroma derives from volatile organic compounds, including alcohols, aldehydes, ketones, terpenes, and esters. Bitter limonoid compounds, such as limonin, decrease gradually during development, whereas volatile aroma compounds tend to peak in mid- to late-season development. Taste quality tends to improve later in harvests when there is a higher sugar/acid ratio with less bitterness. As a citrus fruit, the orange is acidic, with pH levels ranging from 2.9 to 4.0. Taste and aroma vary according to genetic background, environmental conditions during development, ripeness at harvest, postharvest conditions, and storage duration.
Cultivars
Common
Common oranges (also called "white", "round", or "blond" oranges) constitute about two-thirds of all orange production. The majority of this crop is used for juice.
Valencia
The Valencia orange is a late-season fruit; it is popular when navel oranges are out of season. Thomas Rivers, an English nurseryman, imported this variety from the Azores and catalogued it in 1865 under the name Excelsior. Around 1870, he provided trees to S. B. Parsons, a Long Island nurseryman, who in turn sold them to E. H. Hart of Federal Point, Florida.
Navel
Navel oranges have a characteristic second fruit at the apex, which protrudes slightly like a human navel. They are mainly an eating fruit, as their thicker skin makes them easy to peel, they are less juicy and their bitterness makes them less suitable for juice. The parent variety was probably the Portuguese navel orange or Umbigo. The cultivar rapidly spread to other countries, but being seedless it had to be propagated by cutting and grafting.
The Cara cara orange is a type of navel orange grown mainly in Venezuela, South Africa and California's San Joaquin Valley. It is sweet and low in acid, with distinctively pinkish red flesh. It was discovered at the Hacienda Cara Cara in Valencia, Venezuela, in 1976.
Blood
Blood oranges, with an intense red coloration inside, are widely grown around the Mediterranean; there are several cultivars. The development of the red color requires cool nights. The redness is mainly due to the anthocyanin pigment chrysanthemin (cyanidin 3-O-glucoside).
Acidless
Acidless oranges are an early-season fruit with very low levels of acid. They also are called "sweet" oranges in the United States, with similar names in other countries: douce in France, sucrena in Spain, dolce or maltese in Italy, meski in North Africa and the Near East (where they are especially popular), succari in Egypt, and lima in Brazil. The lack of acid, which protects orange juice against spoilage in other groups, renders them generally unfit for processing as juice, so they are primarily eaten. They remain profitable in areas of local consumption, but rapid spoilage renders them unsuitable for export to major population centres of Europe, Asia, or the United States.
Cultivation
Climate
Like most citrus plants, oranges do well under moderate temperatures—between —and require considerable amounts of sunshine and water. They are principally grown in tropical and subtropical regions.
As oranges are sensitive to frost, farmers have developed methods to protect the trees from frost damage. A common process is to spray the trees with water so as to cover them with a thin layer of ice, insulating them even if air temperatures drop far lower. This practice, however, offers protection only for a very short time. Another procedure involves burning fuel oil in smudge pots put between the trees. These burn with a great deal of particulate emission, so condensation of water vapor on the particulate soot prevents condensation on plants and raises the air temperature very slightly. Smudge pots were developed after a disastrous freeze in southern California in January 1913 destroyed a whole crop.
Propagation
Commercially grown orange trees are propagated asexually by grafting a mature cultivar onto a suitable seedling rootstock to ensure the same yield, identical fruit characteristics, and resistance to diseases throughout the years. Propagation involves two stages: first, a rootstock is grown from seed. Then, when it is approximately one year old, the leafy top is cut off and a bud, taken from a specific scion variety, is grafted into its bark. The scion is what determines the variety of orange, while the rootstock makes the tree resistant to pests and diseases and adaptable to specific soil and climatic conditions. Thus, rootstocks influence the rate of growth and have an effect on fruit yield and quality. Rootstocks must be compatible with the variety inserted into them because otherwise, the tree may decline, be less productive, or die. Among the advantages of grafting are that trees mature uniformly and begin to bear fruit earlier than those reproduced by seeds (3 to 4 years in contrast with 6 to 7 years), and that farmers can combine the best attributes of a scion with those of a rootstock.
Harvest
Canopy-shaking mechanical harvesters are being used increasingly in Florida to harvest oranges. Current canopy shaker machines use a series of six-to-seven-foot-long tines to shake the tree canopy at a relatively constant stroke and frequency. Oranges are picked once they are pale orange.
Degreening
Oranges must be mature when harvested. In the United States, laws forbid harvesting immature fruit for human consumption in Texas, Arizona, California and Florida. Ripe oranges, however, often have some green or yellow-green color in the skin. Ethylene gas is used to turn green skin to orange. This process is known as "degreening", "gassing", "sweating", or "curing". Oranges are non-climacteric fruits and cannot ripen internally in response to ethylene gas after harvesting, though they will de-green externally.
Storage
Commercially, oranges can be stored by refrigeration in controlled-atmosphere chambers for up to twelve weeks after harvest. Storage life ultimately depends on cultivar, maturity, pre-harvest conditions, and handling. At home, oranges have a shelf life of about one month, and are best stored loose.
Pests and diseases
Pests
The first major pest that attacked orange trees in the United States was the cottony cushion scale (Icerya purchasi), imported from Australia to California in 1868. Within 20 years, it wiped out the citrus orchards around Los Angeles, and limited orange growth throughout California. In 1888, the USDA sent Alfred Koebele to Australia to study this scale insect in its native habitat. He brought back with him specimens of an Australian ladybird, Novius cardinalis (the Vedalia beetle), and within a decade the pest was controlled. This was one of the first successful applications of biological pest control on any crop. The orange dog caterpillar of the giant swallowtail butterfly, Papilio cresphontes, is a pest of citrus plantations in North America, where it eats new foliage and can defoliate young trees.
Diseases
Citrus greening disease, caused by the bacterium Liberibacter asiaticus, has been the most serious threat to orange production since 2010. It is characterized by streaks of different shades on the leaves, and deformed, poorly colored, unsavory fruit. In areas where the disease is endemic, citrus trees live for only five to eight years and never bear fruit suitable for consumption. In the western hemisphere, the disease was discovered in Florida in 1998, where it has attacked nearly all the trees ever since. It was reported in Brazil by Fundecitrus Brasil in 2004. As of 2009, 0.87% of the trees in Brazil's main orange growing areas (São Paulo and Minas Gerais) showed symptoms of greening, an increase of 49% over 2008.
The disease is spread primarily by psyllid plant lice such as the Asian citrus psyllid (Diaphorina citri Kuwayama), an efficient vector of the bacterium. Foliar insecticides reduce psyllid populations for a short time, but also suppress beneficial predatory ladybird beetles. Soil application of aldicarb provided limited control of Asian citrus psyllid, while drenches of imidacloprid to young trees were effective for two months or more. Management of citrus greening disease requires an integrated approach that includes use of clean stock, elimination of inoculum via voluntary and regulatory means, use of pesticides to control psyllid vectors in the citrus crop, and biological control of the vectors in non-crop reservoirs.
Greasy spot, a fungal disease caused by the ascomycete Mycosphaerella citri, produces leaf spots and premature defoliation, thus reducing the tree's vigour and yield. Ascospores of M. citri are generated in pseudothecia in decomposing fallen leaves.
Production
In 2022, world production of oranges was 76 million tonnes, led by Brazil with 22% of the total, followed by India, China, and Mexico.
The United States Department of Agriculture has established grades for Florida oranges, primarily for oranges sold as fresh fruit. In the United States, groves are located mainly in Florida, California, and Texas. The majority of California's crop is sold as fresh fruit, whereas Florida's oranges are destined for juice products. The Indian River area of Florida produces high quality juice, which is often sold fresh and blended with juice from other regions, because Indian River trees yield sweet oranges but in relatively small quantities.
Culinary use
Dessert fruit and juice
Oranges, whose flavor may vary from sweet to sour, are commonly peeled and eaten fresh as a dessert. Orange juice is obtained by squeezing the fruit on a special tool (a juicer or squeezer) and collecting the juice in a tray or tank underneath. This can be made at home or, on a much larger scale, industrially. Orange juice is a traded commodity on the Intercontinental Exchange. Frozen orange juice concentrate is made from freshly squeezed and filtered juice.
Marmalade
Oranges are made into jam in many countries; in Britain, bitter Seville oranges are used to make marmalade. Almost the whole Spanish production is exported to Britain for this purpose. The entire fruit is cut up and boiled with sugar; the pith contributes pectin, which helps the marmalade to set. The first recipe was by an Englishwoman, Mary Kettilby, in 1714. Pieces of peel were first added by Janet Keiller of Dundee in the 1790s, contributing a distinctively bitter taste. Orange peel contains the bitter substances limonene and naringin.
Extracts
Zest is scraped from the coloured outer part of the peel, and used as a flavoring and garnish in desserts and cocktails.
Sweet orange oil is a by-product of the juice industry produced by pressing the peel. It is used for flavoring food and drinks; it is employed in the perfume industry and in aromatherapy for its fragrance. The oil consists of approximately 90% D-limonene, a solvent used in household chemicals such as wood conditioners for furniture and—along with other citrus oils—detergents and hand cleansers. It is an efficient cleaning agent with a pleasant smell, promoted for being environmentally friendly and therefore preferable to petrochemicals. It is, however, irritating to the skin and toxic to aquatic life.
In human culture
Oranges have featured in human culture since ancient times. The earliest mention of the sweet orange in Chinese literature dates from 314 BC. Larissa Pham, in The Paris Review, notes that sweet oranges were available in China much earlier than in the West. She writes that Zhao Lingrang's fan painting Yellow Oranges and Green Tangerines pays attention not to the fruit's colour but the shape of the fruit-laden trees, and that Su Shi's poem on the same subject runs "You must remember, / the best scenery of the year, / Is exactly now, / when oranges turn yellow and tangerines green."
The scholar Cristina Mazzoni has examined the multiple uses of the fruit in Italian art and literature, from Catherine of Siena's sending of candied oranges to Pope Urban, to Sandro Botticelli's setting of his painting Primavera in an orange grove. She notes that oranges symbolised desire and wealth on the one hand, and deformity on the other, while in the fairy-stories of Sicily, they have magical properties. Pham comments that the Arnolfini Portrait by Jan van Eyck contains in a small detail one of the first representations of oranges in Western art, the costly fruit perhaps traded by the merchant Arnolfini himself. By the 17th century, orangeries were added to great houses in Europe, both to enable the fruit to be grown locally and for prestige, as seen in the Versailles Orangerie completed in 1686.
The Dutch Post-Impressionist artist Vincent van Gogh portrayed oranges in paintings such as his 1889 Still Life of Oranges and Lemons with Blue Gloves and his 1890 A Child with Orange, both works late in his life. The American artist of the Ashcan School, John Sloan, made a 1935 painting Blond Nude with Orange, Blue Couch, while Henri Matisse's last painting was his 1951 Nude with Oranges; after that he only made cut-outs.
| Biology and health sciences | Sapindales | null |
4984756 | https://en.wikipedia.org/wiki/Coordination%20sphere | Coordination sphere | In coordination chemistry, the first coordination sphere refers to the array of molecules and ions (the ligands) directly attached to the central metal atom. The second coordination sphere consists of molecules and ions that are attached in various ways to the first coordination sphere.
First coordination sphere
The first coordination sphere refers to the molecules that are attached directly to the metal. The interactions between the first and second coordination spheres usually involve hydrogen-bonding. For charged complexes, ion pairing is important.
In hexamminecobalt(III) chloride ([Co(NH3)6]Cl3), the cobalt cation plus the 6 ammonia ligands comprise the first coordination sphere. The coordination sphere of this ion thus consists of a central MN6 core "decorated" by 18 N−H bonds that radiate outwards.
Second coordination sphere
Metal ions can be described as being surrounded by two concentric coordination spheres, the first and second. Beyond the second coordination sphere, the solvent molecules behave more like "bulk solvent." Simulation of the second coordination sphere is of interest in computational chemistry. The second coordination sphere can consist of ions (especially in charged complexes), molecules (especially those that hydrogen bond to ligands in the first coordination sphere) and portions of a ligand backbone. Compared to the first coordination sphere, the second coordination sphere has a less direct influence on the reactivity and chemical properties of the metal complex. Nonetheless the second coordination sphere is relevant to understanding reactions of the metal complex, including the mechanisms of ligand exchange and catalysis.
Role in catalysis
Mechanisms of metalloproteins often invoke modulation of the second coordination sphere by the protein.
Role in mechanistic inorganic chemistry
The exchange of ligands between the first and the second coordination spheres is the first step in ligand substitution reactions. In associative ligand substitution, the entering nucleophile resides in the second coordination sphere. These effects are relevant to practical applications such as contrast agents used in MRI.
The energetics of inner sphere electron transfer reactions are discussed in terms of second coordination sphere. Some proton coupled electron transfer reactions involve atom transfer between the second coordination spheres of the reactants:
[Fe*(H2O)6]2+ + [Fe(H2O)5(OH)]2+ → [Fe*(H2O)5(OH)]2+ + [Fe(H2O)6]2+
Role in spectroscopy
Solvent effects on colors and stability are often attributable to changes in the second coordination sphere. Such effects can be pronounced in complexes where the ligands in the first coordination sphere are strong hydrogen-bond donors and acceptors, e.g. respectively [Co(NH3)6]3+ and [Fe(CN)6]3−. Crown-ethers bind to polyamine complexes through their second coordination sphere. Polyammonium cations bind to the nitrogen centres of cyanometallates.
Role in supramolecular chemistry
Macrocyclic molecules such as cyclodextrins often act as the second coordination sphere for metal complexes.
| Physical sciences | Bond structure | Chemistry |
25512250 | https://en.wikipedia.org/wiki/Truth%20table | Truth table | A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, Boolean functions, and propositional calculus—which sets out the functional values of logical expressions on each of their functional arguments, that is, for each combination of values taken by their logical variables. In particular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is, logically valid.
A truth table has one column for each input variable (for example, A and B), and one final column showing all of the possible results of the logical operation that the table represents (for example, A XOR B). Each row of the truth table contains one possible configuration of the input variables (for instance, A=true, B=false), and the result of the operation for those values.
A truth table is a structured representation that presents all possible combinations of truth values for the input variables of a Boolean function and their corresponding output values. A function f from A to F is a special relation, a subset of A×F, which simply means that f can be listed as a list of input-output pairs. Clearly, for the Boolean functions, the output belongs to a binary set, i.e. F = {0, 1}. For an n-ary Boolean function, the inputs come from a domain that is itself a Cartesian product of binary sets corresponding to the input Boolean variables. For example for a binary function, f(A, B), the domain of f is A×B, which can be listed as: A×B = {(A = 0, B = 0), (A = 0, B = 1), (A = 1, B = 0), (A = 1, B = 1)}. Each element in the domain represents a combination of input values for the variables A and B. These combinations now can be combined with the output of the function corresponding to that combination, thus forming the set of input-output pairs as a special relation that is a subset of A×F. For a relation to be a function, the special requirement is that each element of the domain of the function must be mapped to one and only one member of the codomain. Thus, the function f itself can be listed as: f = {((0, 0), f0), ((0, 1), f1), ((1, 0), f2), ((1, 1), f3)}, where f0, f1, f2, and f3 are each Boolean, 0 or 1, values as members of the codomain {0, 1}, as the outputs corresponding to the member of the domain, respectively. Rather than a list (set) given above, the truth table then presents these input-output pairs in a tabular format, in which each row corresponds to a member of the domain paired with its corresponding output value, 0 or 1. Of course, for the Boolean functions, we do not have to list all the members of the domain with their images in the codomain; we can simply list the mappings that map the member to "1", because all the others will have to be mapped to "0" automatically (that leads us to the minterms idea).
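As a concrete illustration, here is a minimal Python sketch of a Boolean function listed as input-output pairs (the particular function is an arbitrary choice):

from itertools import product

f = lambda a, b: int(a and not b)   # an arbitrary two-variable Boolean function

# the function as a set of input-output pairs, i.e. its truth table
table = {(a, b): f(a, b) for a, b in product((0, 1), repeat=2)}
print(table)

# listing only the inputs mapped to 1 determines f completely (the minterms idea)
print([inputs for inputs, out in table.items() if out == 1])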
Ludwig Wittgenstein is generally credited with inventing and popularizing the truth table in his Tractatus Logico-Philosophicus, which was completed in 1918 and published in 1921. Such a system was also independently proposed in 1921 by Emil Leon Post.
History
Irving Anellis's research shows that C.S. Peirce appears to be the earliest logician (in 1883) to devise a truth table matrix.
From the summary of Anellis's paper:
In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell's 1912 lecture on "The Philosophy of Logical Atomism", truth table matrices. The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883–84 in connection with the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.
Applications
Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that is logically equivalent to .
Truth table for most commonly used logical operators
Here is a truth table that gives definitions of the 7 most commonly used out of the 16 possible truth functions of two Boolean variables P and Q:
Condensed truth tables for binary operators
For binary operators, a condensed form of truth table is also used, where the row headings and the column headings specify the operands and the table cells specify the result. For example, Boolean logic uses this condensed truth table notation:
This notation is useful especially if the operations are commutative, although one can additionally specify that the rows are the first operand and the columns are the second operand. This condensed notation is particularly useful in discussing multi-valued extensions of logic, as it significantly cuts down on combinatoric explosion of the number of rows otherwise needed. It also provides for quickly recognizable characteristic "shape" of the distribution of the values in the table which can assist the reader in grasping the rules more quickly.
Truth tables in digital logic
Truth tables are also used to specify the function of hardware look-up tables (LUTs) in digital logic circuitry. For an n-input LUT, the truth table will have 2^n values (or rows in the above tabular format), completely specifying a Boolean function for the LUT. By representing each Boolean value as a bit in a binary number, truth table values can be efficiently encoded as integer values in electronic design automation (EDA) software. For example, a 32-bit integer can encode the truth table for a LUT with up to 5 inputs.
When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a bit index k based on the input values of the LUT, in which case the LUT's output value is the kth bit of the integer. For example, to evaluate the output value of a LUT given an array of n Boolean input values, the bit index of the truth table's output value can be computed as follows: if the ith input is true, let Vi = 1, else let Vi = 0, for i = 0, ..., n−1. Then the kth bit of the binary representation of the truth table is the LUT's output value, where k = V0·2^0 + V1·2^1 + ... + Vn−1·2^(n−1).
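A minimal sketch of this bit-index lookup in Python (the helper name is invented; the integer 0b1000 encodes the 2-input AND table, whose rows k = 0 to 3 have outputs 0, 0, 0, 1):

def lut_output(table_int, inputs):
    # bit index k = sum of Vi * 2**i over the Boolean inputs Vi
    k = sum(int(v) << i for i, v in enumerate(inputs))
    return (table_int >> k) & 1

print([lut_output(0b1000, (a, b)) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]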
Truth tables are a simple and straightforward way to encode Boolean functions, however given the exponential growth in size as the number of inputs increase, they are not suitable for functions with a large number of inputs. Other representations which are more memory efficient are text equations and binary decision diagrams.
Applications of truth tables in digital electronics
In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be used to reduce basic Boolean operations to simple correlations of inputs to outputs, without the use of logic gates or code. For example, a binary addition can be represented with the truth table:
where A is the first operand, B is the second operand, C is the carry digit, and R is the result.
This truth table is read left to right:
Value pair (A, B) equals value pair (C, R).
Or for this example, A plus B equal result R, with the Carry C.
This table does not describe the logic operations necessary to implement this operation, rather it simply specifies the function of inputs to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types of values one can have on the inputs increases, the size of the truth table will increase.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or one. The number of combinations of these two values is 2×2, or four. So the result is four possible outputs of C and R. If one were to use base 3, the size would increase to 3×3, or nine possible outputs.
The first "addition" example above is called a half-adder. A full-adder is when the carry from the previous operation is provided as input to the next adder. Thus, a truth table of eight rows would be needed to describe a full adder's logic:
A B C* | C R
0 0 0 | 0 0
0 1 0 | 0 1
1 0 0 | 0 1
1 1 0 | 1 0
0 0 1 | 0 1
0 1 1 | 1 0
1 0 1 | 1 0
1 1 1 | 1 1
Same as previous, but:
C* = Carry from previous adder
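The eight rows above can be regenerated mechanically; a short Python sketch, with the loop order chosen to reproduce the row ordering of the table:

print("A B C* | C R")
for cin in (0, 1):          # C*: carry from the previous adder
    for a in (0, 1):
        for b in (0, 1):
            carry, result = divmod(a + b + cin, 2)
            print(a, b, cin, "|", carry, result)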
Methods of writing truth tables
Regarding the guide columns to the left of a table, which represent propositional variables, different authors have different recommendations about how to fill them in, although this is of no logical significance.
Alternating method
Lee Archie, a professor at Lander University, recommends this procedure, which is commonly followed in published truth-tables:
Write out the variables (corresponding to the statements) in alphabetical order.
The number of lines needed is 2^n, where n is the number of variables. (E.g., with three variables, 2^3 = 8; see the Python sketch after this list.)
Start in the right-hand column and alternate T's and F's until you run out of lines.
Then move left to the next column and alternate pairs of T's and F's until you run out of lines.
Then continue to the next left-hand column and double the numbers of T's and F's until completed.
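A compact Python sketch of this alternating fill (the helper name is invented): column j from the left alternates blocks of 2^(n−1−j) T's and F's, which reproduces the procedure above.

def truth_assignments(n):
    # row i, column j: T while the corresponding bit of i is 0, else F
    return [tuple('T' if not (i >> (n - 1 - j)) & 1 else 'F'
                  for j in range(n))
            for i in range(2**n)]

for row in truth_assignments(3):   # rightmost column alternates T, F, T, F, ...
    print(*row)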
This method results in truth-tables such as the following table for "P ⊃ (Q ∨ R ⊃ (R ⊃ ¬P))", produced by Stephen Cole Kleene:
Combinatorial method
Colin Howson, on the other hand, believes that "it is a good practical rule" to do the following: to start with all Ts, then all the ways (three) two Ts can be combined with one F, then all the ways (three) one T can be combined with two Fs, and then finish with all Fs. If a compound is built up from n distinct sentence letters, its truth table will have 2^n rows, since there are two ways of assigning T or F to the first letter, and for each of these there will be two ways of assigning T or F to the second, and for each of these there will be two ways of assigning T or F to the third, and so on, giving 2·2·2 ..., n times, which is equal to 2^n.
This results in truth tables like this table "showing that (A→C)∧(B→C) and (A∨B)→C are truth-functionally equivalent", modeled after a table produced by Howson:
Size of truth tables
If there are n input variables then there are 2^n possible combinations of their truth values. A given function may produce true or false for each combination, so the number of different functions of n variables is the double exponential 2^(2^n).
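A quick Python check of this count for n = 2, identifying each function with its tuple of outputs, one per input row:

from itertools import product

n = 2
rows = list(product((0, 1), repeat=n))           # 2^n input combinations
functions = list(product((0, 1), repeat=2**n))   # one output per row
print(len(rows), len(functions))                 # 4 and 16 = 2^(2^2)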
Truth tables for functions of three or more variables are rarely given.
Function tables
It can be useful to have the output of a truth table expressed as a function of some variable values, instead of just a literal true or false value. These may be called "function tables" to differentiate them from the more general "truth tables". For example, one input value may be used with an XOR gate to conditionally invert a second value: when the first input is false, the output equals the second input, and when the first input is true, the output is the negation of the second input. The function table for this would look like:
Similarly, a 4-to-1 multiplexer with select inputs and , data inputs , , and , and output (as displayed in the image) would have this function table:
Sentential operator truth tables
Overview table
Here is an extended truth table giving definitions of all sixteen possible truth functions of two Boolean variables p and q:
{| class="wikitable" style="margin:left margin:1em auto 1em auto; text-align:center;"
|-
! p || q
! style="background:black" |
!F0||NOR1 ||↚2||¬p3||NIMPLY4||¬q5||XOR6||NAND7||AND8||XNOR9||q10||IMPLY11 || p12||←13||OR14|| T15
|-
| T || T
| style="background:black" | || F || F || F || F || F || F || F || F || T || T || T || T || T || T || T || T
|-
| T || F
| style="background:black" | || F || F || F || F || T || T || T || T || F || F || F || F || T || T || T || T
|-
| F || T
| style="background:black" | || F || F || T || T || F || F || T || T || F || F || T || T || F || F || T || T
|-
| F || F
| style="background:black" | || F || T || F || T || F || T || F || T || F || T || F || T || F || T || F || T
|-
| colspan="19" style="background:black" |
|-
| colspan="2" style="background: #;" |
| style="background:black" | || ✓ || ✓ || || || || || ✓ || ✓ || ✓ || ✓ || || || || || ✓ || ✓
|-
| colspan="2" style="background: #;" |
| style="background:black" | || ✓ || || || || || || ✓ || || ✓ || ✓ || ✓ || || ✓ || || ✓ || ✓
|-
| colspan="2" style="background: #;" |
| style="background:black" | || F0 || NOR1 || ↛4 || ¬q5 || ↚2 || ¬p3 || XOR6 || NAND7 || AND8 || XNOR9 || p12 || ←13 || q10 || →11 || OR14 || T15
|-
| colspan="2" style="background: #;" |
| style="background:black" | || T15 || OR14 || ←13 || p12 || IMPLY11 || q10 || XNOR9 || AND8 || NAND7 || XOR6 || ¬q5 || NIMPLY4 || ¬p3 || ↚2 || NOR1|| F0
|-
| colspan="2" style="background: #;" |
| style="background:black" | || T15 || NAND7 || →11 || ¬p3 || ←13 || ¬q5 || XNOR9 || NOR1 || OR14 || XOR6 || q10 || ↚2 || p12 || ↛4 || AND8|| F0
|-
| colspan="2" style="background: #;" |
| style="background:black" | || || || F || || || || F || || T || T || T,F || T || || || F ||
|-
| colspan="2" style="background: #;" |
| style="background:black" | || || || || || F || || F || || T || T || || || T,F || T || F ||
|}
where
T = true.
F = false.
The superscripts 0 to 15 are the numbers resulting from reading the four truth values as a binary number with F = 0 and T = 1.
The Com row indicates whether an operator, op, is commutative - P op Q = Q op P.
The Assoc row indicates whether an operator, op, is associative - (P op Q) op R = P op (Q op R).
The Adj row shows the operator op2 such that P op Q = Q op2 P.
The Neg row shows the operator op2 such that P op Q = ¬(P op2 Q).
The Dual row shows the dual operation obtained by interchanging T with F, and AND with OR.
The L id row shows the operator's left identities if it has any - values I such that I op Q = Q.
The R id row shows the operator's right identities if it has any - values I such that P op I = P.
Wittgenstein table
In proposition 5.101 of the Tractatus Logico-Philosophicus, Wittgenstein listed the table above as follows:
{| class="wikitable" style="margin:left margin:1em auto 1em auto; text-align:left;"
|-
! scope=col |
! scope=col | Truth values
! scope=col |
! scope=col colspan="2" | Operator
! scope=col | Operation name
! scope=col | Tractatus
|-
| 0 ||(F F F F)(p, q)|| ⊥ || false || Opq || Contradiction || p and not p; and q and not q
|-
| 1 ||(F F F T)(p, q)|| NOR || p ↓ q || Xpq || Logical NOR || neither p nor q
|-
| 2 ||(F F T F)(p, q)|| ↚ || p ↚ q || Mpq || Converse nonimplication ||q and not p
|-
| 3 ||(F F T T)(p, q)|| ¬p, ~p || ¬p || Np, Fpq || Negation || not p
|-
| 4 ||(F T F F)(p, q)|| ↛ || p ↛ q
|| Lpq || Material nonimplication ||p and not q
|-
| 5 ||(F T F T)(p, q)|| ¬q, ~q || ¬q || Nq, Gpq || Negation || not q
|-
| 6 ||(F T T F)(p, q)|| XOR ||p ⊕ q || Jpq || Exclusive disjunction || p or q, but not both
|-
| 7 || (F T T T)(p, q)|| NAND || p ↑ q || Dpq || Logical NAND || not both p and q
|-
| 8 || (T F F F)(p, q)|| AND || p ∧ q || Kpq || Logical conjunction || p and q
|-
| 9 || (T F F T)(p, q)|| XNOR || p iff q || Epq || Logical biconditional || if p then q; and if q then p
|-
| 10 || (T F T F)(p, q)|| q || q || Hpq || Projection function || q
|-
| 11 || (T F T T)(p, q)|| p → q || if p then q || Cpq || Material implication || if p then q
|-
| 12 || (T T F F)(p, q)|| p || p || Ipq || Projection function || p
|-
| 13 || (T T F T)(p, q)|| p ← q || if q then p || Bpq || Converse implication || if q then p
|-
| 14 || (T T T F)(p, q)|| OR || p ∨ q || Apq || Logical disjunction || p or q
|-
| 15 || (T T T T)(p, q)|| ⊤ || true || Vpq || Tautology || if p then p; and if q then q
|}
The truth table represented by each row is obtained by appending the sequence given in the Truth values row to the table
{| class="wikitable" style="margin:left margin:1em auto 1em auto; text-align:left;"
!scope=row | p
| T || T || F || F
|-
!scope=row | q
| T || F || T || F
|}
For example, the table
{| class="wikitable" style="margin:left margin:1em auto 1em auto; text-align:left;"
!scope=row | p
| T || T || F || F
|-
!scope=row | q
| T || F || T || F
|-
!scope=row | 11
| T || F || T || T
|}
represents the truth table for Material implication. Logical operators can also be visualized using Venn diagrams.
Nullary operations
There are 2 nullary operations:
Always true
Never true, unary falsum
Logical true
The output value is always true, because this operator has zero operands and therefore no input values
Logical false
The output value is never true: that is, always false, because this operator has zero operands and therefore no input values
Unary operations
There are 2 unary operations:
Unary identity
Unary negation
Logical identity
Logical identity is an operation on one logical value p, for which the output value remains p.
The truth table for the logical identity operator is as follows:
Logical negation
Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true if its operand is false and a value of false if its operand is true.
The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:
Binary operations
There are 16 possible truth functions of two binary variables; each operator has its own name.
Logical conjunction (AND)
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if both of its operands are true.
The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p q) is as follows:
In ordinary language terms, if both p and q are true, then the conjunction p ∧ q is true. For all other assignments of logical values to p and to q the conjunction p ∧ q is false.
It can also be said that if p, then p ∧ q is q, otherwise p ∧ q is p.
Logical disjunction (OR)
Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if at least one of its operands is true.
The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:
Stated in English, if p, then p ∨ q is p, otherwise p ∨ q is q.
Logical implication
Logical implication and the material conditional are both associated with an operation on two logical values, typically the values of two propositions, which produces a value of false if the first operand is true and the second operand is false, and a value of true otherwise.
The truth table associated with the logical implication p implies q (symbolized as p ⇒ q, or more rarely Cpq) is as follows:
The truth table associated with the material conditional if p then q (symbolized as p → q) is as follows:
p ⇒ q and p → q are equivalent to ¬p ∨ q.
Logical equality
Logical equality (also known as biconditional or exclusive nor) is an operation on two logical values, typically the values of two propositions, that produces a value of true if both operands are false or both operands are true.
The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:
So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth values.
Exclusive disjunction
Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if one but not both of its operands is true.
The truth table for p XOR q (also written as Jpq, or p ⊕ q) is as follows:
For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).
Logical NAND
The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands is false.
The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:
It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or composed from other operations. Many such compositions are possible, depending on the operations that are taken as basic or "primitive" and the operations that are taken as composite or "derivative".
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction: ¬(p ∧ q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:
Logical NOR
The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is true. ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a Sole sufficient operator.
The truth table for p NOR q (also written as p ↓ q, or Xpq) is as follows:
The negation of a disjunction ¬(p ∨ q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:
Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q) as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted for each other in all contexts that pertain solely to their logical values.
This equivalence is one of De Morgan's laws.
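The equivalence is trivial to confirm exhaustively; a two-assertion Python check over all assignments:

from itertools import product

for p, q in product((True, False), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # first De Morgan law
    assert (not (p or q)) == ((not p) and (not q))   # second De Morgan law
print("both equivalences hold for every assignment")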
| Mathematics | Mathematical logic | null |
21137621 | https://en.wikipedia.org/wiki/Mamenchisauridae | Mamenchisauridae | Mamenchisauridae is a family of sauropod dinosaurs belonging to Eusauropoda known from the Jurassic and Early Cretaceous of Asia and Africa. Some members of the group reached gigantic sizes, amongst the largest of all sauropods.
Classification
The family Mamenchisauridae was first erected by Chinese paleontologists Yang Zhongjian ("C.C. Young") and Zhao Xijin in 1972, in a paper describing Mamenchisaurus hochuanensis.
The most complete cladogram of Mamenchisauridae is presented by Moore et al., 2020, which includes several named species. Notably, some iterations of their analysis recover Euhelopus and kin, usually considered somphospondylians, as relatives of mamenchisaurids, mirroring earlier conceptions about the family.
Topology A: Implied-weights analysis, González Riga dataset
Topology B: Time-calibrated Bayesian analysis, González Riga dataset
In addition to several taxa above, a paper by Ren et al. (2022) also includes Rhoetosaurus, Spinophorosaurus, and Yuanmousaurus within the family.
A 2023 study also includes Hudiesaurus, Xinjiangtitan, Rhomaleopakhus, and the juvenile Bellusaurus and Daanosaurus within the family.
Paleobiology
Long-bone histology enables researchers to estimate the age that a specific individual reached. A study by Griebeler et al. (2013) examined long bone histological data and concluded that the unnamed mamenchisaurid SGP 2006/9 weighed , reached sexual maturity at 20 years and died at age 31.
Paleoecology
Fossils of Mamenchisaurus and Omeisaurus have been found in the Shaximiao Formation, dating to the Oxfordian-Tithonian interval, around 159-150 Ma (million years ago). Chuanjiesaurus fossils date between 166.1 and 163.5 Ma, while those of Eomamenchisaurus were found in the Zhanghe Formation, believed to be around 175.6-161.2 million years old. Fossils of Tonganosaurus date to even earlier, from the (Pliensbachian) Early Jurassic. The Tendaguru Formation taxon Wamweracaudia from Tanzania extends the geographic distribution of Mamenchisauridae into Africa, while fossil remains from the Itat Formation in Russia suggest they also reached Siberia. Additionally, an indeterminate cervical vertebra from the Phu Kradung Formation of Thailand, combined with new radiometric dates for the Suining Formation (which has yielded fossils of Mamenchisaurus anyuensis), demonstrates the survival of Mamenchisauridae into the Cretaceous.
| Biology and health sciences | Sauropods | Animals |
299939 | https://en.wikipedia.org/wiki/Cyperaceae | Cyperaceae | The Cyperaceae () are a family of graminoid (grass-like), monocotyledonous flowering plants known as sedges. The family is large; botanists have described some 5,500 known species in about 90 genera, the largest being the "true sedges" (genus Carex), with over 2,000 species.
Distribution
Cyperaceae species are widely distributed, with the centers of diversity for the group occurring in tropical Asia and tropical South America. While sedges grow in almost all environments, many thrive in wetlands, or in poor soils. Ecological communities dominated by sedges are known as sedgelands or as sedge meadows.
Classification
Some species superficially resemble the closely related rushes and the more distantly related grasses. Features distinguishing members of the sedge family from grasses or rushes are stems with triangular cross-sections (with occasional exceptions, a notable example being the tule which has a round cross-section) and leaves that are spirally arranged in three ranks. In comparison, grasses have alternate leaves, forming two ranks. This leads to the mnemonic "sedges have edges," which helps tell them apart from the generally round rushes or hollow, noded grasses.
Some well-known sedges include the water chestnut (Eleocharis dulcis) and the papyrus sedge (Cyperus papyrus), from which the writing material papyrus was made. This family also includes cotton-grass (Eriophorum), spike-rush (Eleocharis), sawgrass (Cladium), nutsedge or nutgrass (also called chufa, Cyperus esculentus/Cyperus rotundus, a cultivated crop and common weed), white star sedge (Rhynchospora colorata), and umbrella sedge (Cyperus alternifolius), also known as umbrella papyrus.
Features
Members of this family are characterised by the formation of dauciform (carrot-like) roots; an alteration in root morphology that researchers regard as analogous to cluster roots in Proteaceae, which help uptake of nutrients such as phosphorus from poor soil. Like other members of the order Poales, sedges are mostly wind-pollinated, but there are exceptions. Cyperus niveus and Cyperus sphaerocephalus, both with accordingly more conspicuous flowers, are insect-pollinated.
Evolution
Researchers have identified sedges occurring at least as early as the Eocene epoch.
Genera
93 genera are accepted by Kew's Plants of the World Online.
| Biology and health sciences | Poales | null |
299971 | https://en.wikipedia.org/wiki/Sedative | Sedative | A sedative or tranquilliser is a substance that induces sedation by reducing irritability or excitement. Sedatives are CNS depressants that slow brain activity. Various kinds of sedatives can be distinguished, but the majority of them affect the neurotransmitter gamma-aminobutyric acid (GABA). Although each sedative acts in its own way, most produce relaxing effects by increasing GABA activity.
This group is related to hypnotics. The term sedative describes drugs that serve to calm or relieve anxiety, whereas the term hypnotic describes drugs whose main purpose is to initiate, sustain, or lengthen sleep. Because these two functions frequently overlap, and because drugs in this class generally produce dose-dependent effects (ranging from anxiolysis to loss of consciousness) they are often referred to collectively as sedative-hypnotic drugs.
Sedatives can be used to produce an overly calming effect (alcohol being the most common sedating drug). In the event of an overdose or if combined with another sedative, many of these drugs can cause sleep and even death.
Terminology
There is some overlap between the terms "sedative" and "hypnotic".
Advances in pharmacology have permitted more specific targeting of receptors, and greater selectivity of agents, which necessitates greater precision when describing these agents and their effects:
Anxiolytic refers specifically to the effect upon anxiety. (However, some benzodiazepines can be all three: sedatives, hypnotics, and anxiolytics).
Tranquilizer can refer to anxiolytics or antipsychotics.
Soporific and sleeping pill are near-synonyms for hypnotics.
The term "chemical cosh"
The term "chemical cosh" (cosh being a term for a blunt weapon such as a club) is sometimes used colloquially for a strong sedative, particularly for:
Widespread dispensation of antipsychotic drugs in residential care to make people with dementia easier to manage.
Use of drugs like methylphenidate and amphetamine to calm children with attention deficit hyperactivity disorder, though paradoxically these drugs are known to be stimulants.
| Biology and health sciences | General concepts_2 | Health |
300026 | https://en.wikipedia.org/wiki/Stream%20bed | Stream bed | A streambed or stream bed is the bottom of a stream or river (bathymetry) and is confined within a channel or the banks of the waterway. Usually, the bed does not contain terrestrial (land) vegetation and instead supports different types of aquatic vegetation (aquatic plant), depending on the type of streambed material and water velocity. A streambed is what remains once a stream ceases to exist. Beds are usually well preserved even if they are buried, because the banks and canyons cut by the stream are typically hard, although soft sand and debris often fill the bed. Dry, buried streambeds can act as underground water pockets. During times of rain, sandy streambeds can soak up and retain water, even during dry seasons, keeping the water table close enough to the surface to be reachable by local people.
The nature of any streambed is always a function of the flow dynamics and the local geologic materials. The climate of an area will determine the amount of precipitation a stream receives and therefore the amount of water flowing over the streambed. A streambed is usually a mix of particle sizes which depends on the water velocity and the materials introduced from upstream and from the watershed. Particle sizes can range from very fine silts and clays to large cobbles and boulders (grain size). In general, sands move most easily, and particles become more difficult to move as they increase in size. Silts and clays, although smaller than sands, can sometimes stick together, making them harder to move along the streambed. In streams with a gravel bed, the larger grain sizes are usually on the bed surface with finer grain sizes below. This is called armoring of the streambed.
The streambed is very complex in terms of erosion and deposition. As the water flows downstream, different sized particles get sorted to different parts of a streambed as water velocity changes and sediment is transported, eroded and deposited on the streambed. Deposition usually occurs on the inside of curves, where water velocity slows, and erosion occurs on the outside of stream curves, where velocity is higher. This continued erosion and deposition of sediment tends to create meanders of the stream. In streams with a low to moderate grade, deeper, slower water pools (stream pools) and faster shallow water riffles often form as the stream meanders downhill. Pools can also form as water rushes over or around obstructions in the waterway.
Under certain conditions a river can branch from one streambed to multiple streambeds. For example, an anabranch may form when a section of stream or river goes around a small island and then rejoins the main channel. The buildup of sediment on a streambed may cause a channel to be abandoned in favor of a new one (avulsion (river)). A braided river may form as small threads come and go within a main channel.
Climate change
The intensity and frequency of both drought and rain events are expected to increase with climate change. Floods, or flood stage, occur when a stream overflows its banks. In undisturbed natural areas, flood water can spread out within a floodplain, and vegetation, whether grassland or forest, slows and absorbs peak flows. In such areas, streambeds should remain more stable and exhibit minimal scour. They should retain rich organic matter and therefore continue to support a rich biota (river ecosystem). The majority of sediment washed out in higher flows is "near-threshold" sediment that has been deposited during normal flow and only needs a slightly higher flow to become mobile again. As a result, the streambed is left mostly unchanged in size and shape over time.
In urban and suburban areas with little natural vegetation, high levels of impervious surface, and no floodplain, unnaturally high levels of surface runoff can occur. This causes an increase in flooding and watershed erosion, which can lead to thinner soils upslope. Streambeds can exhibit a greater amount of scour, often down to bedrock, and banks may be undercut, causing bank erosion. This increased bank erosion widens the stream and can lead to an increased sediment load downstream.
| Physical sciences | Fluvial landforms | Earth science |
300044 | https://en.wikipedia.org/wiki/Brucite | Brucite | Brucite is the mineral form of magnesium hydroxide, with the chemical formula Mg(OH)₂. It is a common alteration product of periclase in marble; a low-temperature hydrothermal vein mineral in metamorphosed limestones and chlorite schists; and formed during serpentinization of dunites. Brucite is often found in association with serpentine, calcite, aragonite, dolomite, magnesite, hydromagnesite, artinite, talc and chrysotile.
It adopts a layered CdI₂-like structure with hydrogen bonds between the layers.
Discovery
Brucite was first described in 1824 by François Sulpice Beudant and named for its discoverer, the American mineralogist Archibald Bruce (1777–1818). A fibrous variety of brucite is called nemalite. It occurs in fibers or laths, usually elongated along the [1010] crystalline direction, but sometimes along [1120].
Occurrence
A notable location in the US is Wood's Chrome Mine, Cedar Hill Quarry, Lancaster County, Pennsylvania. Yellow, white, and blue brucite with a botryoidal habit was discovered in the Qila Saifullah District of Balochistan Province, Pakistan. In a later discovery, brucite was also found in the Bela Ophiolite of Wadh, Khuzdar District, Balochistan Province, Pakistan. Brucite has also been found in South Africa, Italy, Russia, Canada, and other localities, but the most notable discoveries are the American, Russian, and Pakistani examples.
Industrial applications
Synthetic brucite is mainly consumed as a precursor to magnesia (MgO), a useful refractory and thermal insulator. It finds some use as a flame retardant because it thermally decomposes to release water, in a similar way to aluminium hydroxide (Al(OH)₃) and mixtures of huntite (Mg₃Ca(CO₃)₄) and hydromagnesite (Mg₅(CO₃)₄(OH)₂·4H₂O). It also constitutes a significant source of magnesium for industry. Although generally deemed safe, brucite can be contaminated with naturally occurring asbestos fibers.
Magnesium attack of cement and concrete
When cement or concrete is exposed to Mg²⁺, the neoformation of brucite, an expansive material, may induce mechanical stress in the hardened cement paste, or may clog the porous network, delaying the alteration/transformation of the C-S-H phase (the "glue" phase in the hardened cement paste) into the M-S-H phase (a non-cohesive mineral phase). The exact magnitude of brucite's impact on cement paste is still debated. Prolonged contact between sea water or brines and concrete may induce durability issues for regularly immersed concrete components or structures.
The use of dolomite as aggregate in concrete can also cause magnesium attack and should be avoided.
Gallery
| Physical sciences | Minerals | Earth science |
300064 | https://en.wikipedia.org/wiki/Minke%20whale | Minke whale | The minke whale, or lesser rorqual, is a species complex of baleen whale. The two species of minke whale are the common (or northern) minke whale and the Antarctic (or southern) minke whale. The minke whale was first described by the Danish naturalist Otto Fabricius in 1780, who assumed it must be an already known species and assigned his specimen to Balaena rostrata, a name given to the northern bottlenose whale by Otto Friedrich Müller in 1776. In 1804, Bernard Germain de Lacépède described a juvenile specimen of Balaenoptera acuto-rostrata. The name is a partial translation of Norwegian minkehval, possibly after a Norwegian whaler named Meincke, who mistook a northern minke whale for a blue whale.
Taxonomy
Most modern classifications split the minke whale into two species:
Common minke whale or northern minke whale (Balaenoptera acutorostrata), and
Antarctic minke whale or southern minke whale (Balaenoptera bonaerensis).
Taxonomists further categorize the common minke whale into two or three subspecies: the North Atlantic minke whale, the North Pacific minke whale, and the dwarf minke whale. All minke whales are part of the rorquals, a family that includes the humpback whale, the fin whale, the Bryde's whale, the sei whale and the blue whale.
The junior synonyms for B. acutorostrata are B. davidsoni (Scammon 1872), B. minimia (Rapp, 1837), and B. rostrata (Fabricius, 1780). There is one synonym for B. bonaerensis – B. huttoni (Gray 1874).
In his 1998 classification, Rice recognized two subspecies of the common minke whale – B. a. scammoni (Scammon's minke whale) and a further taxonomically unnamed subspecies found in the Southern Hemisphere, the dwarf minke whale (first described by Best as "Type 3" in 1985).
On at least one occasion, an Antarctic minke whale has been confirmed migrating to the Arctic. In addition, at least two wild hybrids between a common minke whale and an Antarctic minke whale have been confirmed.
Description
The minke whales are the second smallest baleen whale; only the pygmy right whale is smaller. Upon reaching sexual maturity (7–8 years of age), males measure an average of and and females measure an average of and in length and body mass, respectively; estimated maximum sizes for females suggest that they can reach lengths exceeding and weigh more than in body mass.
The minke whale has a black/gray/purple color. Common minke whales (Northern Hemisphere variety) are distinguished from other whales by a white band on each flipper. The body is usually black or dark-gray above and white underneath. Minke whales have between 240 and 360 baleen plates on each side of their mouths. Most of the length of the back, including dorsal fin and blowholes, appears at once when the whale surfaces to breathe.
Minke whales typically live between 30–50 years, but in some cases, they may live for up to 60 years. They have a gestation and calving period of approximately 10–11 months and 2 years, respectively.
Minke whales have a digestive system composed of four compartments with a high density of anaerobic bacteria throughout. The presence of the bacteria suggests minke whales rely on microbial digestion to extract nutrients provided by their food.
As with most Mysticetes, the auditory system for the minke whale is not well understood. However, magnetic resonance imaging points to evidence that the minke whale has fat deposits in their jaws intended for sound reception, much like Odontocetes.
The brains of minke whales have around 12.8 billion neocortical neurons and 98.2 billion neocortical glia. Additionally, despite its relatively large size, the minke whale is very fast, capable of swimming at speeds of , and their surfacing can be sporadic and hard to follow.
Behavior
The whale breathes three to five times at short intervals before "deep-diving" for 2 to 20 minutes. Deep dives are preceded by a pronounced arching of the back. The maximum swimming speed of minkes has been estimated at .
Migration
Both species undertake seasonal migration routes to the poles during spring and towards the tropics during fall and winter. The difference between the timing of the seasons may prevent the two closely related species from mixing. A long-term photo identification study on the British Columbian and Washington coasts showed that some individuals travel as far as 424 km north in the spring, and 398 km south to warmer waters in the autumn. Many specifics about migration in this species still remain unclear.
Reproduction
The gestation period for minke whales is ten months, and calves measure at birth. The newborns nurse for five to ten months. Breeding peaks during the summer months. Calving is thought to occur every two years.
The timing of conception and birth varies between regions.
In the North Atlantic, conception takes place from December to May, with a peak in February, and birth takes place from October to March, with a peak in December. In the North Pacific off Japan there appear to be two phases of conception: the majority occurs from February to March, but conception also occurs from August to September, with births occurring from December to January and June to July. In the Yellow Sea stock these two phases have not been noted, with conception occurring from July to September and births peaking from May to June.
In the Southern Hemisphere conception takes place from June to December with a peak in August and September. Peak birth time occurs from July to August.
Predation
Killer whale predation on minke whales has been well documented. A 1975 study found that, of 49 killer whale stomachs examined, 84% contained minke whale. Minke whale carcasses investigated after attacks show that killer whales have an affinity for minke tongues and lower jaws. The anti-predatory mechanism of the minke whale is strictly a flight response; when this fails, no physical retaliation is observed. Chases most commonly lead into open ocean, although there have been records of minke whales inadvertently swimming into confined, shallow waters. There have been two recorded instances of minke whales attempting to end high-speed chases by hiding under a ship's hull; however, both attempts were unsuccessful.
Diet
North Atlantic
Minke whales in the north Atlantic are observed to take a variety of food items. Before 1993, minke whales in the north Barents Sea fed predominantly on capelin until stocks collapsed and the whales switched to krill as their primary prey type. The minke whale population in the Norwegian Sea primarily feeds on adult herring while krill, capelin, and sand eels are also recorded prey types. In Scotland, sand eels are the most commonly observed prey species, followed by herring and sprat. Seasonal variations are observed off Finnmark, with krill the most popular prey type in the summer and cod in the autumn. Stable isotope analysis from 2003 shows minke whales in the north Atlantic also feed on prey from lower trophic levels.
North Pacific
Two stocks of minke whale are observed in the North Pacific: the "J stock" (Sea of Japan, Yellow Sea, East China Sea) and the "O stock" (Sea of Okhotsk, western Pacific). Seasonal variations in diet exist. J-stock whales' primary prey type is Japanese anchovy during May and June, Pacific saury in July and August, and krill in September. O-stock whales primarily feed on krill in July and August. Most minke whales observed in 2002 (90.4%) fed solely on one prey species.
Antarctic
Antarctic minke whales are diurnal feeders. This minke whale population mainly feeds on Antarctic krill in offshore areas and ice krill in coastal areas on the continental shelf such as the Ross Sea and Prydz Bay. The population has been recorded to forage on ten known species: five fish (Antarctic silverfish, Antarctic jonasfish, Antarctic lanternfish, Chionodraco, and Notothenia), four euphausiids (Antarctic krill, ice krill, Euphausia frigida, Thysanoessa macrura), and one amphipod (Themisto gaudichaudii).
Population and conservation status
As of 2018, the IUCN Red List labels the common minke whale as Least Concern and the Antarctic minke whale as Near Threatened.
COSEWIC puts both species in the Not At Risk category. NatureServe lists them as G5 which means the species is secure on global range.
Population estimates are generated by the Scientific Committee of the International Whaling Commission. The 2004 estimate yielded 515,000 individuals for the Antarctic minke stock.
Whaling
Whaling was mentioned in Norwegian written sources as early as the year 800, and hunting minke whales with harpoons was common in the 11th century. In the 19th century, they were considered too small to chase, and received their name from a young Norwegian whale-spotter in the crew of Svend Foyn, who harpooned one after mistaking it for a blue whale and was derided for it.
By the end of the 1930s, they were the target of coastal whaling by Brazil, Canada, China, Greenland, Japan, Korea, Norway, and South Africa. Minke whales were not then regularly hunted by the large-scale whaling operations in the Southern Ocean because of their relatively small size. However, by the early 1970s, following the overhunting of larger whales such as the sei, fin, and blue whales, minkes became a more attractive target of whalers. By 1979, the minke was the only whale caught by Southern Ocean fleets. Hunting continued apace until the general moratorium on whaling began in 1986.
Following the moratorium, most hunting of minke whales ceased. Japan continued catching whales under the special research permit clause in the IWC convention, though in significantly smaller numbers. The stated purpose of the research is to establish data to support a case for the resumption of sustainable commercial whaling. Environmental organizations and several governments contend that research whaling is simply a cover for commercial whaling. The 2006 catch by Japanese whalers included 505 Antarctic minke whales. Between November 2017 and March 2018, Japan reported catches of a total of 333 minke whales, of which 122 were pregnant females.
Although Norway initially followed the moratorium, it had placed an objection to it with the IWC and resumed a commercial hunt of the common minke whale in 1993. The quota for 2006 was set at 1,052 animals, but only 546 were taken. The quota for 2011 was set at 1,286. In August 2003, Iceland announced it would start research catches to estimate whether the stocks around the island could sustain hunting. Three years later, in 2006, Iceland resumed commercial whaling.
A 2007 analysis of DNA fingerprinting of whale meat estimated South Korean fishermen caught 827 minke between 1999 and 2003, approximately twice the officially reported number. This raised concerns that some whales were being caught deliberately.
In July 2019, Japan resumed commercial whaling activities. The permitted catch for the initial season (July 1 – December 31, 2019) is 227 whales, of which 52 can be minke.
Whale watching
Due to their relative abundance, minke whales are often the focus of whale-watching cruises setting sail from, for instance, the Isle of Mull in Scotland, County Cork in Ireland, and Húsavík in Iceland, and tours taken on the east coast of Canada. They are also one of the most commonly sighted whales seen on whale-watches from New England and eastern Canada. In contrast to humpback whales, minkes do not raise their flukes out of the water when diving and are less likely to breach (jump clear of the sea surface).
In the northern Great Barrier Reef (Australia), a swim-with-whales tourism industry has developed based on the June and July migration of dwarf minke whales. A limited number of reef tourism operators (based in Port Douglas and Cairns) have been granted permits by the Great Barrier Reef Marine Park Authority to conduct these swims, given strict adherence to a code of practice, and that operators report details of all sightings as part of a monitoring program.
Scientists from James Cook University and the Museum of Tropical Queensland have worked closely with participating operators and the Authority, researching tourism impacts and implementing management protocols to ensure these interactions are ecologically sustainable.
Minke whales are also occasionally sighted in Pacific waters, in and around the Haro Strait of British Columbia and Washington state.
| Biology and health sciences | Baleen whales | Animals |
300082 | https://en.wikipedia.org/wiki/Anechoic%20chamber | Anechoic chamber | An anechoic chamber (an-echoic meaning "non-reflective" or "without echoes") is a room designed to stop reflections or echoes of either sound or electromagnetic waves. They are also often isolated from energy entering from their surroundings. This combination means that a person or detector exclusively hears direct sounds (no reflected sounds), in effect simulating being outside in a free field.
Anechoic chambers, a term coined by American acoustics expert Leo Beranek, were initially exclusively used to refer to acoustic anechoic chambers. Recently, the term has been extended to other radio frequency (RF) and sonar anechoic chambers, which eliminate reflection and external noise caused by electromagnetic waves.
Anechoic chambers range from small compartments the size of household microwave ovens to ones as large as aircraft hangars. The size of the chamber depends on the size of the objects and frequency ranges being tested.
Acoustic anechoic chambers
The requirement for what was subsequently called an anechoic chamber originated to allow testing of loudspeakers that generated such intense sound levels that they could not be tested outdoors in inhabited areas.
Anechoic chambers are commonly used in acoustics to conduct experiments in nominally "free field" conditions, free field meaning that there are no reflected signals. All sound energy will be traveling away from the source with almost none reflected back. Common anechoic chamber experiments include measuring the transfer function of a loudspeaker or the directivity of noise radiation from industrial machinery. In general, the interior of an anechoic chamber can be very quiet, with typical noise levels in the 10–20 dBA range. In 2005, the best anechoic chamber measured at −9.4 dBA. In 2015, an anechoic chamber on the campus of Microsoft broke the world record with a measurement of −20.6 dBA. The human ear can typically detect sounds above 0 dBA, so a human in such a chamber would perceive the surroundings as devoid of sound. Anecdotally, some people may not like such silence and can become disoriented.
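To put the figures above in perspective, here is a minimal sketch of the standard decibel-to-pressure conversion (our own illustration: the helper name is an assumption, and treating the quoted dBA values as plain dB SPL ignores A-weighting, which is applied per-frequency):
```python
# Convert a sound pressure level in dB (re 20 micropascals) to RMS sound
# pressure, using the standard SPL definition p = p0 * 10**(L/20).
P0 = 20e-6  # reference pressure in pascals (nominal threshold of hearing)

def spl_to_pressure(level_db: float) -> float:
    """Return RMS sound pressure in pascals for a level in dB SPL."""
    return P0 * 10 ** (level_db / 20)

for level in (0.0, -9.4, -20.6):
    print(f"{level:+6.1f} dB -> {spl_to_pressure(level):.3e} Pa")
# -20.6 dB comes out to roughly a tenth of the 0 dB reference pressure,
# i.e. about an order of magnitude below the nominal threshold of hearing.
```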
The mechanism by which anechoic chambers minimize the reflection of sound waves impinging onto their walls is as follows: In the included figure, an incident sound wave I is about to impinge onto a wall of an anechoic chamber. This wall is composed of a series of wedges W with height H. After the impingement, the incident wave I is reflected as a series of waves R which in turn "bounce up-and-down" in the gap of air A (bounded by dotted lines) between the wedges W. Such bouncing may produce (at least temporarily) a standing wave pattern in A. During this process, the acoustic energy of the waves R gets dissipated via the air's molecular viscosity, in particular near the corner C. In addition, with the use of foam materials to fabricate the wedges, another dissipation mechanism happens during the wave/wall interactions. As a result, the component of the reflected waves R along the direction of I that escapes the gaps A (and goes back to the source of sound), denoted R', is notably reduced. Even though this explanation is two-dimensional, it is representative and applicable to the actual three-dimensional wedge structures used in anechoic chambers.
Semi-anechoic and hemi-anechoic chambers
Full anechoic chambers aim to absorb energy in all directions. To do this, all surfaces, including the floor, need to be covered in correctly shaped wedges. A mesh grille is usually installed above the floor to provide a surface to walk on and place equipment. This mesh floor is typically placed at the same floor level as the rest of the building, meaning the chamber itself extends below floor level. This mesh floor is damped and floating on absorbent buffers to isolate it from outside vibration or electromagnetic signals.
In contrast, semi-anechoic or hemi-anechoic chambers have a solid floor that acts as a work surface for supporting heavy items, such as cars, washing machines, or industrial machinery, which could not be supported by the mesh grille in a full anechoic chamber. Recording studios are often semi-anechoic.
The distinction between "semi-anechoic" and "hemi-anechoic" is unclear. In some uses they are synonyms, or only one term is used. Other uses distinguish one as having an ideally reflective floor (creating free-field conditions with a single reflective surface) and the other as simply having a flat untreated floor. Still other uses distinguish them by size and performance, with one being likely an existing room retrofitted with acoustic treatment, and the other a purpose-built room which is likely larger and has better anechoic performance.
Radio-frequency anechoic chambers
The internal appearance of the radio frequency (RF) anechoic chamber is sometimes similar to that of an acoustic anechoic chamber; however, the interior surfaces of the RF anechoic chamber are covered with radiation absorbent material (RAM) instead of acoustically absorbent material. Uses for RF anechoic chambers include testing antennas and radars, and they are typically used to house the antennas for performing measurements of antenna radiation patterns and electromagnetic interference.
Performance expectations (gain, efficiency, pattern characteristics, etc.) constitute primary challenges in designing stand-alone or embedded antennas. Designs are becoming ever more complex, with a single device incorporating multiple technologies such as cellular, WiFi, Bluetooth, LTE, MIMO, RFID and GPS.
Radiation-absorbent material
RAM is designed and shaped to absorb incident RF radiation (also known as non-ionising radiation) as effectively as possible, from as many incident directions as possible. The more effective the RAM, the lower the resulting level of reflected RF radiation. Many measurements in electromagnetic compatibility (EMC) and antenna radiation patterns require that spurious signals arising from the test setup, including reflections, are negligible to avoid the risk of causing measurement errors and ambiguities.
Effectiveness over frequency
Waves of higher frequencies have shorter wavelengths and are higher in energy, while waves of lower frequencies have longer wavelengths and are lower in energy, according to the relationship λ = v/f, where λ (lambda) represents wavelength, v is the phase velocity of the wave, and f is the frequency. To shield for a specific wavelength, the cone must be of appropriate size to absorb that wavelength. The performance quality of an RF anechoic chamber is determined by its lowest test frequency of operation, at which measured reflections from the internal surfaces will be the most significant compared to higher frequencies. Pyramidal RAM is at its most absorptive when the incident wave is at normal incidence to the internal chamber surface and the pyramid height is approximately equal to λ/4, where λ is the free-space wavelength. Accordingly, increasing the pyramid height of the RAM for the same (square) base size improves the effectiveness of the chamber at low frequencies but results in increased cost and a reduced unobstructed working volume that is available inside a chamber of defined size.
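As a rough sizing sketch of the quarter-wavelength rule just described (our own illustration, not a design procedure from any standard; the function names and sample frequencies are assumptions):
```python
# Estimate the RAM pyramid height needed for a given lowest test frequency,
# using the quarter-wavelength rule of thumb described above. In free space
# the phase velocity is the speed of light.
C = 299_792_458.0  # speed of light in m/s (free-space phase velocity)

def wavelength(freq_hz: float) -> float:
    """Free-space wavelength: lambda = v / f."""
    return C / freq_hz

def pyramid_height(freq_hz: float) -> float:
    """Approximate absorber height: lambda / 4 at the lowest test frequency."""
    return wavelength(freq_hz) / 4

for f in (100e6, 1e9, 10e9):  # 100 MHz, 1 GHz, 10 GHz
    print(f"{f/1e9:6.2f} GHz: lambda = {wavelength(f):8.4f} m, "
          f"height ~ {pyramid_height(f):8.4f} m")
# At 100 MHz the pyramids need to be about 0.75 m tall, which is why
# chambers rated for low frequencies are large and expensive.
```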
Installation into a screened room
An RF anechoic chamber is usually built into a screened room, designed using the Faraday cage principle. This is because most of the RF tests that require an anechoic chamber to minimize reflections from the inner surfaces also require the properties of a screened room to attenuate unwanted signals penetrating inwards and causing interference to the equipment under test and prevent leakage from tests penetrating outside.
Chamber size and commissioning
At lower radiated frequencies, far-field measurement can require a large and expensive chamber. Sometimes, for example for radar cross-section measurements, it is possible to scale down the object under test and reduce the chamber size, provided that the wavelength of the test frequency is scaled down in direct proportion by testing at a higher frequency.
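A minimal sketch of this frequency-scaling argument (illustrative numbers and names of our own choosing): if the object under test is scaled down by a factor s, the test wavelength must shrink by the same factor, so the test frequency rises in inverse proportion.
```python
# Scaled-model testing: shrinking the object by a factor s requires shrinking
# the wavelength by the same factor, i.e. raising the test frequency by 1/s,
# so that the object's size stays fixed when measured in wavelengths.
def scaled_test_frequency(full_scale_freq_hz: float, scale: float) -> float:
    """Frequency needed to test a model built at the given scale (0 < scale <= 1)."""
    return full_scale_freq_hz / scale

full_freq = 3e9        # hypothetical radar frequency for the full-size target
model_scale = 1 / 10   # a one-tenth scale model fits in a smaller chamber
print(f"Test the 1:10 model at "
      f"{scaled_test_frequency(full_freq, model_scale) / 1e9:.0f} GHz")
# -> 30 GHz: the model is one tenth the size, so the wavelength must be one
#    tenth as long for the geometry to match in wavelengths.
```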
RF anechoic chambers are normally designed to meet the electrical requirements of one or more accredited standards. For example, the aircraft industry may test equipment for aircraft according to company specifications or military specifications such as MIL-STD 461E. Once built, acceptance tests are performed during commissioning to verify that the standard(s) are in fact met. Provided they are, a certificate will be issued to that effect. The chamber will need to be periodically retested.
Operational use
Test and supporting equipment configurations to be used within anechoic chambers must expose as few metallic (conductive) surfaces as possible, as these risk causing unwanted reflections. Often this is achieved by using non-conductive plastic or wooden structures for supporting the equipment under test. Where metallic surfaces are unavoidable, they may be covered with pieces of RAM after setting up to minimize such reflection as far as possible.
A careful assessment may be required as to whether the test equipment (as opposed to the equipment under test) should be placed inside or outside the chamber. Typically most of it is located in a separate screened room attached to the main test chamber, in order to shield it from both external interference and from the radiation within the chamber. Mains power and test signal cabling into the test chamber require high quality filtering.
Fiber optic cables are sometimes used for the signal cabling, as they are immune to ordinary RFI and also cause little reflection inside the chamber.
Health and safety risks associated with RF anechoic chamber
The following health and safety risks are associated with RF anechoic chambers:
RF radiation hazard
Fire hazard
Trapped personnel
Personnel are not normally permitted inside the chamber during a measurement as this not only can cause unwanted reflections from the human body but may also be a radiation hazard to the personnel concerned if tests are being performed at high RF powers. Such risks are from RF or non-ionizing radiation and not from the higher energy ionizing radiation.
As RAM is highly absorptive of RF radiation, incident radiation will generate heat within the RAM. If this cannot be dissipated adequately there is a risk that hot spots may develop and the RAM temperature may rise to the point of combustion. This can be a risk if a transmitting antenna inadvertently gets too close to the RAM. Even for quite modest transmitting power levels, high gain antennas can concentrate the power sufficiently to cause high power flux near their apertures. Although recently manufactured RAM is normally treated with a fire retardant to reduce such risks, they are difficult to eliminate.
| Physical sciences | Research methods | Basics and measurement |
300221 | https://en.wikipedia.org/wiki/Testicular%20cancer | Testicular cancer | Testicular cancer is cancer that develops in the testicles, a part of the male reproductive system. Symptoms may include a lump in the testicle or swelling or pain in the scrotum. Treatment may result in infertility.
Risk factors include an undescended testis, family history of the disease, and previous history of testicular cancer. More than 95% are germ cell tumors which are divided into seminomas and non-seminomas. Other types include sex-cord stromal tumors and lymphomas. Diagnosis is typically based on a physical exam, ultrasound, and blood tests. Surgical removal of the testicle with examination under a microscope is then done to determine the type.
Testicular cancer is highly treatable and usually curable. Treatment options may include surgery, radiation therapy, chemotherapy, or stem cell transplantation. Even in cases in which cancer has spread widely, chemotherapy offers a cure rate greater than 80%.
Globally testicular cancer affected about 686,000 people in 2015. That year it resulted in 9,400 deaths, up from 7,000 deaths in 1990. Rates are lower in the developing than the developed world. Onset most commonly occurs in males 20 to 34 years old, rarely before 15 years old. The five-year survival rate in the United States is about 95%. Outcomes are better when the disease remains localized.
Signs and symptoms
One of the first signs of testicular cancer is often a lump or swelling in the testes. The U.S. Preventive Services Task Force (USPSTF) recommends against routine screening for testicular cancer in asymptomatic adolescents and adults, including routine testicular self-exams. However, the American Cancer Society suggests that some men should examine their testicles monthly, especially if they have a family history of cancer, and the American Urological Association recommends monthly testicular self-examinations for all young men.
Symptoms may also include one or more of the following:
a lump in one testis which may or may not be painful
sharp pain or a dull ache in the lower abdomen or scrotum
a feeling often described as "heaviness" in the scrotum
firmness of the testicle
breast enlargement (gynecomastia) from hormonal effects of β-hCG
low back pain (lumbago) due to the cancer spreading to the lymph nodes along the back
It is not very common for testicular cancer to spread to other organs, apart from the lungs. If it has, however, the following symptoms may be present:
shortness of breath (dyspnea), cough or coughing up blood (hemoptysis) from metastatic spread to the lungs
a lump in the neck due to metastases to the lymph nodes
Testicular cancer, cryptorchidism, hypospadias, and poor semen quality make up the syndrome known as testicular dysgenesis syndrome.
Causes
A major risk factor for the development of testis cancer is cryptorchidism (undescended testicles). It is generally believed that the presence of a tumor contributes to cryptorchidism; when cryptorchidism occurs in conjunction with a tumor then the tumor tends to be large. Other risk factors include inguinal hernias, Klinefelter syndrome, and mumps orchitis. Physical activity is associated with decreased risk and sedentary lifestyle is associated with increased risk. Early onset of male characteristics is associated with increased risk. These may reflect endogenous or environmental hormones.
Higher rates of testicular cancer in Western nations have been linked to the use of cannabis.
Mechanisms
Most testicular germ cell tumors have too many chromosomes, most often being triploid to tetraploid. An isochromosome 12p (the short arm of chromosome 12 on both sides of the same centromere) is present in about 80% of testicular cancers, and the remaining cancers usually have extra material from this chromosome arm acquired through other mechanisms of genomic amplification.
Diagnosis
The main way testicular cancer is diagnosed is via a lump or mass inside a testis. More generally, if a young adult or adolescent has a single enlarged testicle, which may or may not be painful, this should give doctors reason to suspect testicular cancer.
Other conditions may also have symptoms similar to testicular cancer:
Epididymitis or epididymo-orchitis
Hematocele
Varicocele
Orchitis
Prostate infections or inflammations (prostatitis), bladder infections or inflammations (cystitis), or kidney (renal) infections (nephritis) or inflammations which have spread to and caused swelling in the vessels of the testicles or scrotum
Testicular torsion or a hernia
Infection, inflammation, retro-peritonitis, or other conditions of the lymph nodes or vessels near the scrotum, testicles, pubis, anorectal area, and groin
Benign tumors or lesions of the testicles
Metastasis to the testicles from another, primary tumor site(s)
The nature of any palpated lump in the scrotum is often evaluated by scrotal ultrasound, which can determine exact location, size, and some characteristics of the lump, such as cystic vs solid, uniform vs heterogeneous, sharply circumscribed, or poorly defined. The extent of the disease is evaluated by CT scans, which are used to locate metastases.
The differential diagnosis of testicular cancer requires examining the histology of tissue obtained from an inguinal orchiectomy—that is, surgical excision of the entire testis along with attached structures (epididymis and spermatic cord). A biopsy should not be performed, as it raises the risk of spreading cancer cells into the scrotum.
Inguinal orchiectomy is the preferred method because it lowers the risk of cancer cells escaping. This is because the lymphatic system of the scrotum, through which white blood cells (and, potentially, cancer cells) flow in and out, links to the lower extremities, while that of the testicle links to the back of the abdominal cavity (the retroperitoneum). A trans-scrotal biopsy or orchiectomy will potentially leave cancer cells in the scrotum and create two routes for cancer cells to spread, while in an inguinal orchiectomy, only the retroperitoneal route exists.
Blood tests are also used to identify and measure tumor markers (usually proteins present in the bloodstream) that are specific to testicular cancer. Alpha-fetoprotein, human chorionic gonadotropin (the "pregnancy hormone"), and LDH-1 are the typical tumor markers used to spot testicular germ cell tumors.
A pregnancy test may be used to detect high levels of chorionic gonadotropin; however, the first sign of testicular cancer is usually a painless lump. Note that only about 25% of seminomas have elevated chorionic gonadotropin, so a pregnancy test is not very sensitive for detecting testicular cancer.
Stressful experiences
The stressful event of testicular cancer affects not only the patient who is diagnosed but also the caregiver. The psychological stress model consists of stressful experiences that a patient with testicular cancer may go through after diagnosis, which caregivers may want to look out for. The stressful experiences consist of four main categories:
Late side-effects
Fear of tumor relapse
Fertility problems
Social and workplace issues
These side effects may require physical and emotional care, which in turn can place an emotional burden on the caregiver.
Screening
The American Academy of Family Physicians recommends against screening males without symptoms for testicular cancer.
Staging
After removal, the testicle is fixed with Bouin's solution because it better conserves some morphological details such as nuclear conformation. Then the testicular tumor is staged by a pathologist according to the TNM Classification of Malignant Tumors as published in the AJCC Cancer Staging Manual. Testicular cancer is categorized as being in one of three stages (which have subclassifications). The size of the tumor in the testis is irrelevant to staging. In broad terms, testicular cancer is staged as follows:
Stage I: the cancer remains localized to the testis.
Stage II: the cancer involves the testis and metastasis to retroperitoneal and/or paraaortic lymph nodes (lymph nodes below the diaphragm).
Stage III: the cancer involves the testis and metastasis beyond the retroperitoneal and paraaortic lymph nodes. Stage 3 is further subdivided into non-bulky stage 3 and bulky stage 3.
Further information on the detailed staging system is available on the website of the American Cancer Society.
Classification
Although testicular cancer can be derived from any cell type found in the testicles, more than 95% of testicular cancers are germ cell tumors (GCTs). Most of the remaining 5% are sex cord–gonadal stromal tumours derived from Leydig cells or Sertoli cells. Correct diagnosis is necessary to ensure the most effective and appropriate treatment. To some extent, this can be done via blood tests for tumor markers, but definitive diagnosis requires examination of the histology of a specimen by a pathologist. Testicular tumors are best classified by radical inguinal orchiectomy, which both allows histologic evaluation of the whole testicle and provides local tumor control.
Most pathologists use the World Health Organization classification system for testicular tumors:
Germ cells derived from germ cell neoplasia in situ
Noninvasive germ cell neoplasia
Germ cell neoplasia in situ
Specific forms of intratubular germ cell neoplasia
Gonadoblastoma
Germinoma family of tumors
Seminoma
Nonseminomatous germ cell tumors
Embryonal carcinoma
Yolk sac tumor, postpubertal type
Choriocarcinoma
Placental site trophoblastic tumour
Epithelioid trophoblastic tumour
Teratoma, postpubertal type
Teratoma with somatic-type malignancy
Mixed germ cell tumors of the testis
Mixed germ cell tumors
Polyembryoma
Diffuse embryoma
Germ cell tumors of unknown type
Regressed germ cell tumors
Germ cell tumors unrelated to germ cell neoplasia in situ
Spermatocytic tumor
Teratoma, prepubertal type
Dermoid cyst
Epidermoid cyst
Yolk sac tumor, prepubertal type
Testicular neuroendocrine tumor, prepubertal type
Mixed teratoma and yolk sac tumor, prepubertal type
Sex cord-stromal tumors of the testis
Leydig cell tumor
Sertoli cell tumor
Large cell calcifying Sertoli cell tumor
Granulosa cell tumor
Adult granulosa cell tumor
Juvenile granulosa cell tumor
The fibroma-thecoma family of tumors
Thecoma
Fibroma
Mixed and other sex cord-stromal tumors
Mixed sex cord-stromal tumor
Signet ring stromal tumor
Myoid gonadal stromal tumor
Sex cord-stromal tumor not otherwise specified
Secondary tumors of the testis
Treatment
The three basic types of treatment are surgery, radiation therapy, and chemotherapy.
Surgery is performed by urologists; radiation therapy is administered by radiation oncologists; and chemotherapy is the work of medical oncologists. In most patients with testicular cancer, the disease is cured readily with minimal long-term morbidity. While treatment success depends on the stage, the average survival rate after five years is around 95%, and stage 1 cancer cases, if monitored properly, have essentially a 100% survival rate.
Testicle removal
The initial treatment for testicular cancer is surgery to remove the affected testicle (orchiectomy). While it may be possible, in some cases, to remove testicular cancer tumors from a testis while leaving the testis functional, this is almost never done, as the affected testicle usually contains pre-cancerous cells spread throughout the entire testicle. Thus removing the tumor alone without additional treatment greatly increases the risk that another cancer will form in that testicle.
Since only one testis is typically required to maintain fertility, hormone production, and other male functions, the affected testis is almost always removed completely in a procedure called inguinal orchiectomy. (The testicle is almost never removed through the scrotum; an incision is made beneath the belt line in the inguinal area.) In the UK, the procedure is known as a radical orchidectomy.
Retroperitoneal lymph node dissection
In the case of non-seminomas that appear to be stage I, surgery may be done on the retroperitoneal/paraaortic lymph nodes (in a separate operation) to accurately determine whether the cancer is in stage I or stage II and to reduce the risk posed by malignant testicular cancer cells that may have metastasized to lymph nodes in the lower abdomen. This surgery is called retroperitoneal lymph node dissection (RPLND). However, this approach, while standard in many places, especially the United States, is out of favor due to costs and the high level of expertise required to perform successful surgery. Sperm banking is frequently carried out prior to the procedure (as with chemotherapy), as there is a risk that RPLND may damage the nerves involved in ejaculation, causing ejaculation to occur internally into the bladder rather than externally.
Many patients are instead choosing surveillance, where no further surgery is performed unless tests indicate that the cancer has returned. This approach maintains a high cure rate because of the growing accuracy of surveillance techniques.
Adjuvant treatment
Since testicular cancers can spread, patients are usually offered adjuvant treatment—in the form of chemotherapy or radiotherapy—to kill any cancerous cells that may exist outside of the affected testicle. The type of adjuvant therapy depends largely on the histology of the tumor (i.e., the size and shape of its cells under the microscope) and the stage of progression at the time of surgery (i.e., how far cells have 'escaped' from the testicle, invaded the surrounding tissue, or spread to the rest of the body). If the cancer is not particularly advanced, patients may be offered careful surveillance by periodic CT scans and blood tests, in place of adjuvant treatment.
Before 1970, survival rates from testicular cancer were low. Since the introduction of adjuvant chemotherapy, chiefly platinum-based drugs like cisplatin and carboplatin, the outlook has improved substantially. Although 7000 to 8000 new cases of testicular cancer occur in the United States yearly, only 400 men are expected to die of the disease.
In the UK, a similar trend has emerged: since improvements in treatment, survival rates have risen rapidly to cure rates of over 95%.
Radiation therapy
Radiation may be used to treat stage II seminoma cancers, or as adjuvant (preventative) therapy in the case of stage I seminomas, to minimize the likelihood that tiny, non-detectable tumors exist and will spread (in the inguinal and para-aortic lymph nodes). Radiation is ineffective against non-seminoma and is therefore never used as its primary therapy.
Chemotherapy
Non-seminoma
Chemotherapy is the standard treatment for non-seminoma when the cancer has spread to other parts of the body (that is, stage 2B or 3). The standard chemotherapy protocol is three, or sometimes four, rounds of bleomycin-etoposide-cisplatin (BEP). BEP as a first-line treatment was first reported by Professor Michael Peckham in 1983. The landmark trial, published in 1987, which established BEP as the optimum treatment, was conducted by Dr. Lawrence Einhorn at Indiana University. An alternative, equally effective treatment involves the use of four cycles of etoposide-cisplatin (EP).
Lymph node surgery may also be performed after chemotherapy to remove masses left behind (stage 2B or more advanced), particularly in the cases of large non-seminomas.
Seminoma
As an adjuvant treatment, use of chemotherapy as an alternative to radiation therapy in the treatment of seminoma is increasing, because radiation therapy appears to have more significant long-term side effects (for example, internal scarring, increased risks of secondary malignancies, etc.). Two doses, or occasionally a single dose of carboplatin, typically delivered three weeks apart, is proving to be a successful adjuvant treatment, with recurrence rates in the same ranges as those of radiotherapy. The concept of carboplatin as a single-dose therapy was developed by Tim Oliver, Professor of Medical Oncology at Barts and The London School of Medicine and Dentistry. However, very long-term data on the efficacy of adjuvant carboplatin in this setting do not exist.
Since seminoma can recur decades after the primary tumor is removed, patients receiving adjuvant chemotherapy should remain vigilant and not assume they are cured 5 years after treatment.
Prognosis
Treatment of testicular cancer is one of the success stories of modern medicine, with sustained response to treatment in more than 90% of cases, regardless of stage. In 2011 overall cure rates of more than 95% were reported, and 80% for metastatic disease—the best response by any solid tumor, with improved survival being attributed primarily to effective chemotherapy. By 2013 more than 96 per cent of the 2,300 men diagnosed each year in the U.K. were deemed cured, a rise by almost a third since the 1970s, the improvement attributed substantially to the chemotherapy drug cisplatin. In the United States, when the disease is treated while it is still localized, more than 99% of people survive 5 years.
Surveillance
For many patients with stage I cancer, adjuvant (preventative) therapy following surgery may not be appropriate, and patients will undergo surveillance instead. The form this surveillance takes, e.g. the type and frequency of investigations and the length of time it should continue, will depend on the type of cancer (non-seminoma or seminoma), but the aim is to avoid unnecessary treatments in the many patients who are cured by their surgery, and to ensure that any relapses with metastases (secondary cancers) are detected early and cured. This approach ensures that chemotherapy and/or radiotherapy is only given to the patients who need it. The number of patients ultimately cured is the same using surveillance as post-operative "adjuvant" treatments, but the patients have to be prepared to follow a prolonged series of visits and tests.
For both non-seminomas and seminomas, surveillance tests generally include physical examination, blood tests for tumor markers, chest x-rays, and CT scanning. However, the requirements of a surveillance program differ according to the type of disease since, for seminoma patients, relapses can occur later, and blood tests are not as good at indicating relapse.
CT scans are performed on the abdomen (and sometimes the pelvis) and also the chest in some hospitals. Chest x-rays are increasingly preferred for the lungs as they give sufficient detail combined with a lower false-positive rate and significantly smaller radiation dose than CT.
The frequency of CT scans during surveillance should ensure that relapses are detected at an early stage while minimizing the radiation exposure.
For patients treated for stage I non-seminoma, a randomized trial (Medical Research Council TE08) showed that, when combined with the standard surveillance tests described above, 2 CT scans at 3 and 12 months were as good as 5 over 2 years in detecting relapse at an early stage.
For patients treated for stage I seminoma who choose surveillance rather than undergoing adjuvant therapy, there have been no randomized trials to determine the optimum frequency of scans and visits, and the schedules vary very widely across the world, and within individual countries. In the UK there is an ongoing clinical trial called TRISST. This is assessing how often scans should take place and whether magnetic resonance imaging (MRI) can be used instead of CT scans. MRI is being investigated because it does not expose the patient to radiation and so, if it is shown to be as good at detecting relapses, it may be preferable to CT.
For more advanced stages of testicular cancer, and for those cases in which radiation therapy or chemotherapy was administered, the extent of monitoring (tests) after treatment will vary on the basis of the circumstances, but normally should be done for five years in uncomplicated cases and for longer in those with higher risks of relapse.
Fertility
A man with one remaining testis can remain fertile. However, sperm banking may be appropriate for men who still plan to have children, since fertility may be adversely affected by chemotherapy and/or radiotherapy. A man who loses both testicles will be infertile after the procedure, though he may elect to bank viable, cancer-free sperm prior to it.
Psychological factors of testicular cancer
Although testicular cancer has a low mortality rate and a good prognosis, psychological factors still affect patients struggling with a diagnosis. The absence of a testicle can influence perceptions of masculinity, sexual identity, and body image. Castration or partial removal is associated with fantasies, beliefs, myths, and cultural norms surrounding the testes, which can lead to severe psychological trauma and consequences for the individual. Consequently, worries regarding sexual and reproductive capabilities may induce feelings of despair, inadequacy, and emotional turmoil. Factors associated with poorer psychological outcomes include early adulthood, partnership status, work status, sexual dysfunction, diminished masculinity, and adaptive mechanisms.
Masculinity and sexual identity
Biological accounts of masculinity hold that the body affirms gender, so changes or damage to the reproductive system can affect how men feel about being men. Since testicles have long been seen as symbols of strength, bravery, and masculinity, surgery to remove them can change how men with testicular cancer view themselves and what it means to be a man.
Social stigma related to masculinity and sexual identity
Males aged 18–24 encounter distinct gender-specific social factors that are linked to poorer mental health outcomes. These social factors include limited access to and engagement with health services, stigma related to masculinity, and cultural expectations. Single or unemployed men are at higher risk of poorer psychological outcomes correlated with impaired sexual function and diminished masculinity. Another factor related to negative effects on masculinity is not having children, owing to an inability to meet traditional expectations of being a protector or provider. Men who felt that losing a testicle made them less masculine also experienced negative psychological effects.
New research shows that testicular cancer survivors who have low testosterone levels feel less masculine than those with normal testosterone levels. These concerns are important for teenage boys going through puberty or recently experiencing physical changes, which can shape their developing understanding of their sexual identity. For example, gynecomastia, which is when males develop enlarged breasts during puberty, is a common and normal part of growing up. However, only up to 11% of patients diagnosed with testicular cancer have gynecomastia when they first seek medical attention, and about 4% of males checked for gynecomastia turn out to have testicular cancer. After testicular cancer, some men feel less masculine, but how much cancer affects masculinity varies from person to person.
Body image
New studies show that 16% of survivors have serious concerns about how they look after the removal of a testicle. These survivors worry about feeling awkward and anxious because of their missing testicle, and they feel different from other people. Even though 52% of survivors felt that their bodies had changed a lot because of cancer and treatment, 88% of the spouses did not think their partners were any less attractive.
How survivors feel about their bodies is a major factor in deciding whether to get a testicular prosthesis. Many worry about losing their masculinity and how they see themselves, and simply want to look and feel normal again. A retrospective review of testicular prosthesis use, especially among teenagers, found noticeable improvements in how recipients felt about their bodies and themselves overall one year after getting an implant. They also felt more comfortable during sexual activities.
Anxiety and depression
After having the testicles removed through orchiectomy, testicular cancer survivors may experience long-lasting feelings of sadness or embarrassment. Research has shown that these emotions are more prevalent among younger and unmarried men than among older and partnered individuals. The most common psychological problem faced by men diagnosed with testicular cancer is anxiety. Earlier research offered no direct comparison between people diagnosed with testicular cancer and the general population, but studies now show that anxiety is more frequent among testicular cancer survivors than among people of similar gender and age in the general population, affecting about 1 in 5 survivors. Depression does not seem to burden testicular cancer patients as much as anxiety.
Fear of recurrence
Around one out of every three testicular cancer survivors experiences significant fear of the cancer coming back, and this fear is considered the most troubling issue for them. Unmarried men reported less fear of cancer recurrence than men who were in a relationship. Survivors who fear a recurrence of their cancer tend to experience more:
Intrusive thoughts
Depression
Stress
Poor physical well-being
Being diagnosed with testicular cancer often destroys many men's feelings of being invincible and brings up unexpected questions about life and purpose. They feel a sense of being in between or on the threshold of a new identity. This involves feeling disconnected from those who haven't been through a similar intense experience, questioning the purpose of their existence, and becoming more aware of life's fragility and the certainty of death. New research suggests that certain testicular cancer survivors think their cancer was triggered by their stress sensitivity. This may be why some survivors have a fear of recurrence more than 10 years after treatment, even though the actual risk of recurrence is around 1%.
Biological and psychological factors of sexual dysfunction
Sexual dysfunction can present as a symptom in people who have been diagnosed with testicular cancer. It can stem from biological factors, psychological factors, or a blend of both. Difficulties in physiological aspects such as achieving erection and ejaculation are correlated with the severity of the disease and the methods of treatment employed, such as surgery, radiotherapy, or chemotherapy. Conversely, psychological aspects such as libido and satisfaction remain unaffected by the type of treatment received. Nonetheless, treatment approaches for testicular cancer can induce physiological alterations while simultaneously eliciting emotional responses. Therefore, diminished sexual function (such as decreased libido or inhibition) may result from treatment-related physical factors like fatigue, overall discomfort, hair loss, and significant weight fluctuations, as well as emotional factors including concerns about sexual performance, fear of losing control, and uncertainty about what lies ahead.
Post-traumatic growth from testicular cancer
Not every survivor of testicular cancer experiences negative outcomes such as depression, and some may even gain positive outcomes from the experience. This means that, when looking at outcomes across all testicular cancer survivors, the positives and negatives could balance each other out. Many cancer survivors, both young and older adults, have reported benefits and personal growth in the months and even years following their diagnosis. Furthermore, researchers have discovered that while the journey of testicular cancer initially brings physical and emotional challenges, it also leads many survivors to develop a newfound gratitude for life. Besides improving mental outlook, going through testicular cancer might also motivate men to adopt healthier behaviors such as:
Increased physical activity
Reduced or stopped smoking
These positive changes in lifestyle could contribute to better psychological well-being, which can offset any initial difficulties they face.
Epidemiology
Globally, testicular cancer resulted in 8,300 deaths in 2013, up from 7,000 deaths in 1990. Testicular cancer has the highest prevalence in the U.S. and Europe and is uncommon in Asia and Africa. Worldwide incidence has doubled since the 1960s, with the highest prevalence rates in Scandinavia, Germany, and New Zealand.
Although testicular cancer is most common among men aged 15–40 years, it has three peaks: infancy through the age of four as teratomas and yolk sac tumors, ages 25–40 years as post-pubertal seminomas and non-seminomas, and from age 60 as spermatocytic tumors.
Germ cell tumors of the testis are the most common cancer in young men between the ages of 15 and 35 years.
United States
In the United States, about 8,900 cases are diagnosed a year. The risk of testicular cancer in white men is approximately 4–5 times the risk in black men, and more than three times that of Asian American men. The risk of testicular cancer in Latinos and American Indians is between that of white and Asian men. The cause of these differences is unknown.
United Kingdom
In the UK, approximately 2,000 people are diagnosed a year. Over a lifetime, the risk is roughly 1 in 200 (0.5%). It is the 16th most common cancer in men. It accounts for less than 1% of cancer deaths in men (around 60 men died in 2012).
Other animals
Testicular tumors occur also in other animals. In horses, these include interstitial cell tumors and teratomas. Typically, the former are found in older stallions (affected stallions may become extremely vicious, suggesting excessive production of androgen), and the latter are found in young horses and are large.
| Biology and health sciences | Cancer | Health |
300295 | https://en.wikipedia.org/wiki/Dichloromethane | Dichloromethane | Dichloromethane (DCM, methylene chloride, or methylene bichloride) is an organochlorine compound with the formula CH2Cl2. This colorless, volatile liquid with a chloroform-like, sweet odor is widely used as a solvent. Although it is not miscible with water, it is slightly polar and miscible with many organic solvents.
Occurrence
Natural sources of dichloromethane include oceanic sources, macroalgae, wetlands, and volcanoes. However, the majority of dichloromethane in the environment is the result of industrial emissions.
Production
DCM is produced by treating either chloromethane or methane with chlorine gas at 400–500 °C. At these temperatures, both methane and chloromethane undergo a series of reactions producing progressively more chlorinated products. In this way, an estimated 400,000 tons were produced in the US, Europe, and Japan in 1993.
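Starting from methane, each substitution step replaces one hydrogen with chlorine and liberates hydrogen chloride:
CH4 + Cl2 → CH3Cl + HCl
CH3Cl + Cl2 → CH2Cl2 + HCl
CH2Cl2 + Cl2 → CHCl3 + HCl
CHCl3 + Cl2 → CCl4 + HCl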
The output of these processes is a mixture of chloromethane, dichloromethane, chloroform, and carbon tetrachloride as well as hydrogen chloride as a byproduct. These compounds are separated by distillation.
DCM was first prepared in 1839 by the French chemist Henri Victor Regnault (1810–1878), who isolated it from a mixture of chloromethane and chlorine that had been exposed to sunlight.
Uses
DCM's volatility and ability to dissolve a wide range of organic compounds make it a useful solvent for many chemical processes, most visibly in paint removers. In the food industry, it is used to decaffeinate coffee and tea as well as to prepare extracts of hops and other flavourings. Its volatility has led to its use as an aerosol spray propellant and as a blowing agent for polyurethane foams.
Specialized uses
DCM's low boiling point allows it to function in a heat engine that can extract mechanical energy from small temperature differences. An example of a DCM heat engine is the drinking bird, a toy that works at room temperature. DCM is also used as the fluid in jukebox displays and in holiday bubble lights, which have a colored bubbling tube above a lamp; the lamp serves as the source of heat, and a small amount of rock salt provides thermal mass and a nucleation site for the phase-changing solvent.
DCM chemically welds certain plastics. For example, it is used to seal the casing of electric meters. Often sold as a main component of plastic welding adhesives, it is also used extensively by model building hobbyists for joining plastic components together. It is commonly referred to as "Di-clo".
It is used in the garment printing industry for removal of heat-sealed garment transfers.
DCM is used in the material testing field of civil engineering; specifically, it is used as a solvent during the testing of bituminous materials to separate the binder from the aggregate of an asphalt or macadam so that the materials can be tested.
Dichloromethane extract of Asparagopsis taxiformis, a seaweed fodder for cattle, has been found to reduce their methane emissions by 79%.
It has been used as the principal component of various paint and lacquer strippers, although its use is now restricted in the EU and many such products now use benzyl alcohol as a safer alternative.
Chemical reactions
Dichloromethane is widely used as a solvent in part because it is relatively inert. It does, however, participate in reactions with certain strong nucleophiles. Tert-butyllithium deprotonates DCM:
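CH2Cl2 + t-BuLi → t-BuH + LiCHCl2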
Methyllithium reacts with methylene chloride to give chlorocarbene:
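CH2Cl2 + CH3Li → CH4 + LiCl + :CHCl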
Although DCM is a common solvent in organic chemistry laboratories and is commonly assumed to be inert, it does react with some amines and triazoles. Tertiary amines can react with DCM to form quaternary chloromethyl-ammonium chloride salts via the Menshutkin reaction. Secondary amines can react with DCM to yield an equilibrium of iminium chlorides and chloromethyl chlorides, which can react with a second equivalent of the secondary amine to form aminals. At increased temperatures, pyridines, including DMAP, react with DCM to form methylene bispyridinium dichlorides. Hydroxybenzotriazole and related reagents used in peptide coupling react with DCM in the presence of triethylamine, forming acetals.
Toxicity
Although DCM is one of the least toxic simple chlorohydrocarbons, serious health risks are associated with it. Its high volatility makes it an inhalation hazard. It can also be absorbed through the skin.
Symptoms of acute overexposure to dichloromethane via inhalation include difficulty concentrating, dizziness, fatigue, nausea, headaches, numbness, weakness, and irritation of the upper respiratory tract and eyes. More severe consequences can include suffocation, loss of consciousness, coma, and death.
DCM is also metabolized to carbon monoxide, potentially leading to carbon monoxide poisoning. Acute exposure by inhalation has resulted in optic neuropathy and hepatitis. Prolonged skin contact can result in DCM dissolving some of the fatty tissues in skin, resulting in skin irritation or chemical burns.
It may be carcinogenic, as it has been linked to cancer of the lungs, liver, and pancreas in laboratory animals. Other animal studies showed breast cancer and salivary gland cancer. Research is not yet clear as to what levels may be carcinogenic to humans. DCM crosses the placenta but fetal toxicity in women who are exposed to it during pregnancy has not been proven. In animal experiments, it was fetotoxic at doses that were maternally toxic but no teratogenic effects were seen.
In people with pre-existing heart problems, exposure to DCM can cause abnormal heart rhythms and/or heart attacks, sometimes without any other symptoms of overexposure. People with existing liver, nervous system, or skin problems may worsen after exposure to methylene chloride.
Regulation
In many countries, products containing DCM must carry labels warning of its health risks. Concerns about its health effects have led to a search for alternatives in many of its applications.
In the European Union, the Scientific Committee on Occupational Exposure Limit Values (SCOEL) recommends an occupational exposure limit for DCM of 100 ppm (8-hour time-weighted average) and a short-term exposure limit of 200 ppm for a 15-minute period. The European Parliament voted in 2009 to ban the use of DCM in paint-strippers for consumers and many professionals, with the ban taking effect in December 2010.
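For illustration, an 8-hour TWA is the duration-weighted mean of sampled concentrations. The following minimal Python sketch checks invented sample values (not from any cited measurement) against the SCOEL figures quoted above:

# Hypothetical air samples as (concentration in ppm, duration in hours);
# the three samples together cover a full 8-hour shift.
samples = [(80, 3), (120, 2), (60, 3)]

twa = sum(c * t for c, t in samples) / 8.0   # duration-weighted mean
print(f"8-hour TWA: {twa:.1f} ppm")          # (80*3 + 120*2 + 60*3)/8 = 82.5

TWA_LIMIT_PPM, STEL_PPM = 100, 200           # SCOEL values cited above
print("within TWA limit:", twa <= TWA_LIMIT_PPM)
# Simplification: treats each sample's concentration as its 15-minute peak.
print("within STEL:", max(c for c, _ in samples) <= STEL_PPM)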
In February 2013, the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health warned that at least 14 bathtub refinishers had died since 2000 from DCM exposure. These workers had been working alone, in poorly ventilated bathrooms, with inadequate or no respiratory protection, and no training about the hazards of DCM. OSHA has since issued a DCM standard.
On March 15, 2019, the US Environmental Protection Agency (EPA) issued a final rule to prohibit the manufacture (including importing and exporting), processing, and distribution of DCM in all paint removers for consumer use, effective in 180 days. However, it does not affect other products containing DCM, including many consumer products not intended for paint removal. On April 20, 2023, the EPA proposed a widespread ban on the production of DCM with some exceptions for military and industrial uses. On April 30, 2024, the EPA finalized a ban on most commercial uses of DCM, which mainly banned its application for stripping paint and degreasing surfaces but allowed for some remaining commercial applications, such as chemical production.
Environmental effects
Dichloromethane is not classified as an ozone-depleting substance by the Montreal Protocol. The US Clean Air Act does not regulate dichloromethane as an ozone depleter. Dichloromethane has been classified as a very short-lived substance (VSLS). Despite their short atmospheric lifetimes of less than 0.5 year, VSLSs can contribute to stratospheric ozone depletion, particularly if emitted in regions where rapid transport to the stratosphere occurs. Atmospheric abundances of dichloromethane have been increasing in recent years.
| Physical sciences | Halocarbons | Chemistry |
300321 | https://en.wikipedia.org/wiki/Chloromethane | Chloromethane | Chloromethane, also called methyl chloride, Refrigerant-40, R-40 or HCC 40, is an organic compound with the chemical formula CH3Cl. One of the haloalkanes, it is a colorless, sweet-smelling, flammable gas. Methyl chloride is a crucial reagent in industrial chemistry, although it is rarely present in consumer products, and was formerly utilized as a refrigerant. Most chloromethane is biogenic.
Occurrence
Chloromethane, whether anthropogenic or natural in origin, is an abundant organohalogen in the atmosphere. Natural sources produce an estimated 4,100,000,000 kg per year.
Marine
Laboratory cultures of marine phytoplankton (Phaeodactylum tricornutum, Phaeocystis sp., Thalassiosira weissflogii, Chaetoceros calcitrans, Isochrysis sp., Porphyridium sp., Synechococcus sp., Tetraselmis sp., Prorocentrum sp., and Emiliania huxleyi) produce CH3Cl, but in relatively insignificant amounts. An extensive study of 30 species of polar macroalgae revealed the release of significant amounts of CH3Cl in only Gigartina skottsbergii and Gymnogongrus antarcticus.
Biogenesis
The salt marsh plant Batis maritima contains the enzyme methyl chloride transferase that catalyzes the synthesis of CH3Cl from S-adenosyl-L-methionine and chloride. This protein has been purified and expressed in E. coli, and seems to be present in other organisms such as white rot fungi (Phellinus pomaceus), red algae (Endocladia muricata), and the ice plant (Mesembryanthemum crystallinum), each of which is a known CH3Cl producer.
Sugarcane and the emission of methyl chloride
In the sugarcane industry, organic waste is usually burned in the power cogeneration process. When contaminated by chloride, this waste releases methyl chloride into the atmosphere as it burns.
Interstellar detections
Chloromethane has been detected in the low-mass Class 0 protostellar binary IRAS 16293–2422, using the Atacama Large Millimeter Array (ALMA). It was also detected in the comet 67P/Churyumov–Gerasimenko (67P/C-G) using the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument on the Rosetta spacecraft. The detections reveal that chloromethane can form in star-forming regions before planets or life have formed.
Production
Chloromethane (originally called "chlorohydrate of methylene") was among the earliest organochlorine compounds to be discovered when it was synthesized by French chemists Jean-Baptiste Dumas and Eugène-Melchior Péligot in 1835 by boiling a mixture of methanol, sulfuric acid, and sodium chloride. This method is the forerunner for that used today, which uses hydrogen chloride instead of sulfuric acid and sodium chloride.
Chloromethane is produced commercially by treating methanol with hydrochloric acid or hydrogen chloride, according to the chemical equation:
CH3OH + HCl → CH3Cl + H2O
A smaller amount of chloromethane is produced by treating a mixture of methane with chlorine at elevated temperatures. This method, however, also produces more highly chlorinated compounds such as dichloromethane, chloroform, and carbon tetrachloride. For this reason, methane chlorination is usually only practiced when these other products are also desired. This chlorination method also cogenerates hydrogen chloride, which poses a disposal problem.
Dispersion in the environment
Most of the methyl chloride present in the environment ends up being released to the atmosphere. After release into the air, its atmospheric lifetime is about 10 months, with multiple natural sinks such as uptake by the ocean and soil and transport to the stratosphere.
When methyl chloride is instead released to water, it is rapidly lost by volatilization. Its volatilization half-life in a river, a lagoon, and a lake is 2.1 hours, 25 hours, and 18 days, respectively.
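Volatilization of this kind is first-order, so the fraction remaining after time t is 0.5^(t / t½). A minimal Python sketch using the half-lives above:

# Fraction remaining after t_hours given a volatilization half-life.
def remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# Half-lives from the text: river 2.1 h, lagoon 25 h, lake 18 days.
for name, half_life in [("river", 2.1), ("lagoon", 25.0), ("lake", 18 * 24.0)]:
    print(f"{name}: {remaining(24, half_life):.2%} remaining after one day")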
The amount of methyl chloride in the stratosphere is estimated to be 2 tonnes per year, representing 20–25% of the total amount of chlorine that is emitted to the stratosphere annually.
Uses
The largest-scale use of chloromethane is the production of dimethyldichlorosilane and related organosilicon compounds. These compounds arise via the direct process. The relevant reactions are (Me = CH3):
x MeCl + Si → Me3SiCl, Me2SiCl2, MeSiCl3, Me4Si2Cl2, ...
Dimethyldichlorosilane (Me2SiCl2) is of particular value as a precursor to silicones, but trimethylsilyl chloride (Me3SiCl) and methyltrichlorosilane (MeSiCl3) are also valuable.
Smaller quantities are used as a solvent in the manufacture of butyl rubber and in petroleum refining.
Chloromethane is employed as a methylating and chlorinating agent, e.g. the production of methylcellulose. It is also used in a variety of other fields: as an extractant for greases, oils, and resins, as a propellant and blowing agent in polystyrene foam production, as a local anesthetic, as an intermediate in drug manufacturing, as a catalyst carrier in low-temperature polymerization, as a fluid for thermometric and thermostatic equipment, and as a herbicide.
Obsolete applications
Chloromethane was a widely used refrigerant, but its use has been discontinued. It was particularly dangerous among the common refrigerants of the 1930s due to its combination of toxicity, flammability and lack of odor as compared with other toxic refrigerants such as sulfur dioxide and ammonia. Chloromethane was also once used for producing lead-based gasoline additives (tetramethyllead).
Safety
Inhalation of chloromethane gas produces central nervous system effects similar to alcohol intoxication. The TLV is 50 ppm and the MAC is the same. Prolonged exposure may have mutagenic effects.
| Physical sciences | Halocarbons | Chemistry |
300381 | https://en.wikipedia.org/wiki/Smooth%20muscle | Smooth muscle | Smooth muscle is one of the three major types of vertebrate muscle tissue, the others being skeletal and cardiac muscle. It can also be found in invertebrates and is controlled by the autonomic nervous system. It is non-striated, so-called because it has no sarcomeres and therefore no striations (bands or stripes). It can be divided into two subgroups, single-unit and multi-unit smooth muscle. Within single-unit muscle, the whole bundle or sheet of smooth muscle cells contracts as a syncytium.
Smooth muscle is found in the walls of hollow organs, including the stomach, intestines, bladder and uterus. In the walls of blood vessels, and lymph vessels, (excluding blood and lymph capillaries) it is known as vascular smooth muscle. There is smooth muscle in the tracts of the respiratory, urinary, and reproductive systems. In the eyes, the ciliary muscles, iris dilator muscle, and iris sphincter muscle are types of smooth muscles. The iris dilator and sphincter muscles are contained in the iris and contract in order to dilate or constrict the pupils. The ciliary muscles change the shape of the lens to focus on objects in accommodation. In the skin, smooth muscle cells such as those of the arrector pili cause hair to stand erect in response to cold temperature and fear.
Structure
Gross anatomy
Smooth muscle is grouped into two types: single-unit smooth muscle, also known as visceral smooth muscle, and multiunit smooth muscle. Most smooth muscle is of the single-unit type and is found in the walls of most internal organs (viscera); it also lines blood vessels (except large elastic arteries), the urinary tract, and the digestive tract. It is not found in the heart, which has cardiac muscle.
In single-unit smooth muscle, a single cell in a bundle is innervated by an autonomic nerve fiber. An action potential can be propagated through neighbouring muscle cells due to the presence of many gap junctions between the cells. Due to this property, single-unit bundles form a syncytium that contracts in a coordinated fashion, making the whole muscle contract or relax, as the uterine muscles do during childbirth.
Single-unit visceral smooth muscle is myogenic; it can contract regularly without input from a motor neuron (as opposed to multiunit smooth muscle, which is neurogenic - that is, its contraction must be initiated by an autonomic nervous system neuron). A few of the cells in a given single unit may behave as pacemaker cells, generating rhythmic action potentials due to their intrinsic electrical activity. Because of its myogenic nature, single-unit smooth muscle is usually active, even when it is not receiving any neural stimulation. Multiunit smooth muscle is found in the trachea, in the iris of the eye, and lining the large elastic arteries.
However, the terms single- and multi-unit smooth muscle represent an oversimplification, because smooth muscles are for the most part controlled and influenced by a combination of different neural elements. In addition, most of the time there is some cell-to-cell communication along with locally produced activators and inhibitors. This leads to a somewhat coordinated response even in multiunit smooth muscle.
Smooth muscle differs from skeletal muscle and cardiac muscle in terms of structure, function, regulation of contraction, and excitation-contraction coupling. However, smooth muscle tissue tends to demonstrate greater elasticity and function within a larger length-tension curve than striated muscle. This ability to stretch and still maintain contractility is important in organs like the intestines and urinary bladder. Smooth muscle in the gastrointestinal tract is activated by a composite of smooth muscle cells (SMCs), interstitial cells of Cajal (ICCs), and platelet-derived growth factor receptor alpha-positive (PDGFRα+) cells that are electrically coupled and work together as an SIP functional syncytium.
Microanatomy
Smooth muscle cells
A smooth-muscle cell is a spindle-shaped myocyte with a wide middle and tapering ends, and a single nucleus. Like striated muscle, smooth muscle can tense and relax. In the relaxed state, each cell is 30–200 micrometers in length, some thousands of times shorter than a skeletal muscle cell. There are no myofibrils present, but much of the cytoplasm is taken up by the proteins myosin and actin, which together have the capability to contract.
Myosin
Myosin is primarily class II in smooth muscle.
Myosin II contains two heavy chains (MHC) which constitute the head and tail domains. Each of these heavy chains contains the N-terminal head domain, while the C-terminal tails take on a coiled-coil morphology, holding the two heavy chains together (imagine two snakes wrapped around each other, such as in a caduceus). Thus, myosin II has two heads. In smooth muscle, there is a single gene (MYH11) that codes for the heavy chains myosin II, but there are splice variants of this gene that result in four distinct isoforms. Also, smooth muscle may contain MHC that is not involved in contraction, and that can arise from multiple genes.
Myosin II also contains 4 light chains (MLC), resulting in 2 per head, weighing 20 (MLC20) and 17 (MLC17) kDa. These bind the heavy chains in the "neck" region between the head and tail.
The MLC20 is also known as the regulatory light chain and actively participates in muscle contraction. Two MLC20 isoforms are found in smooth muscle, and they are encoded by different genes, but only one isoform participates in contraction.
The MLC17 is also known as the essential light chain. Its exact function is unclear, but it is believed that it contributes to the structural stability of the myosin head along with MLC20. Two variants of MLC17 (MLC17a/b) exist as a result of alternative splicing at the MLC17 gene.
Different combinations of heavy and light chains allow for up to hundreds of different types of myosin structures, but it is unlikely that more than a few such combinations are actually used or permitted within a specific smooth muscle bed. In the uterus, a shift in myosin expression has been hypothesized to avail for changes in the directions of uterine contractions that are seen during the menstrual cycle.
Actin
The thin filaments that are part of the contractile machinery are predominantly composed of alpha-actin and gamma-actin. Smooth muscle alpha-actin is the predominant isoform within smooth muscle. There is also a lot of actin (mainly beta-actin) that does not take part in contraction, but that polymerizes just below the plasma membrane in the presence of a contractile stimulant and may thereby assist in mechanical tension. Alpha-actin is also expressed as distinct genetic isoforms such as smooth muscle, cardiac muscle and skeletal muscle specific isoforms of alpha-actin.
The ratio of actin to myosin is between 2:1 and 10:1 in smooth muscle. Conversely, from a mass ratio standpoint (as opposed to a molar ratio), myosin is the dominant protein in striated skeletal muscle with the actin to myosin ratio falling in the 1:2 to 1:3 range. A typical value for healthy young adults is 1:2.2.
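The contrast between molar and mass ratios follows from the proteins' very different sizes. A minimal Python sketch, assuming round figures of roughly 42 kDa per actin monomer and roughly 480 kDa per myosin II holoenzyme (assumed values, not from this article):

# Assumed approximate molecular masses in kilodaltons.
ACTIN_KDA, MYOSIN_KDA = 42.0, 480.0

actin_per_myosin = 6          # a molar ratio within the 2:1-10:1 range above
mass_ratio = (actin_per_myosin * ACTIN_KDA) / MYOSIN_KDA
# With these assumed masses, even a 6-fold molar excess of actin
# leaves myosin dominant by mass.
print(f"actin:myosin mass ratio ~ {mass_ratio:.2f} : 1")   # ~0.53 : 1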
Other associated proteins
Smooth muscle does not contain the protein troponin; instead calmodulin (which takes on the regulatory role in smooth muscle), caldesmon and calponin are significant proteins expressed within smooth muscle.
Tropomyosin is present in smooth muscle, spanning seven actin monomers and is laid out end to end over the entire length of the thin filaments. In striated muscle, tropomyosin serves to block actin–myosin interactions until calcium is present, but in smooth muscle, its function is unknown.
Calponin may be present in numbers equal to actin and has been proposed to be a load-bearing protein.
Caldesmon has been suggested to be involved in tethering actin, myosin and tropomyosin, and thereby enhance the ability of smooth muscle to maintain tension.
Also, all three of these proteins may have a role in inhibiting the ATPase activity of the myosin complex that otherwise provides energy to fuel muscle contraction.
Dense bodies
The actin filaments are attached to dense bodies, which are analogous to the Z-discs in striated muscle sarcomeres. Dense bodies are rich in alpha-actinin (α-actinin), and also attach intermediate filaments (consisting largely of vimentin and desmin), and thereby appear to serve as anchors from which the thin filaments can exert force. Dense bodies also are associated with beta-actin, which is the type found in the cytoskeleton, suggesting that dense bodies may coordinate tensions from both the contractile machinery and the cytoskeleton. Dense bodies appear darker under an electron microscope, and so they are sometimes described as electron dense.
The intermediate filaments are connected to other intermediate filaments via dense bodies, which eventually are attached to adherens junctions (also called focal adhesions) in the cell membrane of the smooth muscle cell, called the sarcolemma. The adherens junctions consist of a large number of proteins including alpha-actinin (α-actinin), vinculin and cytoskeletal actin. The adherens junctions are scattered around dense bands that encircle the smooth muscle cell in a rib-like pattern. The dense band (or dense plaque) areas alternate with regions of membrane containing numerous caveolae. When complexes of actin and myosin contract, force is transduced to the sarcolemma through intermediate filaments attaching to such dense bands.
Contraction
During contraction, there is a spatial reorganization of the contractile machinery to optimize force development. Part of this reorganization consists of vimentin being phosphorylated at Ser56 by a p21-activated kinase, resulting in some disassembly of vimentin polymers.
Also, the number of myosin filaments is dynamic between the relaxed and contracted state in some tissues as the ratio of actin to myosin changes, and the length and number of myosin filaments change.
Isolated single smooth muscle cells have been observed contracting in a spiral corkscrew fashion, and isolated permeabilized smooth muscle cells adhered to glass (so that the contractile proteins can contract internally) demonstrate zones of contractile protein interactions along the long axis as the cell contracts.
Smooth muscle-containing tissue needs to be stretched often, so elasticity is an important attribute of smooth muscle. Smooth muscle cells may secrete a complex extracellular matrix containing collagen (predominantly types I and III), elastin, glycoproteins, and proteoglycans. Smooth muscle also has specific elastin and collagen receptors to interact with these proteins of the extracellular matrix. These fibers with their extracellular matrices contribute to the viscoelasticity of these tissues. For example, the great arteries are viscoelastic vessels that act like a Windkessel, propagating ventricular contraction and smoothing out the pulsatile flow, and the smooth muscle within the tunica media contributes to this property.
Caveolae
The sarcolemma also contains caveolae, which are microdomains of lipid rafts specialized to cell signaling events and ion channels. These invaginations in the sarcoplasm contain a host of receptors (prostacyclin, endothelin, serotonin, muscarinic receptors, adrenergic receptors), second messenger generators (adenylate cyclase, phospholipase C), G proteins (RhoA, G alpha), kinases (rho kinase-ROCK, protein kinase C, protein Kinase A), ion channels (L type calcium channels, ATP sensitive potassium channels, calcium sensitive potassium channels) in close proximity. The caveolae are often close to sarcoplasmic reticulum or mitochondria, and have been proposed to organize signaling molecules in the membrane.
Excitation-contraction coupling
A smooth muscle is excited by external stimuli, which causes contraction. Each step is further detailed below.
Inducing stimuli and factors
Smooth muscle may contract spontaneously (via ionic channel dynamics) or, as in the gut, special pacemaker cells (interstitial cells of Cajal) may produce rhythmic contractions. Also, contraction, as well as relaxation, can be induced by a number of physiochemical agents (e.g., hormones, drugs, neurotransmitters – particularly from the autonomic nervous system).
Smooth muscle in various regions of the vascular tree, the airway and lungs, kidneys and vagina is different in their expression of ionic channels, hormone receptors, cell-signaling pathways, and other proteins that determine function.
External substances
For instance, blood vessels in skin, gastrointestinal system, kidney and brain respond to norepinephrine and epinephrine (from sympathetic stimulation or the adrenal medulla) by producing vasoconstriction (this response is mediated through alpha-1 adrenergic receptors). However, blood vessels within skeletal muscle and cardiac muscle respond to these catecholamines by producing vasodilation because they possess beta-adrenergic receptors. Thus, differences in the distribution of adrenergic receptors explain why blood vessels in different areas respond differently to the same agents (norepinephrine/epinephrine), along with differences in the amounts of catecholamines released and in the sensitivities of the various receptors to their concentrations.
Generally, arterial smooth muscle responds to carbon dioxide by producing vasodilation and responds to oxygen by producing vasoconstriction. Pulmonary blood vessels within the lung are unique: they vasodilate at high oxygen tension and vasoconstrict when it falls. Bronchiolar smooth muscle, which lines the airways of the lung, dilates in response to high carbon dioxide and constricts when carbon dioxide is low. These responses to carbon dioxide and oxygen by pulmonary blood vessels and bronchiolar airway smooth muscle aid in matching perfusion and ventilation within the lungs. Furthermore, different smooth muscle tissues range from abundant to little sarcoplasmic reticulum, so excitation-contraction coupling varies in its dependence on intracellular versus extracellular calcium.
Recent research indicates that sphingosine-1-phosphate (S1P) signaling is an important regulator of vascular smooth muscle contraction. When transmural pressure increases, sphingosine kinase 1 phosphorylates sphingosine to S1P, which binds to the S1P2 receptor in the plasma membrane of cells. This leads to a transient increase in intracellular calcium, and activates Rac and RhoA signaling pathways. Collectively, these serve to increase MLCK activity and decrease MLCP activity, promoting muscle contraction. This allows arterioles to increase resistance in response to increased blood pressure and thus maintain constant blood flow. The RhoA and Rac portion of the signaling pathway provides a calcium-independent way to regulate resistance artery tone.
Spread of impulse
To maintain organ dimensions against force, cells are fastened to one another by adherens junctions. As a consequence, cells are mechanically coupled to one another such that contraction of one cell invokes some degree of contraction in an adjoining cell. Gap junctions couple adjacent cells chemically and electrically, facilitating the spread of chemicals (e.g., calcium) or action potentials between smooth muscle cells. Single unit smooth muscle displays numerous gap junctions and these tissues often organize into sheets or bundles which contract in bulk.
Contraction
Smooth muscle contraction is caused by the sliding of myosin and actin filaments (a sliding filament mechanism) over each other. The energy for this to happen is provided by the hydrolysis of ATP. Myosin functions as an ATPase, utilizing ATP to produce a molecular conformational change of part of the myosin and produce movement. Movement of the filaments over each other happens when the globular heads protruding from myosin filaments attach and interact with actin filaments to form crossbridges. The myosin heads tilt and drag the actin filament along a small distance (10–12 nm). The heads then release the actin filament and change angle to relocate to another site on the actin filament a further distance (10–12 nm) away. They can then re-bind to the actin molecule and drag it along further. This process is called crossbridge cycling and is the same for all muscles (see muscle contraction). Unlike cardiac and skeletal muscle, smooth muscle does not contain the calcium-binding protein troponin. Contraction is initiated by a calcium-regulated phosphorylation of myosin, rather than a calcium-activated troponin system.
Crossbridge cycling causes contraction of myosin and actin complexes, in turn causing increased tension along the entire chains of tensile structures, ultimately resulting in contraction of the entire smooth muscle tissue.
Phasic or tonic
Smooth muscle may contract phasically with rapid contraction and relaxation, or tonically with slow and sustained contraction. The reproductive, digestive, respiratory, and urinary tracts, skin, eye, and vasculature all contain this tonic muscle type. This type of smooth muscle can maintain force for prolonged time with only little energy utilization. There are differences in the myosin heavy and light chains that also correlate with these differences in contractile patterns and kinetics of contraction between tonic and phasic smooth muscle.
Activation of myosin heads
Crossbridge cycling cannot occur until the myosin heads have been activated to allow crossbridges to form. When the light chains are phosphorylated, they become active and will allow contraction to occur. The enzyme that phosphorylates the light chains is called myosin light-chain kinase (MLCK), also called MLC20 kinase. In order to control contraction, MLCK will work only when the muscle is stimulated to contract. Stimulation will increase the intracellular concentration of calcium ions. These bind to a molecule called calmodulin, and form a calcium-calmodulin complex. It is this complex that will bind to MLCK to activate it, allowing the chain of reactions for contraction to occur.
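As a toy illustration of this calcium-dependent switch (a sketch, not a physiological model), MLCK activation can be caricatured with a Hill function; the Hill coefficient of 4 mirrors calmodulin's four calcium-binding sites, and the half-activation concentration of 0.5 uM is an assumed placeholder:

def mlck_active_fraction(ca_uM, half_uM=0.5, hill=4):
    """Toy Hill curve: fraction of MLCK activated at a given
    calcium concentration (uM). Parameters are assumed, not measured."""
    return ca_uM**hill / (half_uM**hill + ca_uM**hill)

for ca in (0.1, 0.5, 1.0, 2.0):
    print(f"[Ca2+] = {ca} uM -> {mlck_active_fraction(ca):.2f} of MLCK active")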
Activation consists of phosphorylation of a serine on position 19 (Ser19) on the MLC20 light chain, which causes a conformational change that increases the angle in the neck domain of the myosin heavy chain, which corresponds to the part of the cross-bridge cycle where the myosin head is unattached to the actin filament and relocates to another site on it. After attachment of the myosin head to the actin filament, this serine phosphorylation also activates the ATPase activity of the myosin head region to provide the energy to fuel the subsequent contraction. Phosphorylation of a threonine on position 18 (Thr18) on MLC20 is also possible and may further increase the ATPase activity of the myosin complex.
Sustained maintenance
Phosphorylation of the MLC20 myosin light chains correlates well with the shortening velocity of smooth muscle. During this period there is a rapid burst of energy utilization as measured by oxygen consumption. Within a few minutes of initiation, the calcium level markedly decreases, MLC20 phosphorylation decreases, and energy utilization decreases; the muscle can then relax. Still, smooth muscle has the ability of sustained maintenance of force in this situation as well. This sustained phase has been attributed to certain myosin crossbridges, termed latch-bridges, that cycle very slowly, notably slowing progression to the cycle stage at which dephosphorylated myosin detaches from the actin, thereby maintaining force at low energy cost. This phenomenon is of great value especially for tonically active smooth muscle.
Isolated preparations of vascular and visceral smooth muscle contract with depolarizing high-potassium balanced saline, generating a certain amount of contractile force. The same preparation stimulated in normal balanced saline with an agonist such as endothelin or serotonin will generate more contractile force. This increase in force is termed calcium sensitization. The myosin light-chain phosphatase is inhibited, increasing the gain or sensitivity of myosin light-chain kinase to calcium. A number of cell signalling pathways are believed to regulate this decrease in myosin light-chain phosphatase: a RhoA-Rho kinase (ROCK) pathway, a protein kinase C-protein kinase C potentiation inhibitor protein 17 (CPI-17) pathway, telokin, and a Zip kinase pathway. Furthermore, Rho kinase and Zip kinase have been implicated in directly phosphorylating the 20 kDa myosin light chains.
Other contractile mechanisms
Other cell signaling pathways and protein kinases (Protein kinase C, Rho kinase, Zip kinase, Focal adhesion kinases) have been implicated as well and actin polymerization dynamics plays a role in force maintenance. While myosin light chain phosphorylation correlates well with shortening velocity, other cell signaling pathways have been implicated in the development of force and maintenance of force. Notably the phosphorylation of specific tyrosine residues on the focal adhesion adapter protein-paxillin by specific tyrosine kinases has been demonstrated to be essential to force development and maintenance. For example, cyclic nucleotides can relax arterial smooth muscle without reductions in crossbridge phosphorylation, a process termed force suppression. This process is mediated by the phosphorylation of the small heat shock protein, hsp20, and may prevent phosphorylated myosin heads from interacting with actin.
Relaxation
The phosphorylation of the light chains by MLCK is countered by a myosin light-chain phosphatase, which dephosphorylates the MLC20 myosin light chains and thereby inhibits contraction. Other signaling pathways have also been implicated in the regulation of actin and myosin dynamics. In general, the relaxation of smooth muscle occurs by cell-signaling pathways that increase myosin phosphatase activity, decrease intracellular calcium levels, hyperpolarize the smooth muscle, and/or regulate actin and myosin dynamics. Relaxation can be mediated by the endothelium-derived relaxing factor nitric oxide, endothelium-derived hyperpolarizing factor (either an endogenous cannabinoid, a cytochrome P450 metabolite, or hydrogen peroxide), or prostacyclin (PGI2). Nitric oxide and PGI2 stimulate soluble guanylate cyclase and membrane-bound adenylate cyclase, respectively. The cyclic nucleotides (cGMP and cAMP) produced by these cyclases activate Protein Kinase G and Protein Kinase A, which phosphorylate a number of proteins. The phosphorylation events lead to a decrease in intracellular calcium (inhibiting L-type calcium channels and IP3 receptor channels, and stimulating the sarcoplasmic reticulum calcium pump ATPase), a decrease in 20 kDa myosin light chain phosphorylation by altering calcium sensitization and increasing myosin light-chain phosphatase activity, stimulation of calcium-sensitive potassium channels which hyperpolarize the cell, and phosphorylation of amino acid residue serine 16 on the small heat shock protein (hsp20) by Protein Kinases A and G. The phosphorylation of hsp20 appears to alter actin and focal adhesion dynamics and actin-myosin interaction, and recent evidence indicates that hsp20 binding to 14-3-3 protein is involved in this process. An alternative hypothesis is that phosphorylated hsp20 may also alter the affinity of phosphorylated myosin for actin and inhibit contractility by interfering with crossbridge formation. The endothelium-derived hyperpolarizing factor stimulates calcium-sensitive potassium channels and/or ATP-sensitive potassium channels and stimulates potassium efflux, which hyperpolarizes the cell and produces relaxation.
Invertebrate smooth muscle
In invertebrate smooth muscle, contraction is initiated with the binding of calcium directly to myosin and then rapidly cycling cross-bridges, generating force. Similar to the mechanism of vertebrate smooth muscle, there is a low-calcium, low-energy-utilization catch phase. This sustained or catch phase has been attributed to a catch protein called twitchin, which has similarities to myosin light-chain kinase and the elastic protein titin. Clams and other bivalve mollusks use this catch phase of smooth muscle to keep their shell closed for prolonged periods with little energy usage.
Specific effects
Although the structure and function is basically the same in smooth muscle cells in different organs, their specific effects or end-functions differ.
The contractile function of vascular smooth muscle regulates the lumenal diameter of the small arteries and arterioles called resistance arteries, thereby contributing significantly to setting the level of blood pressure and blood flow to vascular beds. Smooth muscle contracts slowly and may maintain the contraction (tonically) for prolonged periods in blood vessels, bronchioles, and some sphincters. Activation of arteriolar smooth muscle can decrease the lumenal diameter to one-third of resting, so it drastically alters blood flow and resistance. Activation of aortic smooth muscle does not significantly alter the lumenal diameter but serves to increase the viscoelasticity of the vascular wall.
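The scale of that effect follows from the Hagen–Poiseuille relation, in which resistance varies with the inverse fourth power of vessel radius; a minimal sketch, assuming laminar flow:

# Hagen-Poiseuille: resistance R is proportional to 1 / r^4 for laminar flow.
resting_radius = 1.0                      # arbitrary units
constricted_radius = resting_radius / 3   # diameter reduced to 1/3 of resting
fold_increase = (resting_radius / constricted_radius) ** 4
print(f"resistance increases {fold_increase:.0f}-fold")   # 3**4 = 81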
In the digestive tract, smooth muscle contracts in a peristaltic fashion, rhythmically forcing foodstuffs through the digestive tract as the result of phasic contraction.
A non-contractile function is seen in specialized smooth muscle within the afferent arteriole of the juxtaglomerular apparatus, which secretes renin in response to osmotic and pressure changes, and also it is believed to secrete ATP in tubuloglomerular regulation of glomerular filtration rate. Renin in turn activates the renin–angiotensin system to regulate blood pressure.
Growth and rearrangement
The mechanism in which external factors stimulate growth and rearrangement is not yet fully understood. A number of growth factors and neurohumoral agents influence smooth muscle growth and differentiation. The Notch receptor and cell-signaling pathway have been demonstrated to be essential to vasculogenesis and the formation of arteries and veins. Smooth muscle cell proliferation is implicated in the pathogenesis of atherosclerosis and is inhibited by nitric oxide.
The embryological origin of smooth muscle is usually mesodermal, after the creation of muscle cells in a process known as myogenesis. However, the smooth muscle within the aorta and pulmonary arteries (the great arteries of the heart) is derived from ectomesenchyme of neural crest origin, although coronary artery smooth muscle is of mesodermal origin.
Related diseases
Multisystemic smooth muscle dysfunction syndrome is a genetic condition in which the body of a developing embryo does not create enough smooth muscle for the gastrointestinal system. This condition is fatal.
Anti-smooth muscle antibodies (ASMA) can be a symptom of an auto-immune disorder, such as hepatitis, cirrhosis, or lupus.
Smooth muscle tumors are most commonly benign, and are then called leiomyomas. They can occur in any organ, but usually occur in the uterus, small bowel, and esophagus. Malignant smooth muscle tumors are called leiomyosarcomas, one of the more common types of soft-tissue sarcomas. Vascular smooth muscle tumors are very rare. They can be malignant or benign, and morbidity can be significant with either type. Intravascular leiomyomatosis is a benign neoplasm that extends through the veins; angioleiomyoma is a benign neoplasm of the extremities; vascular leiomyosarcoma is a malignant neoplasm that can be found in the inferior vena cava, pulmonary arteries and veins, and other peripheral vessels.
See Atherosclerosis.
| Biology and health sciences | Muscular system | Biology |
300539 | https://en.wikipedia.org/wiki/Varicella%20zoster%20virus | Varicella zoster virus | Varicella zoster virus (VZV), also known as human herpesvirus 3 (HHV-3, HHV3) or Human alphaherpesvirus 3 (taxonomically), is one of nine known herpes viruses that can infect humans. It causes chickenpox (varicella) commonly affecting children and young adults, and shingles (herpes zoster) in adults but rarely in children. As a late complication of VZV infection, Ramsay Hunt syndrome type 2 may develop in rare cases. VZV infections are species-specific to humans. The virus can survive in external environments for a few hours.
VZV multiplies in the tonsils, and causes a wide variety of symptoms. Similar to the herpes simplex viruses, after primary infection with VZV (chickenpox), the virus lies dormant in neurons, including the cranial nerve ganglia, dorsal root ganglia, and autonomic ganglia. Many years after the person has recovered from initial chickenpox infection, VZV can reactivate to cause shingles.
Epidemiology
Chickenpox
Primary varicella zoster virus infection results in chickenpox (varicella), which may result in complications including encephalitis, pneumonia (either direct viral pneumonia or secondary bacterial pneumonia), or bronchitis (either viral bronchitis or secondary bacterial bronchitis). Even when clinical symptoms of chickenpox have resolved, VZV remains dormant in the nervous system of the infected person (virus latency), in the trigeminal and dorsal root ganglia. VZV enters through the respiratory system and has an incubation period of 10–21 days, with an average of 14 days. Targeting the skin and peripheral nerves, the period of illness lasts about 3 to 4 days. Infected individuals are most contagious 1–2 days before the lesions appear. Signs and symptoms include vesicles that fill with pus, rupture, and scab before healing. Lesions most commonly occur on the face, throat, the lower back, the chest and shoulders.
Shingles
In about a third of cases, VZV reactivates in later life, producing a disease known as shingles or herpes zoster. The individual lifetime risk of developing herpes zoster is thought to be between 20% and 30%, or approximately 1 in 4 people. However, for people aged 85 and over, this risk increases to 1 in 2.
In a study in Sweden by Nilsson et al. (2015) the annual incidence of herpes zoster infection is estimated at a total of 315 cases per 100,000 inhabitants for all ages and 577 cases per 100,000 for people 50 years of age or older.
VZV can also infect the central nervous system, with a 2013 article reporting an incidence rate of 1.02 cases per 100,000 inhabitants in Switzerland, and an annual incidence rate of 1.8 cases per 100,000 inhabitants in Sweden.
Shingles lesions and the associated pain, often described as burning, tend to occur on the skin that is innervated by one or two adjacent sensory nerves, almost always on one side of the body only. The skin lesions usually subside over the course of several weeks, while the pain often persists longer. In 10–15% of cases the pain persists more than three months, a chronic and often disabling condition known as postherpetic neuralgia. Other serious complications of varicella zoster infection include Mollaret's meningitis, zoster multiplex, and inflammation of arteries in the brain leading to stroke, myelitis, herpes ophthalmicus, or zoster sine herpete. In Ramsay Hunt syndrome, VZV affects the geniculate ganglion giving lesions that follow specific branches of the facial nerve. Symptoms may include painful blisters on the tongue and ear along with one-sided facial weakness and hearing loss. After infection during initial stages of pregnancy, the fetus can be severely damaged. Reye's syndrome can happen after initial infection, causing continuous vomiting and signs of brain dysfunction such as extreme drowsiness or combative behavior. In some cases, death or coma can follow. Reye's syndrome mostly affects children and teenagers; using aspirin during infection can increase this risk.
Morphology
VZV is closely related to the herpes simplex viruses (HSV), sharing much genome homology. The known envelope glycoproteins (gB, gC, gE, gH, gI, gK, gL) correspond with those in HSV; however, there is no equivalent of the HSV gD protein. VZV also fails to produce the LAT (latency-associated transcripts) that play an important role in establishing HSV latency. VZV virions are spherical and 180–200 nm in diameter. Their lipid envelope encloses the 100 nm nucleocapsid of 162 hexameric and pentameric capsomeres arranged in an icosahedral form. Its DNA is a single, linear, double-stranded molecule, about 125,000 base pairs long. The capsid is surrounded by loosely associated proteins known collectively as the tegument; many of these proteins play critical roles in initiating the process of virus reproduction in the infected cell. The tegument is in turn covered by a lipid envelope studded with glycoproteins that are displayed on the exterior of the virion, each approximately 8 nm long.
Genomes
The genome was first sequenced in 1986. It is a linear duplex DNA molecule; a laboratory strain has 124,884 base pairs. The genome has two predominant isomers, depending on the orientation of the S segment: P (prototype) and IS (inverted S), which are present with equal frequency for a total frequency of 90–95%. The L segment can also be inverted, resulting in a total of four linear isomers (IL and ILS). This is distinct from HSV's equiprobable distribution, and the discriminatory mechanism is not known. A small percentage of isolated molecules are circular genomes, about which little is known. (It is known that HSV circularizes on infection.) There are at least 70 open reading frames in the genome.
Evolution
Commonality with HSV1 and HSV2 indicates a common ancestor; five genes (out of about 70) do not have corresponding HSV genes. Relation with other human herpes viruses is less strong, but many homologues and conserved gene blocks are still found.
There are at least five clades of this virus. Clades 1 and 3 include European/North American strains; clade 2 comprises Asian strains, especially from Japan; and clade 5 appears to be based in India. Clade 4 includes some strains from Europe, but its geographic origins need further clarification. There are also four genotypes that do not fit into these clades. Allocation of VZV strains to clades requires sequencing of the whole virus genome; practically all molecular epidemiological data on the global distribution of VZV strains have been obtained by targeted sequencing of selected regions.
Phylogenetic analysis of VZV genomic sequences resolves wild-type strains into nine genotypes (E1, E2, J, M1, M2, M3, M4, VIII and IX). Complete sequences for M3 and M4 strains are unavailable, but targeted analyses of representative strains suggest they are stable, circulating VZV genotypes. Sequence analysis of VZV isolates identified both shared and specific markers for every genotype and validated a unified VZV genotyping strategy. Despite high genotype diversity no evidence for intra-genotypic recombination was observed. Five of seven VZV genotypes were reliably discriminated using only four single nucleotide polymorphisms (SNP) present in ORF22, and the E1 and E2 genotypes were resolved using SNP located in ORF21, ORF22 or ORF50. Sequence analysis of 342 clinical varicella and zoster specimens from 18 European countries identified the following distribution of VZV genotypes: E1, 221 (65%); E2, 87 (25%); M1, 20 (6%); M2, 3 (1%); M4, 11 (3%). No M3 or J strains were observed. Of 165 clinical varicella and zoster isolates from Australia and New Zealand typed using this approach, 67 of 127 eastern Australian isolates were E1, 30 were E2, 16 were J, 10 were M1, and 4 were M2; 25 of 38 New Zealand isolates were E1, 8 were E2, and 5 were M1.
Synonymous and nonsynonymous mutation rates among the herpesviruses have been estimated at 1 × 10−7 and 2.7 × 10−8 mutations/site/year, respectively, based on the highly conserved gB gene.
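For a sense of scale, the expected number of substitutions is the rate multiplied by the number of sites and the elapsed time. The Python sketch below assumes a gB gene length of roughly 2,600 base pairs (an assumed round figure, not given in this article):

# Expected substitutions = rate (per site per year) x sites x years.
SYN_RATE, NONSYN_RATE = 1e-7, 2.7e-8     # rates quoted above
GB_SITES = 2600                          # assumed approximate gB length (bp)
YEARS = 1000                             # one millennium

print(f"synonymous: {SYN_RATE * GB_SITES * YEARS:.2f} expected substitutions")
print(f"nonsynonymous: {NONSYN_RATE * GB_SITES * YEARS:.2f}")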
Treatment
Within the human body, VZV can be treated by a number of drugs and therapeutic agents, including acyclovir for chickenpox, famciclovir and valaciclovir for shingles, zoster-immune globulin (ZIG), and vidarabine. Acyclovir is frequently used as the drug of choice in primary VZV infections, and beginning its administration early can significantly shorten the duration of any symptoms. However, reaching an effective serum concentration of acyclovir typically requires intravenous administration, making its use more difficult outside of a hospital.
Vaccination
A live attenuated VZV Oka/Merck strain vaccine is available and is marketed in the United States under the trade name Varivax. It was developed by Merck, Sharp & Dohme in the 1980s from the Oka strain virus isolated and attenuated by Michiaki Takahashi and colleagues in the 1970s. It was submitted to the US Food and Drug Administration (FDA) for approval in 1990 and was approved in 1995. Since then, it has been added to the recommended vaccination schedules for children in Australia, the United States, and many other countries. Varicella vaccination has raised concerns in some that the immunity induced by the vaccine may not be lifelong, possibly leaving adults vulnerable to more severe disease as the immunity from their childhood immunization wanes. Vaccine coverage in the United States in the population recommended for vaccination is approaching 90%, with concomitant reductions in the incidence of varicella cases and hospitalizations and deaths due to VZV. So far, clinical data has proved that the vaccine is effective for over ten years in preventing varicella infection in healthy individuals, and when breakthrough infections do occur, illness is typically mild. In 2006, the CDC's Advisory Committee on Immunization Practices (ACIP) recommended a second dose of vaccine before school entry to ensure the maintenance of high levels of varicella immunity.
In 2006, the FDA approved Zostavax for the prevention of shingles. Zostavax is a more concentrated formulation of the Varivax vaccine, designed to elicit an immune response in older adults whose immunity to VZV wanes with advancing age. A systematic review by Cochrane (updated in 2023) shows that Zostavax reduces the incidence of shingles by almost 50%.
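As a rough back-of-envelope illustration (not a figure from the Cochrane review), combining the Swedish incidence estimate quoted earlier for people aged 50 or older with the roughly 50% relative risk reduction yields a number needed to vaccinate:

# Illustrative only; combines two figures quoted elsewhere in this article.
annual_incidence = 577 / 100_000   # zoster cases per person-year, age 50+
relative_risk_reduction = 0.5      # ~50% reduction reported for Zostavax

absolute_risk_reduction = annual_incidence * relative_risk_reduction
nnv = 1 / absolute_risk_reduction  # vaccinations per case prevented per year
print(f"number needed to vaccinate: ~{nnv:.0f} per case prevented per year")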
Shingrix is a subunit vaccine (HHV3 glycoprotein E) developed by GlaxoSmithKline which was approved in the United States by the FDA in October 2017. The ACIP recommended Shingrix for adults over the age of 50, including those who have already received Zostavax. The committee voted that Shingrix is preferred over Zostavax for the prevention of zoster and related complications because phase 3 clinical data showed vaccine efficacy of >90% against shingles across all age groups, as well as sustained efficacy over a four-year follow-up. Unlike Zostavax, which is given as a single shot, Shingrix is given as two intramuscular doses, two to six months apart. This vaccine has been shown to be immunogenic and safe in adults with human immunodeficiency virus.
History
Chickenpox-like rashes were recognized and described by ancient civilizations; the relationship between zoster and chickenpox was not realized until 1888. In 1943, the similarity between virus particles isolated from the lesions of zoster and those from chickenpox was noted. In 1974 the first chickenpox vaccine was introduced.
The varicella zoster virus was first isolated by Evelyn Nicol while she was working at Cleveland City Hospital. Thomas Huckle Weller also isolated the virus and found evidence that the same virus was responsible for both chickenpox and herpes zoster.
The etymology of the name of the virus comes from the two diseases it causes, varicella and herpes zoster. The word varicella is possibly derived from variola, a term for smallpox coined by Rudolph Augustin Vogel in 1764.
| Biology and health sciences | Specific viruses | Health |
300567 | https://en.wikipedia.org/wiki/Hidden%20camera | Hidden camera | A hidden camera or spy camera is a camera used to photograph or record subjects, often people, without their knowledge. The camera may be considered "hidden" because it is not visible to the subject being filmed, or is disguised as another object. Hidden cameras are often considered a surveillance tool.
The term "hidden camera" is commonly used when subjects are unaware that they are being recorded, usually lacking their knowledge and consent; the term "spy camera" is generally used when the subject would object to being recorded if they were aware of the camera's presence. In contrast, the phrase "security camera" refers to cameras that are visible and/or are accompanied by a warning notice of their presence, so the subject is aware of the camera's presence and knows they are being filmed.
The use of hidden cameras raises personal privacy issues. There may be legal aspects to consider, depending on the jurisdiction in which they are used.
Description
A hidden camera can be wired or wireless. Hidden cameras may be connected, by cable or wirelessly, to a viewing or recording device, such as a television, computer, videocassette recorder, network video recorder, digital video recorder, memory card, or another data storage medium. They may also store their images or recordings online, such as through a livestream. Hidden video cameras may or may not have audio recording capabilities. Hidden cameras may be activated manually, remotely, or through motion detection.
A hidden camera may not be visible to the subject, for example, because it is fitted with a long-focus lens and located beyond the view of the subject, or because it is obscured or hidden by an object, such as a one-way mirror. Hidden cameras can be built into a wide variety of items, ranging from electronics (television sets, smoke detectors, clocks, motion detectors, mobile phones, personal computers) to everyday objects where electronics are not expected to be found (stationery, plants, glasses, clothing, street lights).
Use
Common applications for hidden cameras are property security, personal surveillance, photography, or entertainment purposes, though they may also be used for espionage or surveillance by law enforcement, intelligence agencies, investigative journalists, corporations, or other entities. They may also be used for illegal activity, such as criminal scope-outs, stalking, or voyeurism.
Hidden cameras may be installed within common household objects for parents to monitor and record the activities of nannies and sometimes the children themselves. These hidden cameras are commonly referred to as "nanny cams". The use of nanny cams can be a subject of controversy. For example, a 2003 criminal case in Florida, involving a nanny who was allegedly caught by a nanny cam violently shaking a baby, was thrown out in 2006 when the video was considered "worthless evidence"; however, this was due to issues regarding video quality, not legality, and several earlier cases used clearer nanny cam footage as evidence. Some hidden camera television shows have also led to lawsuits or the cancellation of episodes brought about by people who were caught in set-ups that they found unpleasant.
Hidden cameras are sometimes placed in holiday rental apartments such as those advertised on Airbnb. Questions have been raised about the safety and privacy of holidaymakers in these circumstances.
In media
Hidden cameras are sometimes used in reality television and social media, where they are used to catch participants in unusual or absurd situations. Participants will either know they will be filmed, but not always exactly when or where; or they will not know they have been filmed until later, at which point they may sign a release or give consent to the footage being produced for a show. This latter subgenre of unwitting participants began in the 1940s with Allen Funt's Candid Microphone theatrical short films.
Legal issues
South Korea
In South Korea, hidden cameras (known as molka in Korean) proliferated in the 2010s and enabled the spread of voyeuristic images and videos. The term molka can refer both to the actual cameras and to the footage posted online.
United Kingdom
The use of hidden cameras is generally permitted under UK law, if used in a legal manner and towards legitimate ends. Individuals may use covert surveillance in their own home, in the workplace for employee monitoring, outside of a domestic or commercial property for security purposes, and in other security situations where there may be a need to do so. Provisions of the Data Protection Act and the Human Rights Act may affect the use of hidden cameras.
In any type of covert surveillance, footage should only be used for the purpose for which it has been taken, which must be a legitimate security reason. The person in possession of the footage is responsible for its use, and must only retain footage for as long as it is reasonably needed. It is not permitted to release the footage to third parties except when there is a legal necessity.
It is illegal under UK law to deploy covert cameras in areas where individuals would have an expectation of privacy, such as bathrooms, changing rooms, and locker rooms. It is also illegal to place hidden cameras in someone else's home or on someone else's property.
United States
In the United States, the purchase, ownership, and use of hidden cameras and nanny cams is generally considered legal in all 50 states. However, U.S. Code Title 18, Chapter 119, Section 2512 prohibits the interception of oral communication by "surreptitious manner" such as a hidden recording device, and so most hidden video cameras are not available with audio recording. Additionally, it is illegal in 13 states to record audio without express or written consent of the nanny being recorded. Despite this, some hidden cameras are still sold in the United States with audio recording capabilities, though their use is illegal and their recordings cannot legally be used as evidence.
| Technology | Photography | null |
300628 | https://en.wikipedia.org/wiki/Vegetative%20state | Vegetative state | A vegetative state (VS) or post-coma unresponsiveness (PCU) is a disorder of consciousness in which patients with severe brain damage are in a state of partial arousal rather than true awareness. After four weeks in a vegetative state, the patient is classified as being in a persistent vegetative state (PVS). This diagnosis is classified as a permanent vegetative state some months (three in the US and six in the UK) after a non-traumatic brain injury or one year after a traumatic injury. The term unresponsive wakefulness syndrome may be used alternatively, as "vegetative state" has some negative connotations among the public.
Definition
There are several definitions that vary by technical versus layman's usage. There are different legal implications in different countries.
Medical definition
Per the definition of the British Royal College of Physicians of London, "a wakeful unconscious state that lasts longer than a few weeks is referred to as a persistent (or 'continuing') vegetative state".
"Vegetative state"
The vegetative state is a chronic or long-term condition. This condition differs from a coma: a coma is a state that lacks both awareness and wakefulness. Patients in a vegetative state may have awoken from a coma, but still have not regained awareness. In the vegetative state patients can open their eyelids occasionally and demonstrate sleep-wake cycles, but completely lack cognitive function. The vegetative state is also called a "coma vigil". The chances of regaining awareness diminish considerably as the time spent in the vegetative state increases.
"Persistent vegetative state"
Persistent vegetative state is the standard usage (except in the UK) for a medical diagnosis, made after numerous neurological and other tests, that due to extensive and irreversible brain damage a patient is highly unlikely ever to achieve higher functions above a vegetative state. This diagnosis does not mean that a doctor has diagnosed improvement as impossible, but does open the possibility, in the US, for a judicial request to end life support. Informal guidelines hold that this diagnosis can be made after four weeks in a vegetative state. US caselaw has shown that successful petitions for termination have been made after a diagnosis of a persistent vegetative state, although in some cases, such as that of Terri Schiavo, such rulings have generated widespread controversy.
In the UK, the term is discouraged in favor of two more precisely defined terms that have been strongly recommended by the Royal College of Physicians (RCP). These guidelines recommend using the term continuous vegetative state for patients in a vegetative state for more than four weeks. A medical determination of a permanent vegetative state can be made if, after exhaustive testing and a customary 12 months of observation, a medical diagnosis is made that it is impossible by any informed medical expectations that the mental condition will ever improve. Hence, a "continuous vegetative state" in the UK may remain the diagnosis in cases that would be called "persistent" in the US or elsewhere.
While the actual testing criteria for a diagnosis of "permanent" in the UK are quite similar to the criteria for a diagnosis of "persistent" in the US, the semantic difference imparts in the UK a legal presumption that is commonly used in court applications for ending life support. The UK diagnosis is generally only made after 12 months of observing a static vegetative state. A diagnosis of a persistent vegetative state in the US usually still requires a petitioner to prove in court that recovery is impossible by informed medical opinion, while in the UK the "permanent" diagnosis already gives the petitioner this presumption and may make the legal process less time-consuming.
In common usage, the "permanent" and "persistent" definitions are sometimes conflated and used interchangeably. However, the acronym "PVS" is intended to define a "persistent vegetative state", without necessarily the connotations of permanence, and is used as such throughout this article. Bryan Jennett, who originally coined the term "persistent vegetative state", has now recommended using the UK division between continuous and permanent in his book The Vegetative State, arguing that "the 'persistent' component of this term ... may seem to suggest irreversibility".
The Australian National Health and Medical Research Council has suggested "post coma unresponsiveness" as an alternative term for "vegetative state" in general.
Lack of legal clarity
Unlike brain death, permanent vegetative state (PVS) is recognized by statute law as death in only a very few legal systems. In the US, courts have required petitions before termination of life support that demonstrate that any recovery of cognitive functions above a vegetative state is assessed as impossible by authoritative medical opinion. In England, Wales and Scotland, the legal precedent for withdrawal of clinically assisted nutrition and hydration in cases of patients in a PVS was set in 1993 in the case of Tony Bland, who sustained catastrophic anoxic brain injury in the 1989 Hillsborough disaster. An application to the Court of Protection is no longer required before nutrition and hydration can be withdrawn or withheld from PVS (or 'minimally conscious' – MCS) patients.
This legal grey area has led to vocal advocates arguing that those in PVS should be allowed to die. Others are equally determined that, if recovery is at all possible, care should continue. The existence of a small number of diagnosed PVS cases that have eventually resulted in improvement makes defining recovery as "impossible" particularly difficult in a legal sense. This legal and ethical issue raises questions about autonomy, quality of life, appropriate use of resources, the wishes of family members, and professional responsibilities.
Signs and symptoms
Most PVS patients are unresponsive to external stimuli and their conditions are associated with different levels of consciousness. Some level of consciousness means a person can still respond, in varying degrees, to stimulation. A person in a coma, however, cannot. In addition, PVS patients often open their eyes in response to feeding, which has to be done by others; they are capable of swallowing, whereas patients in a coma keep their eyes closed.
Cerebral cortical function (e.g. communication, thinking, purposeful movement, etc.) is lost while brainstem functions (e.g. breathing, maintaining circulation and hemodynamic stability, etc.) are preserved. Non-cognitive upper brainstem functions such as eye-opening, occasional vocalizations (e.g. crying, laughing), maintaining normal sleep patterns, and spontaneous non-purposeful movements often remain intact.
PVS patients' eyes might be in a relatively fixed position, or track moving objects, or move in a disconjugate (i.e., completely unsynchronized) manner. They may experience sleep-wake cycles, or be in a state of chronic wakefulness. They may exhibit some behaviors that can be construed as arising from partial consciousness, such as grinding their teeth, swallowing, smiling, shedding tears, grunting, moaning, or screaming without any apparent external stimulus.
Individuals in PVS are seldom on any life-sustaining equipment other than a feeding tube because the brainstem, the center of vegetative functions (such as heart rate and rhythm, respiration, and gastrointestinal activity) is relatively intact.
Recovery
Many people emerge spontaneously from a vegetative state within a few weeks. The chances of recovery depend on the extent of injury to the brain and the patient's age – younger patients having a better chance of recovery than older patients. A 1994 report found that of those who were in a vegetative state a month after a trauma, 54% had regained consciousness by a year after the trauma, whereas 28% had died and 18% were still in the vegetative state. For non-traumatic injuries such as strokes, only 14% had recovered consciousness at one year, 47% had died, and 39% were still vegetative. Patients who were vegetative six months after the initial event were much less likely to have recovered consciousness a year after the event than in the case of those who were simply reported vegetative at one month. A New Scientist article from 2000 gives a pair of graphs showing changes of patient status during the first 12 months after head injury and after incidents depriving the brain of oxygen. After a year, the chances that a PVS patient will regain consciousness are very low and most patients who do recover consciousness experience significant disability. The longer a patient is in a PVS, the more severe the resulting disabilities are likely to be. Rehabilitation can contribute to recovery, but many patients never progress to the point of being able to take care of themselves.
The medical literature also includes case reports of the recovery of a small number of patients following the removal of assisted respiration with cold oxygen. The researchers found that in many nursing homes and hospitals unheated oxygen is given to non-responsive patients via tracheal intubation. This bypasses the warming of the upper respiratory tract and causes a chilling of aortic blood and chilling of the brain which the authors believe may contribute to the person's nonresponsive state. The researchers describe a small number of cases in which removal of the chilled oxygen was followed by recovery from the PVS and recommend either warming of oxygen with a heated nebulizer or removal of the assisted oxygen if it is no longer needed. The authors further recommend additional research to determine if this chilling effect may either delay recovery or even may contribute to brain damage.
There are two dimensions of recovery from a persistent vegetative state: recovery of consciousness and recovery of function. Recovery of consciousness can be verified by reliable evidence of awareness of self and the environment, consistent voluntary behavioral responses to visual and auditory stimuli, and interaction with others. Recovery of function is characterized by communication, the ability to learn and to perform adaptive tasks, mobility, self-care, and participation in recreational or vocational activities. Recovery of consciousness may occur without functional recovery, but functional recovery cannot occur without recovery of consciousness.
Causes
There are three main causes of PVS (persistent vegetative state):
Acute traumatic brain injury
Non-traumatic: neurodegenerative disorder or metabolic disorder of the brain
Severe congenital abnormality of the central nervous system
Potential causes of PVS are:
Meningitis
Encephalitis
Increased intracranial pressure
Brain tumor
Brain abscess
Ischemic stroke
Intracerebral hemorrhage
Subarachnoid hemorrhage
Brain herniation
Hypoxic-anoxic brain injury
Cardiac arrest
Respiratory arrest
Cardiac arrhythmia
Atrial fibrillation
Ventricular fibrillation
Ventricular tachycardia
Myocardial infarction
Heart failure
Drowning
Myocarditis
Pericarditis
Cardiogenic shock
Electrocution
Choking
Toxins such as uremia, ethanol, atropine, opiates, lead, dimethylmercury, endrin, parathion, and colloidal silver
Physical trauma: Concussion, contusion, etc.
Seizure, both nonconvulsive status epilepticus and postconvulsive state (postictal state)
Electrolyte imbalance, which involves hypoxia, hyponatremia, hypernatremia, hypomagnesemia, hypoglycemia, hyperglycemia, hypercalcemia, and hypocalcemia
Postinfectious: Acute disseminated encephalomyelitis (ADEM)
Endocrine disorders such as adrenal insufficiency and thyroid disorders
Degenerative and metabolic diseases including urea cycle disorders, Reye syndrome, and mitochondrial disease
Systemic infection and sepsis
Hepatic encephalopathy
In addition, some authors suggest that doctors sometimes use the mnemonic device AEIOU-TIPS to recall portions of the differential diagnosis: Alcohol ingestion and acidosis, epilepsy and encephalopathy, infection, opiates, uremia, trauma, insulin overdose or inflammatory disorders, poisoning and psychogenic causes, and shock.
Diagnosis
Despite converging agreement about the definition of persistent vegetative state, recent reports have raised concerns about the accuracy of diagnosis in some patients, and the extent to which, in a selection of cases, residual cognitive function may remain undetected in patients diagnosed as being in a persistent vegetative state. Objective assessment of residual cognitive function can be extremely difficult, as motor responses may be minimal, inconsistent, and difficult to document in many patients, or may be undetectable in others because no cognitive output is possible. In recent years, a number of studies have demonstrated an important role for functional neuroimaging in the identification of residual cognitive function in persistent vegetative state; this technology is providing new insights into cerebral activity in patients with severe brain damage. Such studies, when successful, may be particularly useful where there is concern about the accuracy of the diagnosis and the possibility that residual cognitive function has remained undetected.
Diagnostic experiments
Researchers have begun to use functional neuroimaging studies to study implicit cognitive processing in patients with a clinical diagnosis of persistent vegetative state. Activations in response to sensory stimuli with positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically, clinically, and theoretically complex and needs careful quantitative analysis and interpretation.
For example, PET studies have identified residual cognitive function in patients in a persistent vegetative state. That is, external stimulation, such as a painful stimulus, still activates "primary" sensory cortices in these patients, but these areas are functionally disconnected from the "higher order" associative areas needed for awareness. These results show that parts of the cortex are indeed still functioning in "vegetative" patients.
In addition, other PET studies have revealed preserved and consistent responses in predicted regions of auditory cortex in response to intelligible speech stimuli. Moreover, a preliminary fMRI examination revealed partially intact responses to semantically ambiguous stimuli, which are known to tap higher aspects of speech comprehension.
Furthermore, several studies have used PET to assess the central processing of noxious somatosensory stimuli in patients in PVS. Noxious somatosensory stimulation activated midbrain, contralateral thalamus, and primary somatosensory cortex in each and every PVS patient, even in the absence of detectable cortical evoked potentials. In conclusion, somatosensory stimulation of PVS patients, at intensities that elicited pain in controls, resulted in increased neuronal activity in primary somatosensory cortex, even if resting brain metabolism was severely impaired. However, this activation of primary cortex seems to be isolated and dissociated from higher-order associative cortices.
Also, there is evidence of partially functional cerebral regions in catastrophically injured brains. To study five patients in PVS with different behavioral features, researchers employed PET, MRI and magnetoencephalographic (MEG) responses to sensory stimulation. In three of the five patients, co-registered PET/MRI correlated areas of relatively preserved brain metabolism with isolated fragments of behavior. Two patients had had anoxic injuries and demonstrated marked decreases in overall cerebral metabolism to 30–40% of normal. Two other patients with non-anoxic, multifocal brain injuries demonstrated several isolated brain regions with higher metabolic rates that ranged up to 50–80% of normal. Nevertheless, their global metabolic rates remained <50% of normal.

MEG recordings from three PVS patients provided clear evidence for the absence, abnormality or reduction of evoked responses. Despite major abnormalities, however, these data also provided evidence for localized residual activity at the cortical level. Each patient partially preserved restricted sensory representations, as evidenced by slow evoked magnetic fields and gamma band activity. In two patients, these activations correlated with isolated behavioral patterns and metabolic activity. The remaining active regions identified in the three PVS patients with behavioral fragments appear to consist of segregated corticothalamic networks that retain connectivity and partial functional integrity.

A single patient who sustained severe injury to the tegmental mesencephalon and paramedian thalamus showed widely preserved cortical metabolism, and a global average metabolic rate of 65% of normal. The relatively high preservation of cortical metabolism in this patient defines the first functional correlate of clinical–pathological reports associating permanent unconsciousness with structural damage to these regions. The specific patterns of preserved metabolic activity identified in these patients reflect novel evidence of the modular nature of individual functional networks that underlie conscious brain function. The variations in cerebral metabolism in chronic PVS patients indicate that some cerebral regions can retain partial function in catastrophically injured brains.
Misdiagnoses
Misdiagnosis of PVS is common. In one study of 40 patients in the United Kingdom who had been diagnosed with PVS, 43% were considered to have been misdiagnosed, and another 33% had recovered while the study was underway. Some PVS cases may actually be a misdiagnosis of patients being in an undiagnosed minimally conscious state. Since the exact diagnostic criteria of the minimally conscious state were only formulated in 2002, there may be chronic patients diagnosed as PVS before the secondary notion of the minimally conscious state became known.
Whether there is any conscious awareness in a patient's vegetative state is a prominent issue. Three completely different aspects of this should be distinguished. First, some patients can be conscious simply because they are misdiagnosed (see above). In fact, they are not in vegetative states. Second, sometimes a patient was correctly diagnosed but is then examined during the early stages of recovery. Third, perhaps some day the notion itself of vegetative states will change so as to include elements of conscious awareness. Inability to disentangle these three example cases causes confusion. An example of such confusion is the response to an experiment using functional magnetic resonance imaging which revealed that a woman diagnosed with PVS was able to activate predictable portions of her brain in response to the tester's requests that she imagine herself playing tennis or moving from room to room in her house. The brain activity in response to these instructions was indistinguishable from that of healthy patients.
In 2010, Martin Monti and fellow researchers, working at the MRC Cognition and Brain Sciences Unit at the University of Cambridge, reported in an article in the New England Journal of Medicine that some patients in persistent vegetative states responded to verbal instructions by displaying different patterns of brain activity on fMRI scans. Five out of a total of 54 diagnosed patients were apparently able to respond when instructed to think about one of two different physical activities. One of these five was also able to "answer" yes or no questions, again by imagining one of these two activities. It is unclear, however, whether the fact that portions of the patients' brains light up on fMRI could help these patients assume their own medical decision making.
In November 2011, a publication in The Lancet presented bedside EEG apparatus and indicated that its signal could be used to detect awareness in three of 16 patients diagnosed in the vegetative state.
Treatment
Currently no treatment for vegetative state exists that would satisfy the efficacy criteria of evidence-based medicine. Several methods have been proposed which can roughly be subdivided into four categories: pharmacological methods, surgery, physical therapy, and various stimulation techniques. Pharmacological therapy mainly uses activating substances such as tricyclic antidepressants or methylphenidate. Mixed results have been reported using dopaminergic drugs such as amantadine and bromocriptine and stimulants such as dextroamphetamine. Surgical methods such as deep brain stimulation are used less frequently due to the invasiveness of the procedures. Stimulation techniques include sensory stimulation, sensory regulation, music and musicokinetic therapy, social-tactile interaction, and cortical stimulation.
Zolpidem
There is limited evidence that the hypnotic drug zolpidem has an effect. The results of the few scientific studies that have been published so far on the effectiveness of zolpidem have been contradictory.
Epidemiology
In the United States, it is estimated that there may be between 15,000 and 40,000 patients who are in a persistent vegetative state, but due to poor nursing home records exact figures are hard to determine.
History
The syndrome was first described in 1940 by Ernst Kretschmer, who called it apallic syndrome. The term persistent vegetative state was coined in 1972 by Scottish neurosurgeon Bryan Jennett and American neurologist Fred Plum to describe a syndrome that seemed to have been made possible by medicine's increased capacities to keep patients' bodies alive.
Society and culture
Ethics and policy
An ongoing debate exists as to how much care, if any, patients in a persistent vegetative state should receive in health systems plagued by limited resources. In a case before the New Jersey Superior Court, Betancourt v. Trinitas Hospital, a community hospital sought a ruling that dialysis and CPR for such a patient constitutes futile care. An American bioethicist, Jacob M. Appel, argued that any money spent treating PVS patients would be better spent on other patients with a higher likelihood of recovery.
The patient died naturally prior to a decision in the case, resulting in the court finding the issue moot.
In 2010, British and Belgian researchers reported in an article in the New England Journal of Medicine that some patients in persistent vegetative states actually had enough consciousness to "answer" yes or no questions on fMRI scans. However, it is unclear whether the fact that portions of the patients' brains light up on fMRI will help these patients assume their own medical decision making. Professor Geraint Rees, Director of the Institute of Cognitive Neuroscience at University College London, responded to the study by observing that, "As a clinician, it would be important to satisfy oneself that the individual that you are communicating with is competent to make those decisions. At the moment it is premature to conclude that the individual able to answer 5 out of 6 yes/no questions is fully conscious like you or I." In contrast, Jacob M. Appel of the Mount Sinai Hospital told the Telegraph that this development could be a welcome step toward clarifying the wishes of such patients. Appel stated: "I see no reason why, if we are truly convinced such patients are communicating, society should not honour their wishes. In fact, as a physician, I think a compelling case can be made that doctors have an ethical obligation to assist such patients by removing treatment. I suspect that, if such individuals are indeed trapped in their bodies, they may be living in great torment and will request to have their care terminated or even active euthanasia."
Notable cases
Tony Bland – first patient in English legal history to be allowed to die
Paul Brophy – first American to die after court-authorization
Sunny von Bülow – lived almost 28 years in a persistent vegetative state until her death
Gustavo Cerati – Argentine singer-songwriter, composer and producer who died after four years in a chronic disorder of consciousness state
Prichard Colón – Puerto Rican former professional boxer and gold medal winner who spent years in a vegetative state after a bout
Nancy Cruzan – American woman involved in a landmark United States Supreme Court case
Gary Dockery – American police officer who entered, emerged and later reentered a persistent vegetative state
Eluana Englaro – Italian woman from Lecco whose life was ended after a legal case after spending 17 years in a vegetative state
Elaine Esposito – American woman who was a previous record holder for having spent 37 years in a chronic disorder of consciousness state
Lia Lee – Hmong girl who spent 26 years in a vegetative state after a seizure, and was the subject of a 1997 book by Anne Fadiman
Martin Pistorius – South African man who is a rare example of a survivor: his state progressed to a minimally conscious state after 3 years, to locked-in syndrome after another 4 years, and he fully emerged after a further 5 years. He is now a web designer, developer, and author. In 2011, he wrote a book called Ghost Boy, in which he describes his many years in a state of chronic disorder of consciousness.
Annie Shapiro – Canadian woman who is another rare example of a survivor; she is known to have been unable to think for the first 2 years of her 29 years in a comatose state. In 1992 she awakened fully recovered and lived her last 10 years peacefully. This is the longest known time a person has been in a coma and then woken up.
Haleigh Poutre
Karen Ann Quinlan
Terri Schiavo
Aruna Shanbaug – Indian woman in persistent vegetative state for 42 years until her death. Owing to her case, the Supreme Court of India allowed passive euthanasia in the country.
Ariel Sharon
Chayito Valdez
Vice Vukov
Helga Wanglie
Otto Warmbier
| Biology and health sciences | Miscellaneous | null |
300659 | https://en.wikipedia.org/wiki/Midnight%20sun | Midnight sun | Midnight sun, also known as polar day, is a natural phenomenon that occurs in the summer months in places north of the Arctic Circle or south of the Antarctic Circle, when the Sun remains visible at the local midnight. When midnight sun is seen in the Arctic, the Sun appears to move from left to right. In Antarctica, the equivalent apparent motion is from right to left. This occurs at latitudes ranging from approximately 65°44' to exactly 90° north or south, and does not stop exactly at the Arctic Circle or the Antarctic Circle, due to refraction.
The opposite phenomenon, polar night, occurs in winter, when the Sun stays below the horizon throughout the day.
Geography
Because there are no permanent human settlements south of the Antarctic Circle, apart from research stations, the countries and territories whose populations experience midnight sun are limited to those crossed by the Arctic Circle: Canada (Yukon, Nunavut, and Northwest Territories), Finland, Greenland, Iceland, Norway, Russia, Sweden, and the United States (state of Alaska).
The largest city in the world north of the Arctic Circle, Murmansk, Russia, experiences midnight sun from 22 May to 22 July (62 days).
A quarter of Finland's territory lies north of the Arctic Circle, and at the country's northernmost point the Sun does not set at all for 72 days during summer.
In Svalbard, Norway, the northernmost inhabited region of Europe, there is no sunset from approximately 19 April to 23 August. The extreme sites are the poles, where the Sun can be continuously visible for half the year. The North Pole has midnight sun for about 6 months, from approximately 18 March to 24 September. The South Pole has midnight sun from approximately 20 September to 23 March (also about 6 months).
Polar circle proximity
Due to atmospheric refraction, and also because the Sun is a disc rather than a point in the sky, midnight sun may be experienced at latitudes slightly south of the Arctic Circle or north of the Antarctic Circle, though not exceeding one degree (depending on local conditions). For example, Iceland is known for its midnight sun, even though most of it (Grímsey is the exception) is slightly south of the Arctic Circle. For the same reasons, the period of sunlight at the poles is slightly longer than six months. Even the northern extremities of the United Kingdom (and places at similar latitudes, such as Saint Petersburg) experience twilight throughout the night in the northern sky at around the summer solstice.
Places sufficiently close to the poles, such as Alert, Nunavut, experience times where it does not get entirely dark at night yet the Sun does not rise either, combining effects of midnight sun and polar night, reaching civil twilight during the "day" and astronomical twilight at "night".
White nights
Locations where the Sun remains less than 6 (or 7) degrees below the horizon, between about 60°34′ (or 59°34′) latitude and the polar circle, experience midnight twilight instead of midnight sun, so that daytime activities, such as reading, are still possible without artificial light on a clear night. This happens around the summer solstice in each hemisphere. The lowest latitude to experience midnight sun without a golden hour is about 72°34′ North or South.
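The 60°34′ and 59°34′ thresholds quoted above can be recovered from a standard approximation: on the solstice, the Sun's altitude at local solar midnight is roughly latitude + solar declination − 90°, with the declination equal to the axial tilt (about 23.44°). The sketch below applies that formula; the function name is illustrative, and refraction is ignored.

```python
# A minimal sketch, assuming the midnight solar altitude on the solstice
# is approximately latitude + declination - 90 degrees (no refraction).
AXIAL_TILT = 23.44  # solar declination at the solstice, in degrees

def min_latitude_for_midnight_twilight(depression_deg):
    """Lowest latitude at which the midnight Sun stays within
    depression_deg degrees below the horizon at the solstice."""
    return 90.0 - AXIAL_TILT - depression_deg

print(f"{min_latitude_for_midnight_twilight(6):.2f}")  # 60.56, i.e. ~60 deg 34'
print(f"{min_latitude_for_midnight_twilight(7):.2f}")  # 59.56, i.e. ~59 deg 34'
```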
White Nights have become a common symbol of Saint Petersburg, Russia, where they occur from about 11 June to 1 July, and the last 10 days of June are celebrated with cultural events known as the White Nights Festival. The phenomenon carries similar significance for Fairbanks, Alaska, where an annual Midnight Sun Game baseball competition has been contested since 1906 in the twilight surrounding midnight on June 21.
The northernmost tip of Antarctica also experiences white nights near the Southern Hemisphere summer solstice.
Explanation
Since the axial tilt of Earth is considerable (23 degrees, 26 minutes, 21.41196 seconds), at high latitudes the Sun does not set in summer; rather, it remains continuously visible for one day during the summer solstice at the polar circle, for several weeks only closer to the pole, and for six months at the pole. At extreme latitudes, midnight sun is usually referred to as polar day.
At the poles themselves, the Sun rises and sets only once each year on the equinoxes. During the six months that the Sun is above the horizon, it spends the days appearing to continuously move in circles around the observer, gradually spiraling higher and reaching its highest circuit of the sky at the summer solstice, before beginning to sink lower, setting just after the autumnal equinox.
Time zones and daylight saving time
The term "midnight sun" refers to the consecutive 24-hour periods of sunlight experienced north of the Arctic Circle and south of the Antarctic Circle. Other phenomena are sometimes referred to as "midnight sun", but they are caused by time zones and the observance of daylight saving time. For instance, in Fairbanks, Alaska, which is south of the Arctic Circle, the Sun sets at 12:47a.m. at the summer solstice. This is because Fairbanks is 51 minutes (1 hour and 51 minutes at Daylight Savings Time) ahead of its idealized time zone (as most of the state is in one time zone) and Alaska observes daylight saving time. (Fairbanks is at about 147.72 degrees west, corresponding to UTC−9 hours 51 minutes, and is on UTC−9 in winter.) This means that solar culmination occurs at about 12:51p.m. instead of at 12 noon. Also in Fairbanks, Alaska, solar midnight occurs at 01:51 a.m. local time.
If a precise moment for the genuine "midnight sun" is required, the observer's longitude, the local civil time, and the equation of time must be taken into account. The moment of the Sun's closest approach to the horizon coincides with its passing due north at the observer's position, which occurs only approximately at midnight in general. Each degree of longitude east of the Greenwich meridian makes this moment exactly 4 minutes earlier than midnight as shown on the clock, while each hour that the local civil time is ahead of Coordinated Universal Time (UTC, also known as GMT) makes the moment an hour later. These two effects must be added. Furthermore, the equation of time (which depends on the date) must be taken into account: a positive value on a given date means that the Sun is running slightly ahead of its average position, so the value must be subtracted.
As an example, at the North Cape of Norway at midnight on June 21/22, the longitude of about 25.8 degrees east makes the moment 103.2 minutes earlier by clock time; but the local time, 2 hours ahead of GMT in the summer, makes it 120 minutes later by clock time. The equation of time at that date is −2.0 minutes. Therefore, the Sun's lowest elevation occurs 120 − 103.2 + 2.0 minutes after midnight: at 00:19 Central European Summer Time. On other nearby dates the only thing different is the equation of time, so this remains a reasonable estimate for a considerable period. The Sun's altitude remains within half a degree of the minimum of about 5 degrees for about 45 minutes either side of this time.
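The bookkeeping above is easy to get backwards, so here is a minimal sketch that encodes the three corrections exactly as described (4 minutes earlier per degree of east longitude, later by the clock's offset from UTC, and the equation of time subtracted when positive). The function name and output formatting are illustrative, not from any particular library.

```python
# A minimal sketch of the correction described above; the three terms
# come from the surrounding text, the function itself is illustrative.
def solar_midnight_clock_time(longitude_deg_east, utc_offset_hours,
                              equation_of_time_min):
    """Minutes after clock midnight at which the Sun is lowest."""
    return (utc_offset_hours * 60        # clock runs ahead of UTC: later
            - longitude_deg_east * 4     # each degree east: 4 min earlier
            - equation_of_time_min)      # positive EoT: earlier (subtract)

# North Cape, Norway, around June 21/22: ~25.8 deg E, UTC+2 in summer,
# equation of time about -2.0 minutes.
minutes = solar_midnight_clock_time(25.8, 2, -2.0)
print(f"{int(minutes) // 60:02d}:{round(minutes) % 60:02d}")  # 00:19
```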
Earth's axis stays tilted at about 23.5 degrees as the planet orbits the Sun. From May to July the Northern Hemisphere is inclined toward the Sun, so the parts of Norway that lie within the Arctic Circle receive continuous daylight and the Sun effectively never sets; during this time of year, night does not fall in Norway's Hammerfest.
Duration
The number of days per year with potential midnight sun increases the closer one goes toward either pole. Although approximately defined by the polar circles, in practice midnight sun can be seen some distance outside the polar circle, as described below, and the exact latitudes of the furthest reaches of midnight sun depend on topography and vary slightly from year to year.
Even though at the Arctic Circle the center of the Sun is, by definition and without refraction by the atmosphere, visible during only one summer night, some part of the midnight sun is visible at the Arctic Circle from approximately 12 June until 1 July. This period extends as one travels north: at Cape Nordkinn, Norway, the northernmost point of continental Europe, the midnight sun lasts approximately from 14 May to 29 July. On the Svalbard archipelago further north, it lasts from 20 April to 22 August.
Southern and Northern poles
The periods of polar day and polar night are unequal in the two polar regions because Earth is at perihelion in early January and at aphelion in early July. As a result, the polar day is longer than the polar night in the Northern Hemisphere (at Utqiagvik, Alaska, for example, polar day lasts 84 days, while polar night lasts only 68 days), while in the Southern Hemisphere the situation is reversed: the polar night is longer than the polar day.
Observers at heights appreciably above sea level can experience extended periods of midnight sun as a result of the "dip" of the horizon viewed from altitude.
| Physical sciences | Celestial mechanics | Astronomy |
300664 | https://en.wikipedia.org/wiki/Theropoda | Theropoda | Theropoda (from Ancient Greek θηρίον (theríon) "wild beast" and πούς, ποδός (poús, podós) "foot"), whose members are known as theropods, is an extant dinosaur clade characterized by hollow bones and three toes and claws on each limb. Theropods are generally classed as a group of saurischian dinosaurs. They were ancestrally carnivorous, although a number of theropod groups evolved to become herbivores and omnivores. Theropods first appeared during the Carnian age of the late Triassic period 231.4 million years ago (Ma) and included the majority of large terrestrial carnivores from the Early Jurassic until at least the close of the Cretaceous, about 66 Ma. In the Jurassic, birds evolved from small specialized coelurosaurian theropods, and are today represented by about 11,000 living species.
Biology
Traits
Various synapomorphies for Theropoda have been proposed based on which taxa are included in the group. For example, a 1999 paper by Paul Sereno suggests that theropods are characterized by traits such as an ectopterygoid fossa (a depression around the ectopterygoid bone), an intramandibular joint located within the lower jaw, and extreme internal cavitation within the bones. However, since taxa like Herrerasaurus may not be theropods, these traits may have been more widely distributed among early saurischians rather than being unique to theropods.
Instead, taxa with a higher probability of being within the Theropoda may share more specific traits, such as a prominent promaxillary fenestra, cervical vertebrae with pleurocoels in the anterior part of the centrum leading to a more pneumatic neck, five or more sacral vertebrae, enlargement of the carpal bone, and a distally concave portion of the tibia, among a few other traits found throughout the skeleton. Like the early sauropodomorphs, the second digit in a theropod's hand is enlarged. Theropods also have a very well developed ball and socket joint near their neck and head.
Most theropods belong to the clade Neotheropoda, characterized by the reduction of several foot bones, thus leaving three toed footprints on the ground when they walk (tridactyl feet). Digit V was reduced to a remnant early in theropod evolution and was gone by the late Triassic. Digit I is reduced, generally does not touch the ground, and is greatly reduced in some lineages. They also lack a digit V on their hands and have developed a furcula, otherwise known as a wishbone. Early neotheropods like the coelophysoids have a noticeable kink in the upper jaw known as a subnarial gap. Averostrans are some of the most derived theropods and contain the Tetanurae and Ceratosauria. While some used to consider coelophysoids and ceratosaurs to be within the same group due to features such as a fused hip, later studies showed that it is more likely that these were features ancestral to neotheropods and were lost in basal tetanurans. Averostrans and their close relatives are united via the complete loss of any digit V remnants, fewer teeth in the maxilla, the movement of the tooth row further down the maxilla and a lacrimal fenestra. Averostrans also share features in their hips and teeth.
Diet and teeth
Theropods exhibit a wide range of diets, from insectivores to herbivores and carnivores. Strict carnivory has always been considered the ancestral diet for theropods as a group, and a wider variety of diets was historically considered a characteristic exclusive to the avian theropods (birds). However, discoveries in the late 20th and early 21st centuries showed that a variety of diets existed even in more basal lineages. All early finds of theropod fossils showed them to be primarily carnivorous. Fossilized specimens of early theropods known to scientists in the 19th and early 20th centuries all possessed sharp teeth with serrated edges for cutting flesh, and some specimens even showed direct evidence of predatory behavior. For example, a Compsognathus longipes fossil was found with a lizard in its stomach, and a Velociraptor mongoliensis specimen was found locked in combat with a Protoceratops andrewsi (a type of ornithischian dinosaur).
The first confirmed non-carnivorous fossil theropods found were the therizinosaurs, originally known as "segnosaurs". First thought to be prosauropods, these enigmatic dinosaurs were later proven to be highly specialized, herbivorous theropods. Therizinosaurs possessed large abdomens for processing plant food, and small heads with beaks and leaf-shaped teeth. Further study of maniraptoran theropods and their relationships showed that therizinosaurs were not the only early members of this group to abandon carnivory. Several other lineages of early maniraptorans show adaptations for an omnivorous diet, including seed-eating (some troodontids) and insect-eating (many avialans and alvarezsaurs). Oviraptorosaurs, ornithomimosaurs and advanced troodontids were likely omnivorous as well, and some early theropods (such as Masiakasaurus knopfleri and the spinosaurids) appear to have specialized in catching fish.
Diet is largely deduced by the tooth morphology, tooth marks on bones of the prey, and gut contents. Some theropods, such as Baryonyx, Lourinhanosaurus, ornithomimosaurs, and birds, are known to use gastroliths, or gizzard-stones.
The majority of theropod teeth are blade-like, with serration on the edges, called ziphodont. Others are pachydont or folidont depending on the shape of the tooth or denticles. The morphology of the teeth is distinct enough to tell the major families apart, which indicate different diet strategies. An investigation in July 2015 discovered that what appeared to be "cracks" in their teeth were actually folds that helped to prevent tooth breakage by strengthening individual serrations as they attacked their prey. The folds helped the teeth stay in place longer, especially as theropods evolved into larger sizes and had more force in their bite.
Integument (skin, scales and feathers)
Mesozoic theropods were also very diverse in terms of skin texture and covering. Feathers or feather-like structures (filaments) are attested in most lineages of theropods (see feathered dinosaur). However, outside the coelurosaurs, feathers may have been confined to the young, to smaller species, or to limited parts of the animal. Many larger theropods had skin covered in small, bumpy scales. In some species, these were interspersed with larger scales with bony cores, or osteoderms. This type of skin is best known in the ceratosaur Carnotaurus, which has been preserved with extensive skin impressions.
The coelurosaur lineages most distant from birds had feathers that were relatively short and composed of simple, possibly branching filaments. Simple filaments are also seen in therizinosaurs, which also possessed large, stiffened "quill"-like feathers. More fully feathered theropods, such as dromaeosaurids, usually retain scales only on the feet. Some species may have mixed feathers elsewhere on the body as well. Scansoriopteryx preserved scales near the underside of the tail, and Juravenator may have been predominantly scaly with some simple filaments interspersed. On the other hand, some theropods were completely covered with feathers, such as the troodontid Anchiornis, which even had feathers on the feet and toes.
Based on the relationship between tooth size and skull length, and on a comparison of the degree of tooth wear in non-avian theropods and modern lepidosaurs, it has been concluded that theropods had lips that protected their teeth from the outside. Visually, the snouts of theropods such as Daspletosaurus bore more similarity to those of lizards than to crocodilians, which lack lips.
Size
Tyrannosaurus was for many decades the largest known theropod and the one best known to the general public. Since its discovery, however, a number of other giant carnivorous dinosaurs have been described, including Spinosaurus, Carcharodontosaurus, and Giganotosaurus. The original Spinosaurus specimens (as well as newer fossils described in 2006) support the idea that Spinosaurus was probably 3 meters longer than Tyrannosaurus, though Tyrannosaurus might have been more massive than Spinosaurus. Specimens such as Sue and Scotty are estimated to be among the heaviest theropods known to science. It is still not clear why these animals grew so heavy and bulky compared to the land predators that came before and after them.
The largest extant theropod is the common ostrich, up to 2.74 m (9 ft) tall and weighing between 90 and 130 kg (200 - 290 lb). The smallest non-avian theropod known from adult specimens is the troodontid Anchiornis huxleyi, at 110 grams in weight and 34 centimeters (1 ft) in length. When modern birds are included, the bee hummingbird (Mellisuga helenae) is smallest at 1.9 g and 5.5 cm (2.2 in) long.
Recent theories propose that theropod body size shrank continuously over a period of 50 million years, eventually giving rise to the over 11,000 species of modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.
Growth rates
In order to estimate the growth rates of theropods, scientists need to calculate both the age and the body mass of a dinosaur. Both of these measures can only be calculated through fossilized bone and tissue, so regression analysis and extant animal growth rates as proxies are used to make predictions. Fossilized bones exhibit growth rings that appear as a result of growth or seasonal changes, which can be used to approximate age at the time of death. However, the number of rings in a skeleton can vary from bone to bone, and old rings can also be lost at advanced age, so scientists need to control for these two possible confounding variables.
Body mass is harder to determine, as bone mass represents only a small proportion of an animal's total body mass. One method is to measure the circumference of the femur, which in non-avian theropod dinosaurs has been shown to scale with body mass in much the same way as in quadrupedal mammals, and to use this measurement as a predictor of body weight, as the proportions of long bones like the femur grow proportionately with body mass. The method of using extant animal bone proportion to body mass ratios to make predictions about extinct animals is known as the extant-scaling (ES) approach. A second method, known as the volumetric-density (VD) approach, uses full-scale models of skeletons to make inferences about potential mass. The ES approach is better for wide-range studies including many specimens and does not require as complete a skeleton as the VD approach, but the VD approach allows scientists to better answer physiological questions about the animal, such as locomotion and center of gravity.
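As an illustration of the extant-scaling idea, the sketch below fits a log-log regression of body mass on femur circumference in living animals and then applies it to a fossil measurement. The calibration numbers are invented placeholders chosen only to show the shape of the calculation, not published data, and the function name is illustrative.

```python
# A minimal sketch of the extant-scaling (ES) approach; the calibration
# values below are invented placeholders, not real measurements.
import numpy as np

# Hypothetical extant calibration set: femur circumference (mm), mass (kg)
circumference = np.array([60.0, 95.0, 150.0, 240.0, 400.0])
mass = np.array([20.0, 80.0, 320.0, 1300.0, 6000.0])

# Fit log10(mass) = slope * log10(circumference) + intercept
slope, intercept = np.polyfit(np.log10(circumference), np.log10(mass), 1)

def estimate_mass_kg(femur_circumference_mm):
    """Predict body mass (kg) from a fossil femur circumference (mm)."""
    return 10 ** (slope * np.log10(femur_circumference_mm) + intercept)

# Applying the fitted curve to a hypothetical fossil femur measurement;
# the printed value is only as meaningful as the placeholder data above.
print(f"{estimate_mass_kg(530):.0f} kg")
```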
The current consensus is that non-avian theropods did not exhibit a group-wide growth rate, but instead had varied rates depending on their size. However, all non-avian theropods had faster growth rates than extant reptiles, even when modern reptiles are scaled up to the large size of some non-avian theropods. As body mass increases, the relative growth rate also increases. This trend may be due to the need to reach the size required for reproductive maturity. For example, Microraptor zhaoianus, one of the smallest known theropods with a body mass of about 200 grams, grew at a rate of approximately 0.33 grams per day. A comparable reptile of the same size grows at half of this rate. The growth rates of medium-sized non-avian theropods (100–1000 kg) approximated those of precocial birds, which are much slower than altricial birds. Large theropods (1500–3500 kg) grew even faster, similar to rates displayed by eutherian mammals. The largest non-avian theropods, like Tyrannosaurus rex, had similar growth dynamics to the largest living land animal today, the African elephant, which is characterized by a rapid period of growth until maturity, subsequently followed by slowing growth in adulthood.
Stance and gait
As a hugely diverse group of animals, the posture adopted by theropods likely varied considerably between various lineages through time. All known theropods are bipedal, with the forelimbs reduced in length and specialized for a wide variety of tasks (see below). In modern birds, the body is typically held in a somewhat upright position, with the upper leg (femur) held parallel to the spine and with the forward force of locomotion generated at the knee. Scientists are not certain how far back in the theropod family tree this type of posture and locomotion extends.
Non-avian theropods were first recognized as bipedal during the 19th century, before their relationship to birds was widely accepted. During this period, theropods such as carnosaurs and tyrannosaurids were thought to have walked with vertical femurs and spines in an upright, nearly erect posture, using their long, muscular tails as additional support in a kangaroo-like tripodal stance. Beginning in the 1970s, biomechanical studies of extinct giant theropods cast doubt on this interpretation. Studies of limb bone articulation and the relative absence of trackway evidence for tail dragging suggested that, when walking, the giant, long-tailed theropods would have adopted a more horizontal posture with the tail held parallel to the ground. However, the orientation of the legs in these species while walking remains controversial. Some studies support a traditional vertically oriented femur, at least in the largest long-tailed theropods, while others suggest that the knee was normally strongly flexed in all theropods while walking, even giants like the tyrannosaurids. It is likely that a wide range of body postures, stances, and gaits existed in the many extinct theropod groups.
Nervous system and senses
Although rare, complete casts of theropod endocrania are known from fossils. Theropod endocrania can also be reconstructed from preserved brain cases without damaging valuable specimens by using a computed tomography scan and 3D reconstruction software. These finds are of evolutionary significance because they help document the emergence of the neurology of modern birds from that of earlier reptiles. An increase in the proportion of the brain occupied by the cerebrum seems to have occurred with the advent of the Coelurosauria and "continued throughout the evolution of maniraptorans and early birds."
Studies show that theropods had very sensitive snouts. It is suggested they might have been used for temperature detection, feeding behavior, and wave detection.
Forelimb morphology
Shortened forelimbs relative to the hind legs were a common trait among theropods, most notably in the abelisaurids (such as Carnotaurus) and the tyrannosaurids (such as Tyrannosaurus). This trait was, however, not universal: spinosaurids had well developed forelimbs, as did many coelurosaurs. The relatively robust forelimbs of one genus, Xuanhanosaurus, led Dong Zhiming to suggest that the animal might have been quadrupedal. However, this is no longer thought to be likely.
The hands are also very different among the different groups. The most common form among non-avian theropods is an appendage consisting of three fingers (digits I, II and III, or possibly II, III and IV) bearing sharp claws. Some basal theropods, like most ceratosaurians, had four digits, as well as a reduced metacarpal V (e.g. Dilophosaurus). The majority of tetanurans had three fingers, but some had even fewer.
The forelimbs' scope of use is also believed to have differed among families. The spinosaurids could have used their powerful forelimbs to hold fish. Some small maniraptorans, such as scansoriopterygids, are believed to have used their forelimbs to climb in trees. The wings of modern birds are used primarily for flight, though they are adapted for other purposes in certain groups. For example, aquatic birds such as penguins use their wings as flippers.
Forelimb movement
Contrary to the way theropods have often been reconstructed in art and the popular media, the range of motion of theropod forelimbs was severely limited, especially compared with the forelimb dexterity of humans and other primates. Most notably, theropods and other bipedal saurischian dinosaurs (including the bipedal prosauropods) could not pronate their hands—that is, they could not rotate the forearm so that the palms faced the ground or backwards towards the legs. In humans, pronation is achieved by motion of the radius relative to the ulna (the two bones of the forearm). In saurischian dinosaurs, however, the end of the radius near the elbow was actually locked into a groove of the ulna, preventing any movement. Movement at the wrist was also limited in many species, forcing the entire forearm and hand to move as a single unit with little flexibility. In theropods and prosauropods, the only way for the palm to face the ground would have been by lateral splaying of the entire forelimb, as in a bird raising its wing.
In carnosaurs like Acrocanthosaurus, the hand itself retained a relatively high degree of flexibility, with mobile fingers. This was also true of more basal theropods, such as herrerasaurs. Coelurosaurs showed a shift in the use of the forearm, with greater flexibility at the shoulder allowing the arm to be raised towards the horizontal plane, and to even greater degrees in flying birds. However, in coelurosaurs, such as ornithomimosaurs and especially dromaeosaurids, the hand itself had lost most flexibility, with highly inflexible fingers. Dromaeosaurids and other maniraptorans also showed increased mobility at the wrist not seen in other theropods, thanks to the presence of a specialized half-moon shaped wrist bone (the semi-lunate carpal) that allowed the whole hand to fold backward towards the forearm in the manner of modern birds.
Paleopathology
In 2001, Ralph E. Molnar published a survey of pathologies in theropod dinosaur bone. He found pathological features in 21 genera from 10 families. Pathologies were found in theropods of all body sizes, although they were less common in fossils of small theropods; this may be an artifact of preservation. They are very widely represented throughout the different parts of theropod anatomy. The most common sites of preserved injury and disease in theropod dinosaurs are the ribs and tail vertebrae. Despite being abundant in ribs and vertebrae, injuries seem to be "absent... or very rare" in the body's primary weight-supporting bones, such as the sacrum, femur, and tibia. The lack of preserved injuries in these bones suggests that they were selected by evolution for resistance to breakage. The least common sites of preserved injury are the cranium and forelimb, with injuries occurring in about equal frequency at each site. Most pathologies preserved in theropod fossils are the remains of injuries like fractures, pits, and punctures, often likely originating from bites. Some theropod paleopathologies seem to be evidence of infections, which tended to be confined to small regions of the animal's body. Evidence of congenital malformations has also been found in theropod remains. Such discoveries can provide information useful for understanding the evolutionary history of the processes of biological development. Unusual fusions or asymmetries in cranial elements are probably evidence that one is examining the fossil of an extremely old individual rather than a diseased one.
Swimming
The trackway of a swimming theropod, the first in China assigned to the ichnogenus Characichnos, was discovered at the Feitianshan Formation in Sichuan. These swim tracks support the hypothesis that theropods were adapted to swimming and capable of traversing moderately deep water. Dinosaur swim tracks are considered rare trace fossils, and belong to a class of vertebrate swim tracks that also includes those of pterosaurs and crocodylomorphs. The study described and analyzed four complete natural molds of theropod footprints, now stored at the Huaxia Dinosaur Tracks Research and Development Center (HDT). These footprints were in fact claw marks, which suggests that the theropod was swimming near the surface of a river, with just the tips of its toes and claws touching the bottom. The tracks indicate a coordinated, left-right, left-right progression, which supports the proposition that theropods were well-coordinated swimmers.
Evolutionary history
During the late Triassic, a number of primitive proto-theropod and theropod dinosaurs existed and evolved alongside each other.
The earliest and most primitive of the theropod dinosaurs were the carnivorous Eodromaeus and, possibly, the herrerasaurids of Argentina. The herrerasaurs existed during the early Late Triassic (Late Carnian to Early Norian). They were found in North America and South America, and possibly also India and southern Africa. The herrerasaurs were characterised by a mosaic of primitive and advanced features. Some paleontologists have in the past considered the herrerasaurians to be members of Theropoda, while others theorized the group to be basal saurischians, or even to have evolved prior to the saurischian–ornithischian split. Cladistic analysis following the discovery of Tawa, another Triassic dinosaur, suggests the herrerasaurs were likely early theropods.
The earliest and most primitive unambiguous theropods are the Coelophysoidea. The coelophysoids were a group of widely distributed, lightly built and potentially gregarious animals. They included small hunters like Coelophysis and Camposaurus. These successful animals continued from the Late Carnian (early Late Triassic) through to the Toarcian (late Early Jurassic). Although in the early cladistic classifications they were included under the Ceratosauria and considered a side-branch of more advanced theropods, they may have been ancestral to all other theropods (which would make them a paraphyletic group).
Neotheropoda (meaning "new theropods") is a clade that includes coelophysoids and more advanced theropod dinosaurs, and is the only group of theropods that survived the Triassic–Jurassic extinction event. Neotheropoda was named by R.T. Bakker in 1986 as a group including the relatively derived theropod subgroups Ceratosauria and Tetanurae, and excluding coelophysoids. However, most later researchers have used it to denote a broader group. Neotheropoda was first defined as a clade by Paul Sereno in 1998 as Coelophysis plus modern birds, which includes almost all theropods except the most primitive species. Dilophosauridae was formerly considered a small clade within Neotheropoda, but was later considered to be paraphyletic. By the Early Jurassic, all non-averostran neotheropods had gone extinct.
Averostra (or "bird snouts") is a clade within Neotheropoda that includes most theropod dinosaurs, namely Ceratosauria and Tetanurae. It represents the only group of post-Early Jurassic theropods. One important diagnostic feature of Averostra is the absence of the fifth metacarpal. Other saurischians retained this bone, albeit in a significantly reduced form.
The somewhat more advanced ceratosaurs (including Ceratosaurus and Carnotaurus) appeared during the Early Jurassic and continued through to the Late Jurassic in Laurasia. They competed alongside their more anatomically advanced tetanuran relatives and—in the form of the abelisaur lineage—lasted to the end of the Cretaceous in Gondwana.
The Tetanurae are still more specialised than the ceratosaurs. They are subdivided into the basal Megalosauroidea (alternately Spinosauroidea) and the more derived Avetheropoda. Megalosauridae were primarily Middle Jurassic to Early Cretaceous predators, and their spinosaurid relatives' remains are mostly from Early and Middle Cretaceous rocks. Avetheropoda, as their name indicates, were more closely related to birds, and are again divided into the Allosauroidea (the diverse carcharodontosaurs) and the Coelurosauria (a very large and diverse dinosaur group that includes the birds).
Thus, during the Late Jurassic, there were no fewer than four distinct lineages of theropods—ceratosaurs, megalosaurs, allosaurs, and coelurosaurs—preying on the abundance of small and large herbivorous dinosaurs. All four groups survived into the Cretaceous, and three of them—the ceratosaurs, coelurosaurs, and allosaurs—survived to the end of the period, by which time they were geographically separated: the ceratosaurs and allosaurs in Gondwana, and the coelurosaurs in Laurasia.
Of all the theropod groups, the coelurosaurs were by far the most diverse. Some coelurosaur groups that flourished during the Cretaceous were the tyrannosaurids (including Tyrannosaurus), the dromaeosaurids (including Velociraptor and Deinonychus, which are remarkably similar in form to the oldest known bird, Archaeopteryx), the bird-like troodontids and oviraptorosaurs, the ornithomimosaurs (or "ostrich dinosaurs"), the strange giant-clawed herbivorous therizinosaurs, and the avialans, which include modern birds and are the only dinosaur lineage to survive the Cretaceous–Paleogene extinction event. While the roots of these various groups are found in the Middle Jurassic, they only became abundant during the Early Cretaceous. A few palaeontologists, such as Gregory S. Paul, have suggested that some or all of these advanced theropods were actually descended from flying dinosaurs or proto-birds like Archaeopteryx that lost the ability to fly and returned to a terrestrial habitat.
The descent of birds from other theropod dinosaurs is supported by several linking features, including the furcula (wishbone), pneumatized bones, brooding of the eggs, and (in coelurosaurs, at least) feathers.
Classification
History of classification
O. C. Marsh coined the name Theropoda (meaning "beast feet") in 1881. Marsh initially named Theropoda as a suborder to include the family Allosauridae, but later expanded its scope, re-ranking it as an order to include a wide array of "carnivorous" dinosaur families, including Megalosauridae, Compsognathidae, Ornithomimidae, Plateosauridae and Anchisauridae (now known to be herbivorous sauropodomorphs) and Hallopodidae (subsequently revealed as relatives of crocodilians). Due to the scope of Marsh's Order Theropoda, it came to replace a previous taxonomic group that Marsh's rival E. D. Cope had created in 1866 for the carnivorous dinosaurs: Goniopoda ("angled feet").
By the early 20th century, some palaeontologists, such as Friedrich von Huene, no longer considered carnivorous dinosaurs to have formed a natural group. Huene abandoned the name "Theropoda", instead using Harry Seeley's Order Saurischia, which Huene divided into the suborders Coelurosauria and Pachypodosauria. Huene placed most of the small theropod groups into Coelurosauria, and the large theropods and prosauropods into Pachypodosauria, which he considered ancestral to the Sauropoda (prosauropods were still thought of as carnivorous at that time, owing to the incorrect association of rauisuchian skulls and teeth with prosauropod bodies, in animals such as Teratosaurus). Describing the first known dromaeosaurid (Dromaeosaurus albertensis) in 1922, W. D. Matthew and Barnum Brown became the first paleontologists to exclude prosauropods from the carnivorous dinosaurs, and attempted to revive the name "Goniopoda" for that group, but other scientists did not accept either of these suggestions.
In 1956, "Theropoda" came back into use—as a taxon containing the carnivorous dinosaurs and their descendants—when Alfred Romer re-classified the Order Saurischia into two suborders, Theropoda and Sauropoda. This basic division has survived into modern palaeontology, with the exception of, again, the Prosauropoda, which Romer included as an infraorder of theropods. Romer also maintained a division between Coelurosauria and Carnosauria (which he also ranked as infraorders). This dichotomy was upset by the discovery of Deinonychus and Deinocheirus in 1969, neither of which could be classified easily as "carnosaurs" or "coelurosaurs". In light of these and other discoveries, by the late 1970s Rinchen Barsbold had created a new series of theropod infraorders: Coelurosauria, Deinonychosauria, Oviraptorosauria, Carnosauria, Ornithomimosauria, and Deinocheirosauria.
With the advent of cladistics and phylogenetic nomenclature in the 1980s, and their development in the 1990s and 2000s, a clearer picture of theropod relationships began to emerge. Jacques Gauthier named several major theropod groups in 1986, including the clade Tetanurae for one branch of a basic theropod split with another group, the Ceratosauria. As more information about the link between dinosaurs and birds came to light, the more bird-like theropods were grouped in the clade Maniraptora (also named by Gauthier in 1986). These new developments also came with a recognition among most scientists that birds arose directly from maniraptoran theropods and, on the abandonment of ranks in cladistic classification, with the re-evaluation of birds as a subset of theropod dinosaurs that survived the Mesozoic extinctions and lived into the present.
Major groups
The following is a simplified classification of theropod groups based on their evolutionary relationships, and organized based on the list of Mesozoic dinosaur species provided by Holtz. A more detailed version can be found at dinosaur classification.
The dagger (†) is used to signify groups with no living members.
†Coelophysoidea (small, early theropods; includes Coelophysis and close relatives)
†Ceratosauria (generally elaborately horned, the dominant southern carnivores of the Cretaceous; includes Carnotaurus and close relatives, like Majungasaurus and Chenanisaurus)
Tetanurae ("stiff tails"; includes most theropods)
†Megalosauroidea (early group of large carnivores including the semi-aquatic spinosaurids)
†Allosauroidea (Allosaurus and close relatives, like Carcharodontosaurus)
†Megaraptora (a group of medium to large Orionides with unknown affinities, quite common in the Southern Hemisphere)
Coelurosauria (feathered theropods, with a range of body sizes and niches)
†Compsognathidae (early coelurosaurs with reduced forelimbs)
†Tyrannosauridae (Tyrannosaurus and close relatives; had reduced forelimbs)
†Ornithomimosauria ("ostrich-mimics"; mostly toothless; carnivores to possible herbivores)
Maniraptora ("hand snatchers"; had long, slender arms and fingers)
†Alvarezsauroidea (small insectivores with reduced forelimbs each bearing one enlarged claw)
†Therizinosauria (bipedal herbivores with large hand claws and small heads)
†Scansoriopterygidae (small, arboreal maniraptors with long third fingers)
†Oviraptorosauria (mostly toothless; their diet and lifestyle are uncertain)
†Archaeopterygidae (small, winged protobirds)
†Dromaeosauridae (small to medium-sized theropods)
†Troodontidae (small, gracile theropods)
Avialae (birds and extinct relatives)
†Omnivoropterygidae (large, early short-tailed avialans)
†Confuciusornithidae (small toothless birds)
†Enantiornithes (primitive tree-dwelling, flying birds)
Euornithes (advanced flying birds)
†Yanornithiformes (toothed Cretaceous Chinese birds)
†Hesperornithes (specialized aquatic diving birds)
Aves (modern, beaked birds and their extinct relatives)
Relationships
The following family tree illustrates a synthesis of the relationships of the major theropod groups based on various studies conducted in the 2010s.
Averostra was named by G.S. Paul in 2002 as an apomorphy-based clade, defined as the group including the Dromaeosauridae and other Avepoda with (an ancestor with) a promaxillary fenestra (fenestra promaxillaris), an extra opening in the front outer side of the maxilla, the bone that makes up the upper jaw. It was later re-defined by Martin Ezcurra and Gilles Cuny in 2007 as a node-based clade containing Ceratosaurus nasicornis, Allosaurus fragilis, their last common ancestor, and all its descendants. Mickey Mortimer has commented that Paul's original apomorphy-based definition may make Averostra a much broader clade than the Ceratosaurus + Allosaurus node, potentially including all of Avepoda or more.
A large study of early dinosaurs by Matthew G. Baron, David Norman and Paul M. Barrett (2017), published in the journal Nature, suggested that Theropoda is actually more closely related to Ornithischia, with which it forms the sister-group pairing that defines the clade Ornithoscelida. This hypothesis also recovered Herrerasauridae as the sister group to Sauropodomorpha within a redefined Saurischia, and suggested that the hypercarnivorous morphologies observed in theropods and herrerasaurids were acquired convergently. However, this phylogeny remains controversial, and additional work is being done to clarify these relationships.
| Biology and health sciences | Theropods | Animals |
300898 | https://en.wikipedia.org/wiki/Scallop | Scallop | Scallop is a common name that encompasses various species of marine bivalve mollusks in the taxonomic family Pectinidae, the scallops. However, the common name "scallop" is also sometimes applied to species in other closely related families within the superfamily Pectinoidea, which also includes the thorny oysters.
Scallops are a cosmopolitan family of bivalves found in all of the world's oceans, although never in fresh water. They are one of the very few groups of bivalves to be primarily "free-living", with many species capable of rapidly swimming short distances and even migrating some distance across the ocean floor. A small minority of scallop species live cemented to rocky substrates as adults, while others attach themselves to stationary or rooted objects such as seagrass at some point in their lives by means of a filament they secrete called a byssal thread. The majority of species, however, live recumbent on sandy substrates, and when they sense the presence of a predator such as a starfish, they may attempt to escape by swimming swiftly but erratically through the water using jet propulsion created by repeatedly clapping their shells together. Scallops have a well-developed nervous system, and unlike most other bivalves all scallops have a ring of numerous simple eyes situated around the edge of their mantles.
Many species of scallops are highly prized as a food source, and some are farmed in aquaculture. The word "scallop" is also applied to the meat of these bivalves, the adductor muscle, that is sold as seafood. The brightly coloured, symmetric, fan-shaped shells of scallops, with their radiating and often fluted ornamentation, are valued by shell collectors and have been used since ancient times as motifs in art, architecture, and design.
Owing to their widespread distribution, scallop shells are a common sight on beaches and are often brightly coloured, making them a popular object to collect among beachcombers and vacationers. The shells also have a significant place in popular culture.
Distribution and habitat
Scallops inhabit all the oceans of the world, with the largest number of species living in the Indo-Pacific region. Most species live in relatively shallow waters from the low tide line to 100 m, while others prefer much deeper water. Although some species only live in very narrow environments, most are opportunistic and can live under a wide variety of conditions. Scallops can be found living within, upon, or under rocks, coral, rubble, sea grass, kelp, sand, or mud. Most scallops begin their lives as byssally attached juveniles, an ability that some retain throughout their lives while others grow into free-living adults.
Anatomy and physiology
Very little variation occurs in the internal arrangement of organs and systems within the scallop family, and what follows can be taken to apply to the anatomy of any given scallop species.
Orientation
The shell of a scallop consists of two sides or valves, a left valve and a right one, divided by a plane of symmetry. Most species of scallops rest on their right valve, and consequently, this valve is often deeper and more rounded than the left (i.e., upper) valve, which in many species is actually concave. With the hinge of the two valves oriented towards the top, one side corresponds to the animal's morphological anterior or front, the other is the posterior or rear, the hinge is the dorsal or back/top region, and the bottom corresponds to the ventral or (as it were) underside/belly. However, as many scallop shells are more or less bilaterally symmetrical ("equivalved"), as well as symmetrical front/back ("equilateral"), determining which way a given animal is "facing" requires detailed information about its valves.
Valves
The model scallop shell consists of two similarly shaped valves with a straight hinge line along the top, devoid of teeth, and producing a pair of flat wings or "ears" (sometimes called "auricles", though this is also the term for two chambers in its heart) on either side of its midpoint, a feature which is unique to and apparent in all adult scallops. These ears may be of similar size and shape, or the anterior ear may be somewhat larger (the posterior ear is never larger than the anterior one, an important feature for distinguishing which valve is which). As is the case in almost all bivalves, a series of lines and/or growth rings originates at the center of the hinge, at a spot called the "beak" surrounded by a generally raised area called the "umbo". These growth rings increase in size downwards until they reach the curved ventral edge of the shell. The shells of most scallops are streamlined to facilitate ease of movement during swimming at some point in their lifecycles, while also providing protection from predators. Scallops with ridged valves have the advantage of the architectural strength provided by these ridges called "ribs", although the ribs are somewhat costly in weight and mass. A unique feature of the scallop family is the presence, at some point during the animal's lifecycle, of a distinctive and taxonomically important shell feature, a comb-like structure called a ctenolium located on the anterior edge of the right valve next to the valve's byssal notch. Though many scallops lose this feature as they become free-swimming adults, all scallops have a ctenolium at some point during their lives, and no other bivalve has an analogous shell feature. The ctenolium is found in modern scallops only; both putative ancestors of modern scallops, the entoliids and the Aviculopectinidae, did not possess it.
Muscular system
Like the true oysters (family Ostreidae), scallops have a single central adductor muscle; thus, the inside of their shells has a characteristic central scar marking the point of attachment for this muscle. The adductor muscle of scallops is larger and more developed than that of oysters, because scallops are active swimmers; some species of scallops are known to move en masse from one area to another. In scallops, the shell shape tends to be highly regular, and is commonly used as an archetypal form of a seashell.
Adductor muscles
Scallops possess fast (striated) and slow (smooth) adductor muscles, which have different structures and contractile properties. These muscles lie closely apposed to one another but are divided by a connective tissue sheet. The striated adductor muscle contracts very quickly for swimming, whereas the smooth catch adductor muscle lacks striations, and contracts for long periods, keeping shells closed with little expenditure of energy.
Digestive system
Scallops are filter feeders, and eat plankton. Unlike many other bivalves, they lack siphons. Water moves over a filtering structure, where food particles become trapped in mucus. Next, the cilia on the structure move the food toward the mouth. Then, the food is digested in the digestive gland, an organ sometimes misleadingly referred to as the "liver", which envelops part of the oesophagus, intestine, and entire stomach. Waste is passed on through the intestine (the terminus of which, like that of many mollusks, enters and leaves the animal's heart) and exits via the anus.
Nervous system
Like all bivalves, scallops lack actual brains. Instead, their nervous system is controlled by three paired ganglia located at various points throughout their anatomy, the cerebral or cerebropleural ganglia, the pedal ganglia, and the visceral or parietovisceral ganglia. All are yellowish. The visceral ganglia are by far the largest and most extensive of the three, and occur as an almost-fused mass near the center of the animal – proportionally, these are the largest and most intricate sets of ganglia of any modern bivalve. From this mass radiate all of the nerves that connect the visceral ganglia to the circumpallial nerve ring, which loops around the mantle and connects to all of the scallop's tentacles and eyes. This nerve ring is so well developed that, in some species, it may be legitimately considered an additional ganglion. The visceral ganglia are also the origin of the branchial nerves, which control the scallop's gills. The cerebral ganglia are the next-largest set of ganglia and lie distinct from each other a significant distance dorsal to the visceral ganglia. They are attached to the visceral ganglia by long cerebral-visceral connectives, and to each other via a cerebral commissure that extends in an arch dorsally around the esophagus. The cerebral ganglia control the scallop's mouth via the palp nerves and connect to statocysts which help the animal sense its position in the surrounding environment. They are connected to the pedal ganglia by short cerebral-pedal connectives. The pedal ganglia, though not fused, are situated very close to each other near the midline. From the pedal ganglia, the scallop puts out pedal nerves which control the movement of, and sensation in, its small muscular foot.
Vision
Scallops have a large number (up to 200) of small (about 1 mm) eyes arranged along the edge of their mantles. These eyes represent a particular innovation among molluscs, relying on a concave, parabolic mirror of guanine crystals to focus and retro-reflect light instead of a lens as found in many other eye types. Additionally, their eyes possess a double-layered retina, the outer retina responding most strongly to light and the inner to abrupt darkness. While these eyes are unable to resolve shapes with high fidelity, the combined sensitivity of both retinas to light entering the eye and light retro-reflected from the mirror grants scallops exceptional contrast definition, as well as the ability to detect changing patterns of light and motion. Scallops primarily rely on their eyes as an 'early-warning' threat detection system, scanning around them for movement and shadows which could potentially indicate predators. Additionally, some scallops alter their swimming or feeding behaviour based on the turbidity or clarity of the water, by detecting the movement of particulate matter in the water column.
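As a rough illustration of why a mirror can substitute for a lens here: an ideal concave spherical mirror focuses near-axial parallel rays at about half its radius of curvature (f = R/2), and a parabolic mirror behaves similarly near its axis. The snippet below is only that textbook relation worked through in Python; the radius value is a hypothetical placeholder, not a measured scallop-eye parameter.

```python
# Worked example of the concave-mirror focusing relation f = R / 2.
# The radius of curvature is an assumed placeholder, not a measurement
# taken from an actual scallop eye.
radius_of_curvature_um = 400.0                  # hypothetical radius, micrometres
focal_length_um = radius_of_curvature_um / 2.0  # ideal-mirror focal length
print(f"focal length ~ {focal_length_um:.0f} micrometres from the mirror")
```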
Locomotion
Scallops are mostly free-living and active, unlike the vast majority of bivalves, which are mostly slow-moving and infaunal. All scallops are thought to start out with a byssus, which attaches them to some form of substrate such as eelgrass when they are very young. Most species lose the byssus as they grow larger. A very few species go on to cement themselves to a hard substrate (e.g. Chlamys distorta and Hinnites multirigosus).
However, the majority of scallops are free-living and can swim with brief bursts of speed to escape predators (mostly starfish) by rapidly opening and closing their valves. Indeed, everything about their characteristic shell shape – its symmetry, narrowness, smooth and/or grooved surface, small flexible hinge, powerful adductor muscle, and continuous and uniformly curved edge – facilitates such activity. They often do this in spurts of several seconds before closing the shell entirely and sinking back to the bottom of their environment. Scallops are able to move through the water column either forward/ventrally (termed swimming) by sucking water in through the space between their valves, an area called the gape, and ejecting it through small holes near the hinge line called exhalant apertures, or backward/dorsally (termed jumping) by ejecting the water out the same way it came in (i.e. ventrally). A jumping scallop usually lands on the sea floor between each contraction of its valves, whereas a swimming scallop stays in the water column for most or all of its contractions and travels a much greater distance (though seldom at a height of more than 1 m off the sea bed and seldom for a distance of greater than 5 m). Both jumping and swimming movements are very energy-intensive, and most scallops cannot perform more than four or five in a row before becoming completely exhausted and requiring several hours of rest. Should a swimming scallop land on its left side, it is capable of flipping itself over to its right side via a similar shell-clapping movement called the righting reflex. So-called singing scallops are rumored to make an audible, soft popping sound as they flap their shells underwater (though whether or not this happens is open to some debate). Other scallops can extend their foot from between their valves, and by contracting the muscles in their foot, they can burrow into sand.
Mobility and behavior
Most species of the scallop family are free-living, active swimmers, propelling themselves through the water by using the adductor muscles to open and close their shells. Swimming occurs through the clapping of valves for water intake. Closing the valves propels water with a strong force near the hinge via the velum, a curtain-like fold of the mantle that directs water expulsion around the hinge. Scallops swim in the direction of the valve opening unless the velum directs an abrupt change in course.
Other species of scallops can be found on the ocean floor attached to objects by byssal threads. Byssal threads are strong, silky fibers extending from the muscular foot, used to attach to a firm support, such as a rock. Some can also be found on the ocean floor, moving with an extendable foot located between their valves, or burrowing themselves in the sand by extending and retracting their feet. Scallops are highly sensitive to shadows, vibrations, water movement, and chemical stimuli. All possess a series of 100 blue eyes, embedded on the edge of the mantle of their upper and lower valves, that can distinguish between light and darkness. These eyes serve as a vital defense mechanism for avoiding predators. Though rather weak, the eyes can detect surrounding movement and alert the animal to the presence of predators, most commonly sea stars, crabs, and snails. Physiological fitness and swimming performance of scallops decrease with age due to the decline of cellular and especially mitochondrial function, thus increasing the risk of capture and lowering rates of survival. Older individuals show lower mitochondrial volume density and aerobic capacity, as well as decreased anaerobic capacity, inferred from the amount of glycogen stored in muscle tissue. Environmental factors, such as changes in oxidative stress parameters, can inhibit the growth and development of scallops.
Seasonal changes in temperature and food availability have been shown to affect muscle metabolic capabilities. The properties of mitochondria from the phasic adductor muscle of Euvola ziczac varied significantly during its annual reproductive cycle. Scallops sampled in May have lower maximal oxidative capacities and substrate oxidation than at any other time of the year. This phenomenon is due to lower protein levels in the adductor muscle.
Lifecycle and growth
The scallop family is unusual in that some members are dioecious (males and females are separate), while others are simultaneous hermaphrodites (both sexes in the same individual), and a few are protandrous hermaphrodites (males when young, then switching to female). Female scallops have red roe and male scallops have white roe. Spermatozoa and ova are released freely into the water during mating season, and fertilized ova sink to the bottom. After several weeks, the immature scallops hatch and the larvae, miniature transparent versions of the adults called "spat", drift in the plankton until settling to the bottom again (an event called spatfall) to grow, usually attaching by means of byssal threads. Some scallops, such as the Atlantic bay scallop Argopecten irradians, are short-lived, while others can live 20 years or more. Age can often be inferred from annuli, the concentric rings of their shells.
Many scallops are hermaphrodites (having female and male organs simultaneously), altering their sex throughout their lives, while others exist as dioecious species, having a definite sex. In dioecious species, males are distinguished by white, roe-containing testes, and females by orange, roe-containing ovaries. Scallops usually become sexually active at the age of two, but do not contribute significantly to egg production until four. Reproduction occurs externally through spawning, in which eggs and sperm are released into the water. Spawning typically occurs in late summer and early autumn; spring spawning may also take place in the Mid-Atlantic Bight. Female scallops are highly fecund, capable of producing hundreds of millions of eggs per year.
Once an egg is fertilized, the young scallop becomes planktonic, joining the microorganisms that drift abundantly in the water column. Larvae stay in the water column for four to seven weeks before settling to the ocean floor, where they attach themselves to objects by byssus threads. The byssus is eventually lost with adulthood, transitioning almost all scallop species into free swimmers. Rapid growth occurs within the first several years, with shell height increasing by 50–80% and meat weight quadrupling; scallops reach commercial size at about four to five years of age. The lifespans of some scallops have been known to extend beyond 20 years.
Ecology
Scallops are known to be infected by viruses, bacteria, and microalgae of the heterokont and dinoflagellate groups.
Mutualism
Some scallops, including Chlamys hastata, frequently carry epibionts such as sponges and barnacles on their shells. The relationship of the sponge to the scallop is characterized as a form of mutualism, because the sponge provides protection by interfering with adhesion of predatory sea-star tube feet, camouflages Chlamys hastata from predators, or forms a physical barrier around byssal openings to prevent sea stars from inserting their digestive membranes. Sponge encrustation protects C. hastata from barnacle larvae settlement, serving as a protection from epibionts that increase susceptibility to predators. Thus, barnacle larvae settlement occurs more frequently on sponge-free shells than on sponge-encrusted shells.
In fact, barnacle encrustation negatively influences swimming in C. hastata. Those swimming with barnacle encrustation require more energy and show a detectable difference in anaerobic energy expenditure than those without encrustation. In the absence of barnacle encrustation, individual scallops swim significantly longer, travel further, and attain greater elevation.
Taxonomy and phylogeny
Etymology
The family name Pectinidae, which is based on the name of the type genus, Pecten, comes from the Latin pecten meaning comb, in reference to a comb-like structure of the shell which is situated next to the byssal notch.
Phylogeny
The fossil history of scallops is rich in species and specimens. The earliest known records of true scallops (those with a ctenolium) date from the Triassic period, over 200 million years ago. The earliest species were divided into two groups: one with a nearly smooth exterior (Pleuronectis von Schlotheim, 1820), and the other with radial ribs or riblets and auricles (Praechlamys Allasinaz, 1972). Fossil records also indicate that the abundance of species within the Pectinidae has varied greatly over time; Pectinidae was the most diverse bivalve family in the Mesozoic era, but the group almost disappeared completely by the end of the Cretaceous period. The survivors speciated rapidly during the Tertiary period. Nearly 7,000 species and subspecies names have been introduced for both fossil and recent Pectinidae.
The cladogram is based on molecular phylogeny using mitochondrial (12S, 16S) and nuclear (18S, 28S, and H3) gene markers by Yaron Malkowsky and Annette Klussmann-Kolb in 2012.
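For readers unfamiliar with how marker-based trees of this kind are computed, below is a minimal, self-contained sketch of a distance-based reconstruction using Biopython. The four-taxon alignment and its sequences are invented for illustration; the study above used real 12S, 16S, 18S, 28S, and H3 sequence data and more rigorous model-based methods than the simple identity-distance, neighbor-joining approach shown here.

```python
# Minimal sketch: neighbor-joining tree from a toy DNA alignment.
# Taxon names and sequences are invented placeholders.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ACTGCTAGCTAG"), id="Pecten"),
    SeqRecord(Seq("ACTGCTAGCTGG"), id="Chlamys"),
    SeqRecord(Seq("ACTTCTAGATAG"), id="Mimachlamys"),
    SeqRecord(Seq("ACGTCTGGATAG"), id="Outgroup"),
])

# Pairwise identity distances, then a neighbor-joining tree.
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)
```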
Taxonomic structure
Scallops are the family Pectinidae, marine bivalve molluscs within the superfamily Pectinoidea. Other families within this same superfamily share a somewhat similar overall shell shape, and some species within some of the related families are also commonly referred to as "scallops" (for example, Propeamussiidae, the glass scallops).
The family Pectinidae is the most diversified of the pectinoideans in present-day oceans. It is one of the largest marine bivalve families and contains over 300 extant species in 60 genera. Its origin dates back to the Middle Triassic Period, approximately 240 million years ago; in terms of diversity, it has been a thriving family to the present day.
Evolution from its origin has resulted in a successful and diverse group: pectinids are present in the world's seas, found in environments ranging from the intertidal zone to the hadal depths. The Pectinidae play an extremely important role in many benthic communities and exhibit a wide range of shell shapes, sizes, sculptures, and colours.
Raines and Poppe listed nearly 900 species names of scallops, but most of these are considered either questionable or invalid. Raines and Poppe mentioned over 50 genera and around 250 species and subspecies. Although species are generally well circumscribed, their attribution to subfamilies and genera is sometimes equivocal, and information about the phylogeny and relationships of the species is minimal, not least because most work has been based only on adult morphology.
This family's earliest and most comprehensive taxonomic treatments were based on macroscopic morphological characters of the adult shells and represent broadly divergent classification schemes. Some level of taxonomic stability was achieved with Waller's studies of 1986, 1991, and 1993, which inferred evolutionary relationships between pectinid taxa based on hypothesized morphological synapomorphies, something previous classification systems of the Pectinidae had failed to do. He created three Pectinidae subfamilies: Camptonectinae, Chlamydinae and Pectininae.
The framework of its phylogeny shows that repeated life habit states derive from evolutionary convergence and parallelism. Studies have determined the family Pectinidae is monophyletic, developing from a single common ancestor. The direct ancestors of Pectinidae were scallop-like bivalves of the family Entoliidae. Entoliids had auricles and a byssal notch only at youth, but they did not have a ctenolium, a comb-like arrangement along the margins of the byssal notch in Pectinidae. The ctenolium is the defining feature of the modern family Pectinidae and is a characteristic that has evolved within the lineage.
In a 2008 paper, Puslednik et al. identified considerable convergence of shell morphology in a subset of species of gliding Pectinidae, which suggests iterative morphological evolution may be more prevalent in the family than previously believed.
There have been a number of phylogenetic studies of the Pectinidae, but only three have assessed more than ten species, and only one has included multiple outgroups. Nearly all previous molecular analyses of the Pectinidae have utilized only mitochondrial data, and phylogenies based only on mitochondrial sequence data do not always provide an accurate estimation of the species tree. Complicating factors can arise from the presence of genetic polymorphisms in ancestral species and the resultant lineage sorting.
In molecular phylogenies of the Bivalvia, both the Spondylidae and the Propeamussiidae have been resolved as sister to the Pectinidae.
List of subfamilies and genera
The following are recognised in the family Pectinidae:
Subfamily Camptonectinae Habe, 1977
Camptonectes Agassiz, 1864
Ciclopecten Seguenza, 1877
Delectopecten Stewart, 1920
Hyalopecten A. E. Verrill, 1897
Pseudohinnites Dijkstra, 1989
Sinepecten Schein, 2006
Subfamily Palliolinae Korbkov in Eberzin, 1960
Tribe Adamussiini Habe, 1977
Adamussium Thiele, 1934
Antarctipecten Beu & Taviani, 2013 †
Duplipecten Marwick, 1928 †
Lentipecten Marwick, 1928 †
Leoclunipecten Beu & Taviani, 2013 †
Ruthipecten Beu & Taviani, 2013 †
Tribe Eburneopectinini T. R. Waller, 2006 †
Eburneopecten Conrad, 1865 †
Tribe Mesopeplini T. R. Waller, 2006
Kaparachlamys Boreham, 1965 †
Mesopeplum Iredale, 1929
Phialopecten Marwick, 1928 †
Sectipecten Marwick, 1928 †
Towaipecten Beu, 1995 †
Tribe Palliolini Waller, 1993
Karnekampia H. P. Wagner, 1988
Lissochlamys Sacco, 1897
Palliolum Monterosato, 1884
Placopecten Verrill, 1897
Pseudamussium Mörch, 1853
Tribe Serripectinini T. R. Waller, 2006 †
Janupecten Marwick, 1928 †
Serripecten Marwick, 1928 †
Subfamily Pectininae
Tribe Aequipectinini F. Nordsieck, 1969
Aequipecten P. Fischer, 1886
Argopecten Monterosato, 1889
Cryptopecten Dall, Bartsch & Rehder, 1938
Flexopecten Sacco, 1897
Haumea Dall, Bartsch & Rehder, 1938
Leptopecten Verrill, 1897
Volachlamys Iredale, 1939
Tribe Amusiini Ridewood, 1903
Amusium Röding, 1798
Dentamussium Dijkstra, 1990
Euvola Dall, 1898
Leopecten Masuda, 1971
Ylistrum Mynhardt & Alejandrino, 2014
Tribe Austrochlamydini Jonkers, 2003
Austrochlamys Jonkers, 2003
Tribe Decatopectinini Waller, 1986
Anguipecten Dall, Bartsch & Rehder, 1938
Antillipecten T. R. Waller, 2011
Bractechlamys Iredale, 1939
Decatopecten Rüppell in G. B. Sowerby II, 1839
Excellichlamys Iredale, 1939
Glorichlamys Dijkstra, 1991
Gloripallium Iredale, 1939
Juxtamusium Iredale, 1939
Lyropecten Conrad, 1862
Mirapecten Dall, Bartsch & Rehder, 1938
Nodipecten Dall, 1898
Tribe Pectinini Wilkes, 1810
Annachlamys Iredale, 1939
Fascipecten Freneix, Karache & Salvat, 1971 †
Gigantopecten Rovereto, 1899 †
Minnivola Iredale, 1939
Oopecten Sacco, 1897 †
Oppenheimopecten Teppner, 1922 †
Pecten Müller, 1776
Serratovola Habe, 1951
Subfamily Pedinae Bronn, 1862
Tribe Chlamydini von Teppner, 1922
Austrohinnites Beu & Darragh, 2001 †
Azumapecten Habe, 1977
Chesapecten Ward & Blackwelder, 1975 †
Chlamys Röding, 1798
Chokekenia Santelli & del Río, 2018 †
Ckaraosippur Santelli & del Río, 2019 †
Complicachlamys Iredale, 1939
Coralichlamys Iredale, 1939
Dietotenhosen Santelli & del Río, 2019 †
Equichlamys Iredale, 1929
Hemipecten A. Adams & Reeve, 1849
Hinnites Defrance, 1821
Laevichlamys Waller, 1993
Manupecten Monterosato, 1872
Moirechlamys Santelli & del Río, 2018 †
Notochlamys Cotton, 1930
Pascahinnites Dijkstra & Raines, 1999
Pixiechlamys Santelli & del Río, 2018 †
Praechlamys Allasinaz, 1972 †
Scaeochlamys Iredale, 1929
Semipallium Jousseaume in Lamy, 1928
Swiftopecten Hertlein, 1936
Talochlamys Iredale, 1929
Veprichlamys Iredale, 1929
Yabepecten Masuda, 1963 †
Zygochlamys Ihering, 1907
Tribe Crassadomini Waller, 1993
Caribachlamys Waller, 1993
Crassadoma Bernard, 1986
Tribe Fortipectinini Masuda, 1963
Fortipecten Yabe & Hatai, 1940 †
Kotorapecten Masuda, 1962 †
Masudapecten Akiyama, 1962 †
Mizuhopecten Masuda, 1963
Nipponopecten Masuda, 1962 †
Patinopecten Dall, 1898
Tribe Mimachlamydini Waller, 1993
Mimachlamys Iredale, 1929
Spathochlamys Waller, 1993
Tribe Pedini Bronn, 1862
Pedum Bruguière, 1792
Subfamily incertae sedis
Agerchlamys Damborenea, 1993 †
Athlopecten Marwick, 1928 †
Camptochlamys Arkell, 1930 †
Indopecten Douglas, 1929 †
Jorgechlamys del Río, 2004 †
Lamellipecten Dijkstra & Maestrati, 2010
Lindapecten Petuch, 1995
Mixtipecten Marwick, 1928 †
Pseudopecten Bayle, 1878 †
Seafood industry
Aquaculture
Wild fisheries
The largest wild scallop fishery is for the Atlantic sea scallop (Placopecten magellanicus) found off the northeastern United States and eastern Canada. Scallops are harvested using scallop dredges or bottom trawls. Most of the rest of the world's production of scallops is from Japan (wild, enhanced, and aquaculture) and China (mostly cultured Atlantic bay scallops).
In the D'Entrecasteaux Channel in the south of Tasmania, dredging was banned in 1969, and since then divers have caught scallops in this area. Attempts to use lighted pots to attract lobster and crab led to the discovery that such pots were also effective in attracting scallops.
Sustainability
The scallop fishery in New Zealand declined from a catch of 1246 tonnes in 1975 to 41 tonnes in 1980, at which point the government ordered the fishery closed. Spat seeding in the 1980s helped it recover, and catches in the 1990s were up to 684 tonnes. The Tasman Bay area was closed to commercial scallop harvesting from 2009 to 2011 due to a decline in the numbers. The commercial catch was down to 22 tonnes in 2015, and the fishery was closed again. The main causes for the decline seem to be fishing, climate effects, disease, pollutants, and sediment runoff from farming and forestry. Forest and Bird list scallops as the "Worst Choice" in their Best Fish Guide for sustainable seafood species.
On the east coast of the United States, over the last 100 years the populations of bay scallops have greatly diminished. This decline is due to several factors, but probably mostly to a reduction in seagrasses (to which bay scallop spat attach) caused by increased coastal development and concomitant nutrient runoff. Another possible factor is the reduction of sharks from overfishing. A variety of sharks used to feed on rays, which are the main predators of bay scallops. With the shark population reduced, and this apex predator in some places almost eliminated, the rays have been free to feed on scallops, greatly decreasing their numbers. By contrast, the Atlantic sea scallop (Placopecten magellanicus) is at historically high levels of abundance after recovering from overfishing.
As food
Scallops are characterized by offering two flavors and textures in one shell: the meat, called "scallop", which is firm and white, and the roe, called "coral", which is soft and often brightly coloured reddish-orange. Sometimes, markets sell scallops already prepared in the shell, with only the meat remaining. Outside the U.S., the scallop is often sold whole. They are available both with and without coral in the UK and Australia.
Scallops without any additives are called "dry-packed", while scallops that are treated with sodium tripolyphosphate (STPP) are called "wet-packed". STPP causes the scallops to absorb moisture prior to the freezing process, thereby increasing their weight. The freezing process takes about two days.
In French cuisine, scallops are often quickly cooked in a hot buttered pan, sometimes with calvados and served with creamed leeks, or prepared in a white wine sauce. In Galician cuisine, scallops are baked with breadcrumbs, ham, and onions.
Scallops are sometimes breaded, deep-fried, and served with coleslaw and french fries in the northeastern United States (either on their own or as part of a fisherman's platter). In New England, some seafood restaurants offer scallop rolls, consisting of breaded scallops on a grilled, split-top hot dog bun.
In Japanese cuisine, scallops may be served in soup or prepared as sashimi or sushi. In a sushi bar, hotategai (帆立貝, 海扇) is the traditional scallop served on rice, while kaibashira (貝柱), though it may be translated as "scallop", is used more loosely to include the round adductor muscle of other shellfish species, such as Atrina (帶子).
Dried scallop is known in Cantonese Chinese cuisine as conpoy (乾瑤柱, 乾貝, 干貝). Smoked scallops are sometimes served as appetizers or used as an ingredient in the preparation of various dishes and appetizers.
Scallops have lent their name to the culinary term "scalloped", which originally referred to seafood creamed and served hot in the shell. Today, it means a creamed casserole dish such as scalloped potatoes, which contains no seafood at all.
Pearls
Scallops do occasionally produce pearls, though scallop pearls do not have the buildup of translucent layers or "nacre" which give desirability to the pearls of the feather oysters, and usually lack both lustre and iridescence. They can be dull, small, and of varying colour, but exceptions occur that are appreciated for their aesthetic qualities.
Symbolism of the shell
Shell of Saint James
The scallop shell is the traditional emblem of St James the Great and is popular with pilgrims travelling the Way of St James (Camino de Santiago). Medieval Christians would collect a scallop shell while at Compostela as evidence of having made the journey. The association of Saint James with the scallop can most likely be traced to the legend that the apostle once rescued a knight covered in scallops. An alternative version of the legend concerns the transport of St. James' remains from Jerusalem to Galicia (Spain). As the ship approached land, the wedding of the daughter of Queen Lupa was taking place on shore. The young groom was on horseback, and, upon seeing the ship's approach, his horse was spooked, and horse and rider plunged into the sea. Through miraculous intervention, the horse and rider emerged from the water alive, covered in seashells.
Indeed, in French, the mollusc itself – as well as a popular preparation of it in cream sauce – is called coquille Saint-Jacques. In German, scallops are Jakobsmuscheln – literally "James's shellfish". Curiously, the Linnaean name Pecten jacobeus is given to the Mediterranean scallop, while the scallop endemic to Galicia is called Pecten maximus owing to its larger size. The scallop shell is represented in the decoration of churches named after St. James, such as St James' Church, Sydney, where it appears in a number of places, including in the mosaics on the floor of the chancel.
When referring to St James, a scallop shell valve is displayed with its convex outer surface showing. In contrast, when the shell refers to the goddess Venus (see below), it is displayed with its concave interior surface showing.
Fossil fuel
The Shell oil company, one of the world's largest companies, uses a scallop shell as its logo.
Badge
The scallop shell symbol found its way into heraldry as a badge of those who had been on the pilgrimage to Compostela, although later it became a symbol of pilgrimage in general. The coat of arms of Sir Winston Churchill's family and that of Lady Diana's family, the Spencers, include scallops, as do the personal coats of arms of Diana's sons, Prince William and Prince Harry, and of Pope Benedict XVI. Other examples are the arms associated with the surname Wilmot and those of John Wesley (as a result of which the scallop shell is used as an emblem of Methodism). However, charges in heraldry do not always have an unvarying symbolic meaning, and there are cases of arms in which no family member went on a pilgrimage and the occurrence of the scallop is simply a pun on the name of the armiger (as in the case of Jacques Coeur), or is present for other reasons. In 1988, the State of New York in the US chose the bay scallop (Argopecten irradians) as its state shell.
Fertility symbol
Throughout antiquity, scallops and other hinged shells have symbolized the feminine principle. Outwardly, the shell can symbolize the protective and nurturing principle, and inwardly, the "life-force slumbering within the Earth", an emblem of the vulva.
Many paintings of Venus, the Roman goddess of love and fertility, included a scallop shell in the painting to identify her. This is evident in Botticelli's classically inspired 15th century painting The Birth of Venus.
One legend of the Way of St. James holds that the route was seen as a fertility pilgrimage, undertaken when a young couple desired to bear offspring. The scallop shell is believed to have originally been carried by pagans as a symbol of fertility.
Other interpretations
Alternatively, the scallop resembles the setting sun, which was the focus of the pre-Christian Celtic rituals of the area. To wit, the pre-Christian roots of the Way of St. James lay in a Celtic death journey westwards towards the setting sun, terminating at the End of the World (Finisterra) on the "Coast of Death" (Costa da Morte) and the "Sea of Darkness" (i.e., the Abyss of Death, the Mare Tenebrosum, Latin for the Atlantic Ocean, itself named after the dying civilization of Atlantis).
Contemporary art
The beach at Aldeburgh, Suffolk, England, features Maggi Hambling's steel sculpture, The Scallop, erected in 2003 as a memorial to the composer Benjamin Britten, who had a long association with the town.
Scalloped shape
The term "scalloped" is used to designate a decorative pattern, resembling the wavy scallop surface, that is used at the edges of furniture, fabrics, and other items.
| Biology and health sciences | Mollusks | null |
301024 | https://en.wikipedia.org/wiki/Silphium | Silphium | Silphium (also known as laserwort or laser; Ancient Greek: σίλφιον, silphion) is an unidentified plant that was used in classical antiquity as a seasoning, perfume, aphrodisiac, and medicine.
It was the essential item of trade from the ancient North African city of Cyrene, and was so critical to the Cyrenian economy that most of their coins bore a picture of the plant. The valuable product was the plant's resin, called in Latin laserpicium, lasarpicium or laser (the words Laserpitium and Laser were used by botanists to name genera of aromatic plants, but the silphium plant is not believed to belong to these genera).
The exact identity of silphium is unclear. It was claimed to have become extinct in Roman times. It is commonly believed to be a relative of giant fennel in the genus Ferula. The extant plant Thapsia gummifera has been suggested as another possibility. Another theory is that it was simply a high quality variety of asafoetida, a common spice in the Roman Empire. The two spices were considered the same by many Romans including the geographer Strabo.
Silphium was considered invaluable by all who held it. The BBC reports that the plant was celebrated in Roman poems and songs, which considered it worth its weight in gold. Pliny the Elder blamed silphium's high valuation on "tax-farmers", and under Julius Caesar, 1,500 pounds of laser were recorded in the Roman treasury.
Identity and extinction
The identity of silphium is highly debated. Without a surviving sample, no genetic analysis can be made. It is generally considered to belong to the genus Ferula, as an extinct or living species. The currently extant plants Thapsia gummifera, Ferula tingitana, Ferula narthex, Ferula drudeana, and Thapsia garganica have been suggested as possible identities. Ferula drudeana, an endemic species found in Turkey, is a candidate for silphium based on similarity of appearance in descriptions and production of a spice-like gum-resin with supposedly similar properties to silphium. However, F. drudeana belongs to a lineage from the southern Caspian Sea region, with no known connection to Eastern Libya.
Theophrastus mentioned silphium as having thick roots covered in black bark, about one cubit (48 cm) long, with a hollow stalk, similar to fennel, and golden leaves, like celery.
The disappearance of silphium is considered the first extinction of a plant or animal species in recorded history. The cause of silphium's supposed extinction is not entirely known, but numerous factors have been suggested. Silphium had a remarkably narrow native range in the southern steppe of Cyrenaica (present-day eastern Libya). Overgrazing combined with overharvesting has long been cited as the primary factor that led to its extinction. However, recent research has challenged this notion, arguing instead that desertification in ancient Cyrenaica was the primary driver of silphium's decline.
Another theory is that when Roman provincial governors took over power from Greek colonists, they over-farmed silphium and rendered the soil unable to yield the type that was said to be of such medicinal value. Theophrastus wrote in Enquiry into Plants that the type of Ferula specifically referred to as "silphium" was odd in that it could not be cultivated. He reports inconsistencies in the information he received about this, however. This could suggest that the plant was as sensitive to soil chemistry as huckleberries, which, when grown from seed, bear no fruit.
Similar to the soil theory, another theory holds that the plant was a hybrid; hybrids often show highly desirable traits in the first generation, but the second generation can be very unpredictable. This could have produced fruitless plants when grown from seed, leaving asexual reproduction through the roots as the only means of propagation.
Pliny reported that the last known stalk of silphium found in Cyrenaica was given to Emperor Nero "as a curiosity".
Ancient medicine
Many medical uses were ascribed to the plant. It was said that it could be used to treat cough, sore throat, fever, indigestion, aches and pains, warts, and all kinds of maladies.
Hippocrates wrote:
When the gut protrudes and will not remain in its place, scrape the finest and most compact silphium into small pieces and apply as a cataplasm.
The plant may also have functioned as a contraceptive and abortifacient.
Culinary uses
Silphium was used in Graeco-Roman cooking, notably in recipes presented in Apicius. Some historians have suggested that its use, particularly in the North African region of its origin, was extensive:

Not quite as ubiquitous as liquamen, but just as necessary in the Roman kitchen, was the herb silphium... Life in Cyrenaica revolved around [silphium] to such an extent that the dramatist Antiphanes, in the fourth century BC, made one of his characters groan: "I will not sail back to the place from which we were all carried away, for I want to say goodbye to all—horses, silphium, chariots, silphium stalks, steeple-chasers, silphium leaves, and silphium juice!"

Long after its claimed extinction, silphium continued to be mentioned in lists of aromatics copied one from another, until it makes perhaps its last appearance in the list of spices that the Carolingian cook should have at hand ("A short list of condiments that should be in the home") by a certain "Vinidarius", whose excerpts of Apicius survive in one 8th-century uncial manuscript. Vinidarius himself may not have lived much earlier than that manuscript.
Hieroglyphs and symbols for silphium
The Minoans probably used silphium as the visual reference for the hieroglyph psi, meaning "plant", which resembles a central shoot flanked by two stalks. Minoan fetishes with this geometry are known as psi- and phi-type figurines and are likewise named for their letter-like shape. This glyph developed into the modern Greek psi (Ψ).
Egyptian hieroglyphs for Libyan silphium have also been documented in archeological publications as a balm ingredient that must be dehulled and which produces a sap. In one record, it appears similar to the hieroglyph for branch (𓆱) written to be read from left-to-right.
There has been some speculation about the connection between silphium and the traditional heart shape (♥). Silver coins from Cyrene of the 6th–5th centuries BCE bear a similar design, sometimes accompanied by a silphium plant, and the design is understood to represent the plant's seed or fruit. Some plants in the family Apiaceae, such as Heracleum sphondylium, have heart-shaped indehiscent mericarps (a type of fruit).

Contemporary writings help tie silphium to sexuality and love. Silphium appears in Pausanias' Description of Greece in a story of the Dioscuri staying at a house belonging to Phormion, a Spartan.
Silphium as laserpicium makes an appearance in a poem (Catullus 7) of Catullus to his lover Lesbia (though others have suggested that the reference here is instead to silphium's use as a treatment for mental illness, tying it to the "madness of love").
Heraldry
In Italian military heraldry, the emblem described as "Silphium of Cyrenaica, smoothly cut and printed in gold; in blazon: silphium couped or of Cyrenaica" is granted to units that distinguished themselves in the Western Desert Campaign in North Africa during World War II.
In popular culture
Characters in Lindsey Davis's 1998 historical crime novel Two for the Lions travel from Rome to North Africa in search of Silphium.
| Biology and health sciences | Herbs and spices | Plants |
301108 | https://en.wikipedia.org/wiki/Pan-American%20Highway | Pan-American Highway | The Pan-American Highway is a vast network of roads that stretches approximately 30,000 kilometers (about 19,000 miles) from Prudhoe Bay, Alaska, in the northernmost part of North America to Ushuaia, Argentina, at the southern tip of South America. It is recognized as the longest road in the world and serves as a significant overland route connecting multiple countries across the Americas.
The highway passes through 14 countries: Canada, the United States, Mexico, Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica, Panama, Colombia, Ecuador, Peru, Chile, and Argentina, with important spurs also serving Bolivia and other nations. Notably, no official road in the U.S. or Canada is designated as part of the Pan-American Highway; it officially begins at the U.S.-Mexico border in Nuevo Laredo. A significant interruption in the highway is the Darién Gap, a dense rainforest area between Panama and Colombia. No road traverses the Darién Gap to connect the two countries, and although ferries previously carried vehicles around the gap to serve this need, no car-carrying ferries have operated there in recent decades. The primary alternative is to ship a car by cargo vessel from one country to the other.
Concept of the highway
The highway was built in stages. The first, not long after one could drive across the United States on a paved road, was the highway from Laredo, Texas, to Mexico City. The second stage was the Inter-American Highway to Panama City; previously there were no roads, and little commerce between most Central American countries. There was no road between Costa Rica and Panama until, concerned about access to the Panama Canal in a war situation, the U.S. Army Corps of Engineers began a highway in 1941.
The third stage, which has not been completed and may never be, continues onward to the southern tip of South America at Tierra del Fuego National Park, near Ushuaia, Argentina. Both Panama and Colombia, as well as environmentalists, are opposed to building a highway through the Darién Gap that separates the two continents.
A Cuban proposal that was not carried out was to include a "circuito del Caribe" (Caribbean circuit). This would have expanded the highway to Puerto Juárez, Mexico (Cancún), and from there by ferry to Pinar del Río, Cuba, from there by road to Havana, and by ferry again to Key West, Florida, and the Overseas Highway. The deterioration of relations between Cuba and the U.S. after the Cuban Revolution of 1959 ended talk of this project.
Development and construction
The concept of an overland route from one tip of the Americas to the other was originally proposed as a railroad. In 1884 the U.S. Congress passed a law with a plan to build an inter-American rail system. This was discussed at the First Pan-American Conference in 1889; however, construction never started. It was abandoned in concept after the independence of Panama in 1903, when work on the canal began.
The concept of building a highway, rather than a railroad, emerged at the Fifth International Conference of American States in 1923, after the automobile and other vehicles had begun to replace railroads for both passenger and goods transportation. The first conference regarding construction of the highway occurred on October 5, 1925.
Finally, on July 29, 1937, in the latter years of the Great Depression, Argentina, Bolivia, Chile, Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Peru, Canada, and the United States signed the Convention on the Pan-American Highway, whereby they agreed to achieve speedy construction, by all adequate means. Thirteen years later, in 1950, Mexico became the first Latin American country to complete its portion of the highway.
No single route in the United States (except in Alaska) has been designated, much less marked, as the U.S. portion of the Pan-American Highway. However, I-25 is labeled as the Pan-American freeway in states such as New Mexico and Colorado. According to the federal Department of Transportation, the Interstate Highway System is the United States' section of the highway. In Canada the highway is not marked. Much of the highway in Latin America is marked as or .
Countries served
The Northern Pan-American Highway travels through nine countries, three in North America and six in Central America:
Canada (CANAMEX Corridor unofficial)
United States (Interstate Highway System official)
Mexico
Guatemala
El Salvador
Honduras
Nicaragua
Costa Rica
Panama
The Southern Pan-American Highway travels through five countries:
Colombia
Ecuador
Peru
Chile
Argentina
Important spurs also connect with four other South American countries:
Bolivia
Brazil
Paraguay
Uruguay
Northern section
Alaska
The Alaska Highway through Alaska, Yukon and British Columbia is commonly considered a de facto northerly extension of the Pan-American Highway, which continues further north with the Dalton Highway in Alaska. With this route, the Pan-American Highway begins in Prudhoe Bay, Alaska near Deadhorse. Traveling south, the route follows the length of the Dalton Highway (Alaska Route 11) changing to Alaska Route 2, the Alaskan portion of the Alaska Highway, near Fairbanks, Alaska. From Fairbanks, the route follows Alaska Route 2 southeast to the Canada–United States border southeast of Northway, Alaska, and adjacent to the Tetlin National Wildlife Refuge.
Canada
In Canada, no particular road has been officially designated as the Pan-American Highway. The National Highway System, which includes but is not limited to the Trans-Canada Highway, is the country's only official inter-provincial highway system. However, several Canadian highways are a natural extension of several key American highways that reach the Canada–US border. British Columbia Highway 97 and Highway 2 to Alberta both pick up where the southern end of the Alaska highway leaves off. Highway 97 becomes U.S. Route 97 at the Canada–US border. British Columbia Highway 99 provides an alternate route from Highway 97 just north of Cache Creek; it runs through Whistler and Vancouver before ending at the Canada–US border at the north end of Interstate 5 in Washington state, the beginning of the official Pan-American route south of British Columbia. Meanwhile, Alberta Highway 2 runs south and east to Alberta Highway 3 leading into Lethbridge, then south on Alberta Highway 4 to the Canada–US border, where it becomes Interstate 15 in Montana. This is the first official stretch of the Pan-American Highway south of the Alberta route, both of which are also part of the CANAMEX Corridor.
Yukon
Crossing the border into Canada, Alaska Highway 2 turns into Yukon Highway 1. The first significant settlement along the way is Beaver Creek, Yukon. At Haines Junction, where it meets Yukon Highway 3, Yukon Highway 1 turns east toward Whitehorse, the capital of the Yukon Territory.
Through most of Whitehorse, Yukon Highway 2 and Yukon Highway 1 share an alignment. Yukon Highway 1 cuts southeast toward Marsh Lake, Yukon while Yukon Highway 2 cuts south to Skagway, Alaska. Eventually, Yukon Highway 1 intersects with Yukon Highway 8 and Yukon Highway 7 at Jake's Corner, Yukon; the Pan-American Highway continues on Yukon 1 east-northeast from this junction.
At Johnson's Crossing, Yukon Highway 1 meets Yukon Highway 6 and travels southeast through Teslin, Yukon. The Pan-American Highway continues on Yukon 1 as it crosses over into British Columbia (B. C.). After several miles, the Highway reenters the Yukon (once again as Highway 1) and continues southeast of Watson Lake until it, once again, enters British Columbia as B.C. Highway 97.
British Columbia
After travelling about past the British Columbia–Yukon border, the Pan-American Highway reaches the first settlement in British Columbia at Lower Post. After travelling about east, the highway once again re-enters the Yukon for roughly . The Highway then re-enters British Columbia (as BC 97) for the final time. The Pan-American Highway continues south to southeast through a long uninhabited stretch until it passes through the villages of Fireside and Coal River, then runs east parallel to the Liard River.
The Pan-American Highway continues on B.C. Highway 97 as it passes through Toad River Post, and then Summit Lake, which is nestled between Stone Mountain and Mount Saint George. Further down the road, B.C. Highway 97 intersects with B.C. Highway 77; the Pan-American Highway continues along B.C. 97 east to Fort Nelson.
From Fort Nelson, the Highway travels south for about until it reaches Fort St. John. It continues on B.C. Highway 97 southeast for another to reach the end of the Alaska Highway at Dawson Creek.
Alberta
After B.C. Highway 97, the unofficial route becomes Alberta Highway 43. After approximately , Highway 43 enters Demmitt, the first settlement in Alberta, and after about more reaches Grande Prairie. At Clairmont, Alberta Highway 2 branches off while Highway 43 turns left. Highway 43 then runs for before reaching Edmonton. From there the unofficial route splits two ways: one branch runs through Lloydminster toward Minneapolis and Dallas before merging with the second; the second runs to Calgary and the US border.
Contiguous United States
In 1966, the U.S. Federal Highway Administration designated the entire Interstate Highway System as part of the Pan-American Highway System, but this has not been expressed in any of the official interstate signage. Of the many freeways that make up this very comprehensive system, several are notable because of their mainly north–south orientation and their links to the main Mexican route and its spurs, as well as to key routes in Canada that link to the Alaska Highway.
These include the following:
Interstate 5 runs north from San Diego, California, to Blaine, Washington, then links indirectly with British Columbia Highway 99 north of the Canada–US border. A technically direct link between the same interstate and the U.S. Route 97 system can be found near Weed, California. US Route 97 runs northeast then north through Oregon and Washington from this junction, and becomes BC Highway 97 at the border with Canada.
Interstate 15 links San Diego with Alberta Highway 2 that eventually crosses into British Columbia and ends at the southern terminus of the Alaska Highway. Interstate 8 provides an east–west link from San Diego to Interstate 10 near Phoenix, Arizona. The latter continues to Tucson and links with Interstate 19, which becomes a spur of the Pan-American highway through Mexico at the Nogales border crossing.
Interstate 25 runs north from Interstate 10 at Las Cruces, New Mexico, to Interstate 90 in Buffalo, Wyoming. This route has no direct extension into Canada but links indirectly to Interstate 15. Interstate 25 in Albuquerque, New Mexico, was named the Pan-American Freeway, as an extension of Highway 45, the Mexican spur linking El Paso to the original route along highway 85 north of Mexico City. This portion of I-25 largely follows the historic Camino Real, and thus serves a culturally significant portion of the Pan American system. Like I-15, the complete route of Interstate 25 is an official northerly continuation toward Alberta, where Highway 2 provides a direct but unofficial Canadian link to the Alaska Highway.
Interstate 35 is a northerly continuation of the original Pan-American Highway, following Mexican Federal Highway 85. It extends from Laredo, Texas, to Duluth, Minnesota, with a spur, Interstate 29, that leads farther west toward Winnipeg, Manitoba. The section of Interstate 35 in San Antonio, Texas, is referred to as the Pan Am Expressway by locals. From Duluth, Minnesota State Highway 61 continues to the Canada–US border near Thunder Bay, Ontario. This route was first proposed in a 1932 bill introduced in the U.S. Congress. The Trans-Canada Highway provides a link from Winnipeg and Thunder Bay to Alberta and the Alaska Highway, but it is not officially part of the Pan-American Highway.
U.S. Route 81 is claimed to be part of the Pan American Highway from Wichita, Kansas, to Watertown, South Dakota, where it runs separately from Interstate 29.
An additional route, only partially complete, is Interstate 69, which will eventually run northeasterly from the Laredo–Nuevo Laredo border crossing to the Windsor-Quebec City Corridor in Canada, where the route becomes unofficial.
Related North American highways
Several North American routes have names that make no direct reference to the Pan-American Highway, in part because some sections follow highways that are not up to full freeway standard.
The CANAMEX Corridor is designated from Mexico City through the western United States from Arizona to Montana, continuing north into western Canada. Although the Pan-American Highway has no official status in Canada, the CANAMEX Corridor is the only officially designated North American highway running through Canada, the U.S., and Mexico, linking the Alaska Highway with the Pan-American Highway at Mexico City. Unlike corresponding Pan-American routes in the American southwest, the CANAMEX route bypasses San Diego, using several non-Interstate highways as a shortcut from I-15 at Las Vegas, Nevada, to I-10 at Phoenix, Arizona, for traffic accessing I-15 from the Nogales border crossing.
The CanAm Highway follows Interstate 25 from El Paso to U.S. Route 85 north of Denver, Colorado, then continues into the Canadian province of Saskatchewan, following parts of provincial highways 35, 39, 6, 3, and 2 in succession before terminating at La Ronge. This route was first proposed during the 1920s but was never properly promoted nor developed. A section of the CanAm in southern Saskatchewan has deteriorated to the point where it is no longer a paved highway.
The NAFTA Superhighway tag has been unofficially used in connection with Interstate 35 from Laredo, Texas to the Canadian border; there it downgrades to a non-freeway route ending at Thunder Bay, Ontario. A spur follows Interstate 29 to the border, where it also downgrades to an arterial highway that extends to Winnipeg, Manitoba. The NAFTA highway sometimes unofficially includes Interstate 69, which is mostly complete from western Kentucky to the Canada–US border at Port Huron, Michigan. In Canada, Ontario Highway 402 and other freeways in the Windsor-Quebec City Corridor can be considered a northeastward extension of this version of the NAFTA superhighway. To the southwest, from western Kentucky to the Mexican border, there is no single superhighway yet completed. Pending completion of I-69, the main highway links to Mexico follow parts of US routes 45 and 51 from Kentucky to western Tennessee, I-155 into Missouri, parts of Interstates 55 and 40 from Missouri to Arkansas, and I-30 to the Texas stretch of I-35 that continues to the Mexican border at Laredo, Texas. The section of I-69 to be completed south of Kentucky is expected eventually to continue southwestward to the Texas Gulf Coast. It will have a spur linking to the original Pan-American route through Mexico to Laredo, and additional branches extending to the Mexican spurs that cross the border at Pharr, Texas, and Brownsville, Texas.
Mexico
The official route of the Pan-American Highway through Mexico (where it is known as the Inter-American Highway) starts at Nuevo Laredo, Tamaulipas (opposite Laredo, Texas), and goes south to Mexico City along Mexican Federal Highway 85.
An alternative route begins at the border crossing between San Diego, California and Tijuana, Baja California. Interstate 5 in the United States connects to Mexican Federal Highway 1 at the busiest international border crossing in the world. The Pan-American Highway continues south to Mexico City along two separate routes; historic Mexican Federal Highway 1 and toll Mexican Federal Highway 1D via Baja California Peninsula or Mexican Federal Highway 2 via the mainland.
The Pan-American Highway (as Mexico Highway 85D) enters Mexico City, but downtown Mexico City can be bypassed using Mexico Highway 136 (a divided limited-access route) and Mexico Highway 115, which reconnects to Mexico Highway 95D south of the Mexican Federal District.
Later branches were built to the border as follows:
Nogales spur – Mexican Federal Highway 15 from Mexico City
El Paso spur – Mexican Federal Highway 45 from Highway 85 north of Mexico City to Ciudad Juárez, Chihuahua
Eagle Pass spur – unknown, possibly Mexican Federal Highway 57 from Mexico City to Piedras Negras, Coahuila
Pharr spur – Mexican Federal Highway 40 from Monterrey to Reynosa, Tamaulipas
Brownsville spur – Mexican Federal Highway 101 from Ciudad Victoria to Matamoros, Tamaulipas
From Mexico City to the border with Guatemala, the highway follows Mexican Federal Highway 190. In the inaugural Carrera Panamericana road race, organized by the Mexican government, the terminus of this southern route was said to be at Ciudad Cuauhtémoc, Chiapas, at the Guatemalan border.
Specifically, as the Pan-American Highway continues south of Mexico City, it runs through the city of Cuernavaca about south of the Mexican capital. Here, the Pan-American Highway heads east along Federal Highway 190 through the state of Puebla; for about , it is a limited-access divided highway. The route then reverts to an undivided highway and enters the state of Oaxaca. From Huajuapan de León to the Oaxaca state capital of Oaxaca is about .
From the city of Oaxaca, the Highway continues southeast as Mexico Highway 190 for about to the village of Juchitán de Zaragoza. The Pan-American Highway is now in southern Mexico, which is a combination of small mountains, hills, and jungles. It is another to the border with the state of Chiapas where the Highway crosses the Continental Divide. From the Oaxaca-Chiapas state border, it is to the Chiapas state capital of Tuxtla Gutiérrez. The Highway then crosses the Mexico-Guatemala border at Ciudad Cuauhtémoc.
Central America
The Pan-American (or Inter-American) highway passes through the Central American countries with the highway designation of CA-1 (Central American Highway 1). Belize was supposedly included in the route at one time, after it switched to driving on the right. Prior to independence, as British Honduras, it was the only Central American country to drive on the left side of the road.
Guatemala
Upon crossing into Guatemala, Mexico Highway 190 becomes Central America Highway 1 and continues for about from the border village of La Mesilla to the city of Huehuetenango near the Maya ruins of Zaculeu. The Pan-American Highway crosses the Continental Divide again, and into the Sierra de los Cuchumatanes mountains.
From Huehuetenango to Chimaltenango is roughly with Mayan ruins at Iximché, just north of Tecpán Guatemala. From Chimaltenango, it is about to Guatemala City, the capital and largest city in Central America.
From Guatemala City to Cuilapa is about and another to Jutiapa. The highway continues as CA Highway 1 and approaches the border with El Salvador. It is to the border crossing at San Cristobal Frontera.
In Guatemala, the Pan-American Highway passes through 10 departments, including the department of Guatemala, home to Guatemala City.
El Salvador
El Salvador is the smallest country (by area) along the route of the Pan-American Highway. After crossing into El Salvador at Candelaria de la Frontera, the Inter-American Highway continues on toward Santa Ana as Central America Highway 1. From the border crossing to Santa Ana is about .
From Santa Ana it is about to San Salvador, El Salvador's capital and largest city. At Nueva San Salvador, the highway passes near the Volcano de San Salvador.
From San Salvador to Cojutepeque is about ; following the highway southeast to San Miguel is about . From San Miguel to the El Salvador-Honduras border is about .
In El Salvador, the highway also passes through the cities of Santa Tecla, Antiguo Cuscatlán, and San Martín.
Honduras
The highway crosses the border into Honduras at El Amatillo near Nacaome ( from border). Just past Nacaome is a highway traveling north to Tegucigalpa, the capital of Honduras. Traveling south, it is to Choluteca, the fourth-largest city in Honduras. From Choluteca to the border crossing, just past San Marcos de Colón, is about . The Pan-American Highway's total distance in Honduras is about .
Nicaragua
From Honduras, it passes into Nicaragua at El Espino, passing through the Nicaraguan cities of Somoto, Estelí, Sebaco, Managua, Jinotepe, and Rivas before entering Costa Rica at Peñas Blancas.
From the crossing at the Honduras-Nicaragua border, the highway continues as Central America Highway 1 to the town of Ocotal, about . From Ocotal to Estelí is about , and on to the village of Sébaco is about . At this point, the Inter-American Highway turns from southeast to south towards Ciudad Darío, which is from Sébaco. From Ciudad Darío to the village of San Benito is .
From San Benito, it is about to the Nicaraguan capital and largest city of Managua, on the shores of Lake Managua. From Managua south to the town of Jinotepe is about , and Jinotepe to the town of Rivas is about . Around this area the Highway is in view of Lake Nicaragua, which is the largest lake in Central America. From Rivas to the Nicaragua-Costa Rica border is about .
Costa Rica
In Costa Rica, the Pan-American Highway is known as the Carretera Interamericana (Inter-American Highway) and is composed of two segments: the Carretera Interamericana Norte (Route 1) and the Carretera Interamericana Sur (Route 2).
It passes through Liberia, San José, Cartago, Pérez Zeledón, Palmares, and Neily before crossing into Panama at Paso Canoas.
The highest point in the entire Pan-American Highway occurs at the Cerro de la Muerte (Death Hill) in the Carretera Interamericana Sur segment, at .
An alternative route, used by cross-country buses and freight transport to avoid the Greater Metropolitan Area and the Cerro de la Muerte, takes Route 23 from Route 1 in Puntarenas canton, then Route 27 and Route 34, joining Route 2 in Osa canton.
After entering Costa Rica, the Highway separates two national parks, the Santa Rosa National Park to the west and Guanacaste National Park to the east. From the Nicaragua-Costa Rica border to the town of Liberia is about . In this region of Costa Rica, the Pan-American Highway runs just west of the Cordillera de Guanacaste (Guanacaste Mountains), which includes the active volcanoes of Rincón de la Vieja and Miravalles. While travelling through Costa Rica north of San José, the highway route is known as Costa Rica Highway 1 instead of CA Highway 1. From San José south to Panama, the highway route is known as Costa Rica Highway 2.
From Liberia to the town of Barranca is . From Barranca, the Cordillera de Tilarán (Tilarán Mountains) can be seen from the Inter-American Highway. The Tilarán range includes Arenal, one of the world's most active volcanoes. From Barranca, the highway heads east across the mountains and the Continental Divide once again. From Barranca, it is roughly to the town of Alajuela.
After Alajuela, the Cordillera Central (Central Mountains) comes into view from the Inter-American Highway. The Central Mountains include four large volcanoes: Poás, Barva, Irazú and Turrialba. From Alajuela to San José is about .
San José is the capital and largest city in Costa Rica. Leaving San José, the Inter-American Highway winds its way roughly southeast. From San José to San Isidro de El General is about .
From San Isidro, the Cordillera de Talamanca (Talamanca Mountains) rise up from the rain forest canopy. The Talamanca range, which is non-volcanic, includes Cerro Chirripó, Costa Rica's highest mountain peak at . From San Isidro to Palmar Sur is roughly , and Palmar Sur to the Costa Rica-Panama border is about 91.9 km (57.1 mi).
Panama
From the Costa Rica-Panama border to the village of La Concepción is about , and from La Concepción to the city of David is about another . The highway enters Panama traveling generally from west to east.
David, the capital of Chiriquí Province, is located about north of the town of Pedregal and the Gulf of Chiriquí. From David, the Highway travels east about to Tolé. From Tolé to the town of Santiago is about ; about halfway to Santiago, the Pan-American Highway crosses over the San Pablo River.
From Santiago to Aguadulce is about , where the Pan American Highway reenters the tropical lowlands. From Aguadulce to Penonomé is about . This stretch of highway crosses the Santa María river.
From Penonomé, the highway travels southeast, then northeast, then roughly north in a loop as it avoids crossing into Panama's Central Mountains. From Penonomé to La Chorrera is about . From La Chorrera, it is only about to Balboa, just west of Panama City.
Panama City is the capital and largest city in Panama. Before entering the city, the Pan-American Highway crosses over the Panama Canal on the Centennial Bridge. From Panama City, the Highway turns northeast. From Panama City to Chepo is roughly ; from Chepo to Cañita is another .
At the village of Cañita is the old terminus of the northern route of the Pan-American Highway. The highway continues another past Cañita to Yaviza, a village near the junction of the Tuira and Chucunaque rivers. It is here that the Pan-American Highway officially ends. Southeast of here lies the virtually impenetrable Darién Gap, a stretch of rugged, mountainous jungle terrain. The highway is now being extended 6 km to Pinogana, which will include a bridge over the Chucunaque River. The road had formerly ended at Cañita, Panama, north of its current end.
United States government funding was particularly significant to complete the high-level Bridge of the Americas over the Panama Canal, during the years when the canal was administered by the United States.
Darién Gap
The Pan-American Highway is interrupted between Panama and Colombia by a stretch of marshland known as the Darién Gap. The highway terminates at Turbo, Colombia, and Yaviza, Panama. Because of swamps, marshes, and rivers, construction would be very expensive. One can cross through on foot, but it is both very difficult and very dangerous.
Efforts have been made for decades to eliminate the gap in the Pan-American highway, but have been controversial. Planning began in 1971 with the help of United States funding, but this was halted in 1974 after concerns raised by environmentalists. Another effort to build the road began in 1992, but by 1994 a United Nations agency reported that the road, and the subsequent development, would cause extensive environmental damage. The Embera-Wounaan and Kuna have also expressed concern that the road could bring about the potential erosion of their cultures.
The Darién Gap has challenged adventurers for many years. A 1962 expedition with three Chevrolet Corvair rear-engine cars and two support trucks completed the trip south from Chicago through to the Colombian border. A 1971–72 British expedition from Alaska to Argentina attempted to transit the Gap with two standard production Range Rovers, supported by a team of Land Rovers. They barely succeeded in thrashing a passage through the extreme terrain. In 1979, a team led by Mark Smith drove standard production CJ7-model Jeeps from South to North, traversing the Gap with difficulty. In 1984, Loren and Patty Upton made the first "all land" vehicle crossing of the Gap using a 1966 Jeep CJ-5 named Sand Ship Discovery. It took months of slogging, winching, chopping, and digging their way through the inhospitable jungles of the Darién Gap.
One proposed option to bridge the gap is a short ferry link from Colombia to a new ferry port in Panama, with an extension of the existing Panama highway that would complete the highway without violating these environmental concerns. However, past attempts to operate such a service have ended in failure.
Southern section
Colombia and Venezuela
The southern part of the highway begins in Turbo, Colombia, from where it follows Colombia Highway 62 to Medellín. On Google Maps, the beginning of the highway is marked at the end of a road built out along the proposed extended highway, in the flatlands 11 miles from the Atrato River and 20 miles from the main highway, at a place called "El Cuarenta" or "Lomas Aisladas" ("Isolated Hills"), where a marker reads "Inicio del tramo sur de la carretera Panamericana" ("Start of the southern section of the Pan-American Highway"). At Medellín, Colombia Highway 56 leads to Bogotá, but Colombia Highway 25 turns south for a more direct route. Colombia Highway 40 is routed southwest from Bogotá to join Highway 25 at Zarzal. Highway 25 continues all the way to the border with Ecuador.
Travelers along the Inter-American Highway portion of the Pan-American Highway in Panama can take a ferry from Panama City to the port of Buenaventura, which is 115 km northwest of Cali. Cali represents a major junction between Buenaventura and two northern spurs of the Pan-American Highway that connect from northern Colombia and Venezuela.
The main route of the Pan-American Highway in Colombia (starting from the northeast) begins just east of Cúcuta, the capital city of the department of Norte de Santander. The highway follows Colombia Route 55 for 63 km from Cúcuta to Pamplona, where it shifts to Colombia Route 66 for 45 km to reach the border with the department of Santander.
From the department border, Route 66 continues southwest for 50 km toward Bucaramanga, capital of the Santander department, located on a plateau in the Cordillera Oriental. From Bucaramanga, the Pan-American Highway switches from Route 66 to Colombia Route 45A, which it follows south-southwest to the town of Barbosa. This 203-km stretch on Route 45A is a toll road. Approximately 26 km of this stretch enters the department of Boyacá and reenters Santander between Vado Real and Guepsa. From Barbosa, the Pan-American Highway switches from Route 45A to Colombia Route 62 and immediately reenters Boyacá towards Tunja, the capital of Boyacá.
A 53-km stretch of highway connects Barbosa with Tunja, an important agricultural and mining center in the region. The Pan-American Highway switches routes again in Tunja, returning to Colombia Route 55 on its way to Cundinamarca and the national capital, Bogotá. The stretch of highway from Tunja to the departmental border with Cundinamarca is 54 km and is a toll road. From the Cundinamarca departmental line, the highway continues another 26 km without tolls before becoming a toll road again. From that point, the highway reaches Bogotá in 52 km.
In Bogotá, the highway crosses from the north to the southwest portion of the city, switching from Route 55 to Colombia Route 40. Continuing as a toll road from Bogotá, it travels for 128 km through Fusagasugá to the departmental border with Tolima.
From the Tolima departmental border, the highway continues as a toll road for another 16 km to El Espinal. It travels west and after another 37 km, reaches the city of Ibagué. From Ibagué to the Quindío departmental border near La Linea is about 77 km.
Upon crossing into Quindío, Colombia Route 40 continues westward for another 4 km before reaching Calarcá, where Route 40 splits into two spurs. One enters Armenia. The spurs rejoin about 18 km southwest of Calarcá at the town of Club Campestre. From Club Campestre, it covers another 16 km until reaching the Valle del Cauca departmental border.
From the border, the Pan-American Highway travels 26 km to the town of La Paila, where it crosses Colombia Route 25. At this junction, the two routes merge and become a toll road for 61 km from that point to Buga. At Buga, Route 40 splits west toward the city of Buenaventura and the Pacific Ocean; the Pan-American Highway continues south on Route 25 for another 42 km until arriving near Palmira. From Palmira, the highway continues southwest for 23 km, where it reaches the large city of Cali. From Cali to the Cauca departmental border is 19 km.
The highway continues south along Route 25 throughout its length in the department of Cauca. After 50 km, at Santander de Quilichao, the highway becomes a toll road, reaching Popayán after another 74 km. It continues generally southwest from Popayán, reaching Mojarras after 135 km. At Mojarras, Route 25 splits into two spurs; the western spur is preferred for traveling south toward the Nariño departmental border. From Mojarras to the departmental border near Remolino is 36 km.
From the Nariño departmental border, the highway continues south as Route 25. From the border to the city of Pasto is 84 km. From Pasto to the national border between Colombia and Ecuador near Ipiales is 82 km. The Pan American crosses the border at the Rumichaca Bridge.
Simón Bolívar Highway
Another route, known as the Simón Bolívar Highway, runs from Bogotá (Colombia) to La Guaira (Venezuela). It begins by using Colombia Highways 55 & 66 all the way to the border with Venezuela. From there it uses Venezuela Highway 1 to Caracas and Venezuela Highway 9 to its end at La Guaira.
Venezuela Highway 9 ends in the east at Güiria, a small town in the state of Sucre just west of Trinidad along the Gulf of Paria coastline. From Güiria, the highway winds west towards the town of Yaguaraparo.
Once the highway reaches Yaguaraparo, on the southern portion of the Paria Peninsula along the Gulf of Paria, Highway 9 continues west for approximately to the towns of Casanay and Pantoño.
Upon reaching Casanay, the highway crosses Venezuela Highway 10, a major north–south highway. From Casanay and neighboring Pantoño, it continues west, paralleling the Gulf of Cariaco. At Villa Frontado, the highway crosses Secondary Highway 2, which travels south into the state of Monagas. The distance from Casanay to Cumaná is about . From Cumaná, the road travels southwest approximately 65 km to the border with the state of Anzoátegui.
After crossing into Anzoátegui, it almost immediately enters the city of Barcelona.
Travelling from Barcelona and nearby Puerto La Cruz, the highway continues westward. For about 47 km, it becomes a limited-access expressway, returning to a two-lane highway at Puerto Píritu. The highway travels another 62 km before reaching the border with the state of Miranda at the town of Boca de Uchire. This portion includes a short run through the llanos, or Venezuelan savannas.
About 34 km west of Boca de Uchire, the highway starts climbing up the Cordillera Central, in the Andes mountains. Highway 9 begins here to move further away from the Caribbean Sea coastline. From Boca de Uchire to El Guapo the distance is 65 km; from El Guapo to Caucagua adds 59 km.
At Caucagua, the Pan-American Highway crosses Venezuela Highway 12. Highway 9 continues through the Cordillera Central. After about 21 km, the highway becomes a limited-access expressway for 32 km west towards the Caracas metropolitan area and the Venezuelan Federal District.
Maracay is the capital city of Aragua state in central Venezuela. It was officially established on March 5, 1701, by Bishop Diego de Baños y Sotomayor in the valleys of Tocopio and Tapatapa (what is known today as the central valley of Aragua) in northern Venezuela. From Maracay the highway extends about 44 km to Valencia, passing San Joaquín and near Guacara en route to the city.
Valencia is the capital city of Carabobo state. In Valencia, the highway shifts northwest and passes through mountains that are part of the Sistema Coriano. The distance between Valencia and the small town of El Palito on the Caribbean Sea is approximately 40 km. At El Palito, the road joins with Venezuela Highway 3 for 10 km, then splits off at Morón. The distance from Morón to the state border with Yaracuy state and the village of Guaremal is about 20 km.
From Guaremal to San Felipe, the major city in Yaracuy, is about 26 km. The city itself is located on a route adjacent to Highway 1. The road runs 73 km from San Felipe to the state border with Lara (located just past the town of Cambural). The distance from the border to Barquisimeto is about 16 km.
From Barquisimeto, Highway 1 continues roughly west, then southwest (at around Agua Salada) for 147 km to the state line with Trujillo, near El Empedrado.
Once in the state of Trujillo (near the town of Parajá), the Pan-American Highway continues in a southwest direction; it does not travel through the state capital city of Trujillo but connects to Trujillo by way of state highways 3 and 1. The highway's length in this state is about 111 km.
The highway enters Mérida state near Arapuey. As in Trujillo, the highway does not travel through the major population centers, of which the largest is Mérida. Highway 1 connects to Mérida via an 88 km stretch of Mérida state highway 4. The highway covers 104 km in Mérida.
Upon entering the state of Táchira, the highway extends about 58 km from the border to the junction with Venezuela Highway 6. From the junction to the city of San Cristóbal the distance is 44 km, although there is a separate expressway that parallels the Pan-American Highway along this stretch. From San Cristóbal to the Venezuela-Colombia border, near San Antonio de Táchira, the distance is 32 km.
Peru, Ecuador and Chile
Ecuador Highway 35, the "Troncal de la Sierra" (Highland Road), runs the whole length of the country and is commonly known to Ecuadorians as "La Panamericana"; it forms Ecuador's contribution to the project. It connects cities and towns of the Sierra region, from Tulcán in the north (on the border with Colombia), through Quito, the country's capital, to the southern border with Peru. Part of this highway is a toll road administered by Panavial, a private concessionary. The road condition is quite good, but the route mostly runs through mountains and has some rough stretches around the province of Cañar, in the center of the country, making it a fairly dangerous road to drive on.
The Ecuadorian portion begins at the Colombian border in Carchi province and almost immediately enters the city of Tulcán, the capital of Carchi province. From Tulcán, the road continues south for 125 km, reaching Ibarra. From Ibarra, the highway continues south for 115 km until reaching Quito. From Quito, the highway continues south for 89 km next reaching Latacunga.
From Latacunga, the Pan-American Highway continues south for 47 km to Ambato. From Ambato, the road continues south for 52 km to Riobamba. Azogues is the capital of Cañar Province; from Azogues, the road widens to six lanes (three north, three south), built in 1995, until it reaches Cuenca.
From north of the city, the Pan-American Highway takes the name Avenida Circunvalación Sur (South Ring Avenue) until exiting to the south, where it shrinks to two lanes (one north, one south), a stretch enlarged in 2009, until reaching Loja.
Peru Highway 1 carries the Pan-American Highway all the way through Peru to the border with Chile. The northern terminus of the highway is located in Aguas Verdes (Tumbes Region) at the border with Ecuador. Starting in this point, the highway is known as Carretera Panamericana Norte ("North Pan-American Highway") until it reaches a point located in central Lima, the country's capital.
From this point south the highway is called Carretera Panamericana Sur ("South Pan-American Highway"), until it reaches the southern border, located in the Santa Rosa Border Post (36 km south of Tacna, the highway's closest major city), in the Tacna Region at the border with Chile.
From Lima to Cañete is 148 km.
In Chile, the highway follows Chile Route 5 south to Llaillay, a point north of Santiago, where the highway splits into two parts. One goes through Chilean territory to Puerto Montt, where it splits again, to Quellón on Chiloé Island and to its continuation as the Carretera Austral. The other part goes east along Chile Route 60, which passes through the Andes to the Christ the Redeemer Tunnel, inside the Los Libertadores Pass. The Chilean-Argentinian border is located in the middle of the tunnel.
Argentina and Paraguay
In Argentina, the Argentina National Route 7 starts in the Christ the Redeemer tunnel, and continues to Buenos Aires, the end of the main highway. The highway network also continues south of Buenos Aires along Argentina National Route 3 towards the city of Ushuaia in Tierra del Fuego. Another branch, from Buenos Aires to Asunción in Paraguay, heads out of Buenos Aires on Argentina National Route 9. It switches to Argentina National Route 11 at Rosario, which crosses the border with Paraguay right at Asunción. Other branches probably exist across the center of South America.
Brazil and Uruguay
A continuation of the Pan-American Highway to the Brazilian cities of São Paulo and Rio de Janeiro uses a ferry from Buenos Aires to Colonia in Uruguay and Uruguay Highway 1 to Montevideo. Uruguay Highway 9 and Brazil Highway 471 lead to near Pelotas, from where Brazil Highway 116 continues to Brazil's main cities.
Guyana, Suriname and French Guiana
The highway has no official segments to Belize, Guyana, Suriname, or French Guiana, nor to any of the island nations in the Americas. However, highways from Venezuela link to the Brazilian Trans-Amazonian Highway, which provides a southwestern entrance to Guyana, a route to the coast, and a coastal route through Suriname to French Guiana.
West Indies section
Plans have been discussed for including the West Indies in the Pan American Highway system. According to these, a system of ferries would be established to connect terminal points of the highway. Travelers would then be able to ferry from Key West to Havana, drive to the eastern tip of Cuba, ferry to Haiti, drive through Haiti and the Dominican Republic, and ferry again to Puerto Rico. Included in this system would also be a ferry from the western tip of Cuba to the Yucatán Peninsula. Mexico has already surveyed a route which will run across the Yucatán, Campeche, and Chiapas to San Cristobal de Las Casas, on the Pan American Highway.
("The Pan American Highway System" by Travel Division Pan American Union, Washington D.C. October 1947)
Art and culture
Travel writer Tim Cahill wrote a book, Road Fever, about his record-setting 24-day drive from Ushuaia in the Argentine province of Tierra del Fuego to Prudhoe Bay in the U.S. state of Alaska with professional long-distance driver Garry Sowerby, much of their route following the Pan-American Highway.
In the British motoring show Top Gear, the presenters drove on a section of the road in their off-road vehicles in the Bolivian Special.
In 2003, Kevin Sanders, a long-distance rider, broke the Guinness World Record for the fastest traversal of the highway by motorcycle, completing it in 34 days.
In 2018, British cyclist Dean Stott, who had planned on riding the length of the Americas in 110 days to set a new Guinness World Record, completed the journey in just under 100 days, riding south-to-north, breaking the record, set by Mexico's Carlos Santamaría Covarrubias in 2015, by 17 days. Stott was inspired to push the timetable after learning that he and his wife had been invited to the wedding of Prince Harry and Meghan Markle, and would have missed the event had he stuck to his original schedule. Stott's record lasted just a couple of months, as Austrian endurance cyclist Michael Strasser, riding north-to-south, broke the record with a time of 84 days, 11 hours and 50 minutes (July 23 – October 16, 2018).
In 2024, American endurance cyclist Bond Almand IV broke Strasser's record, riding north-to-south in 75 days, 17 hours, and 55 minutes (August 31 – November 15, 2024).
| Technology | Ground transportation networks | null |
301394 | https://en.wikipedia.org/wiki/Piranha | Piranha | A piranha or piraña is any of a number of freshwater fish species in the family Serrasalmidae, or the subfamily Serrasalminae within the tetra family, Characidae, in the order Characiformes. These fish inhabit South American rivers, floodplains, lakes and reservoirs. Although often described as extremely predatory and mainly feeding on fish, their dietary habits vary extensively, and they will also take plant material, leading to their classification as omnivorous.
Etymology
The name originates from Old Tupi pirãîa, being first attested in the 1587 treatise by Portuguese explorer Gabriel Soares de Sousa. Piranha first appears in 1869 in English literature, likely borrowed from Portuguese.
Taxonomy and evolution
Piranhas belong to the family Serrasalmidae, which includes closely related omnivorous fish such as pacus. Traditionally, only the four genera Pristobrycon, Pygocentrus, Pygopristis, and Serrasalmus are considered to be true piranhas, due to their specialized teeth. However, a recent analysis showed that, if the piranha group is to be monophyletic, it should either be restricted to Serrasalmus, Pygocentrus, and part of Pristobrycon, or expanded to include these taxa plus Pygopristis, Catoprion, and Pristobrycon striolatus. Pygopristis was found to be more closely related to Catoprion than to the other three piranha genera.
The total number of piranha species is unknown and contested, and new species continue to be described. Estimates range from fewer than 30 to more than 60.
Distribution
Piranhas are indigenous to the Amazon basin, in the Orinoco, in rivers of the Guianas, in the Paraguay–Paraná, and the São Francisco River systems, but there are major differences in the species richness. In a review where 38–39 piranha species were recognized, 25 were from the Amazon and 16 from Orinoco, while only three were present in Paraguay–Paraná and two in São Francisco. Most species are restricted to a single river system, but some (such as the red-bellied piranha) occur in several. Many species can occur together; for example, seven are found in Caño Maporal, a stream in Venezuela.
Aquarium piranhas have been unsuccessfully introduced into parts of the United States. In many cases, however, reported captures of piranhas are misidentifications of pacu (e.g., red-bellied pacu or Piaractus brachypomus is frequently misidentified as red-bellied piranha or Pygocentrus nattereri). Piranhas have also been discovered in the Kaptai Lake in southeast Bangladesh. Research is being carried out to establish how piranhas have moved to such distant corners of the world from their original habitat. Some rogue exotic fish traders are thought to have released them in the lake to avoid being caught by antipoaching forces. Piranhas were also spotted in the Lijiang River in China.
Description
Size
Depending on the exact species, most piranhas grow to between long. A few can grow larger, with the largest living species, the red-bellied, reaching up to . There are claims of São Francisco piranhas at up to , but the largest confirmed specimens are considerably smaller. The extinct Megapiranha, which lived 8–10 million years ago, reached about long, and possibly even .
Morphology
Serrasalmus, Pristobrycon, Pygocentrus, and Pygopristis are most easily recognized by their unique dentition. All piranhas have a single row of sharp teeth in both jaws. The teeth are tightly packed and interlocking (via small cusps) and are used for rapid puncture and shearing. Individual teeth are typically broadly triangular, pointed, and blade-like (flat in profile). The variation in the number of cusps is minor. In most species, the teeth are tricuspid with a larger middle cusp which makes the individual teeth appear markedly triangular. The exception is Pygopristis, which has pentacuspid teeth and a middle cusp usually only slightly larger than the other cusps.
Biting abilities
Piranhas have one of the strongest bites found in bony fishes. Relative to body mass, the black piranha (Serrasalmus rhombeus) produces one of the most forceful bites measured in vertebrates. This extremely powerful and dangerous bite is generated by large jaw muscles (adductor mandibulae) that are attached closely to the tip of the jaw, conferring the piranha with a mechanical advantage that favors force production over bite speed. Strong jaws combined with finely serrated teeth make them adept at tearing flesh.
Ecology
Piranhas vary extensively in ecology and behavior depending on exact species. Piranhas, especially the red-bellied (Pygocentrus nattereri), have a reputation as ferocious predators that hunt their prey in schools. Recent research, however, which "started off with the premise that they school as a means of cooperative hunting", discovered they are timid fish that schooled for protection from their own predators, such as cormorants, caimans, and dolphins. Piranhas are "basically like regular fish with large teeth". A few other species may also occur in large groups, while the remaining are solitary or found in small groups.
Although popularly described as highly predatory and primarily feeding on fish, piranha diets vary extensively, leading to their classification as omnivorous. In addition to fish (occasionally even their own species), documented food items for piranhas include other vertebrates (mammals, birds, reptiles), invertebrates (insects, crustaceans), fruits, seeds, leaves and detritus. The diet often shifts with age and size. Research on the species Serrasalmus aff. brandtii and Pygocentrus nattereri in Viana Lake in Maranhão, which is formed during the wet season when the Pindaré River (a tributary of the Mearim River) floods, has shown that they primarily feed on fish, but also eat vegetable matter. In another study of more than 250 Serrasalmus rhombeus from the Ji-Paraná (Machado) River, 75% to 81% (depending on season) of the stomach content was fish, but about 10% was fruits or seeds. In a few species, such as Serrasalmus serrulatus, the dietary split may be more equal, though this is less certain because it is based on smaller samples: among 24 S. serrulatus from flooded forests of the Ji-Paraná (Machado) River, several had fish remains in their stomachs, but half contained masticated seeds, and in most of these the seeds were the dominant item. Piranhas will often scavenge, and some species such as Serrasalmus elongatus are specialized scale-eaters, feeding primarily on scales and fins of other fish. Scale- and fin-eating is more widespread among juvenile and sub-adult piranhas.
Piranhas lay their eggs in pits dug during the breeding season and swim around to protect them. Newly hatched young feed on zooplankton, and eventually move on to small fish once large enough.
Relationship with humans
Piranha teeth are often used as tools themselves (such as for carving wood or cutting hair) or to modify other tools (such as sharpening of darts). This practice has been documented among several South American tribes including the Camayura and Shavante in Brazil and the Pacahuara in Bolivia.
Piranhas are also popular as food, being both eaten as a subsistence catch by fishers and sold at market. However, they are often considered a nuisance by fishers because they steal bait, eat catches, damage fishing gear, and may bite when accidentally caught.
Piranhas can be bought as pets in some areas, but they are illegal in many parts of the United States, and in the Philippines, where importers face six months to four years in jail, and the piranhas are destroyed to prevent proliferation.
The most common aquarium piranha is Pygocentrus nattereri, the red-bellied piranha. Piranhas can be bought fully grown or as young, often no larger than a thumbnail. It is important to keep Pygocentrus piranhas alone or in groups of four or more, not in pairs: aggression among them is common, and in a pair the weaker fish rarely survives, whereas in larger groups the aggression is distributed more widely. It is not uncommon to find individual piranhas with one eye missing due to a previous attack.
Attacks
Although often described as extremely dangerous in the media, piranhas typically do not represent a serious risk to humans. However, attacks have occurred, especially when the piranhas are in a stressed situation, such as the dense groups that may form when the water is low during the dry season and food is relatively scarce. Swimming near fishermen may increase the risk of attack because of the commotion caused by struggling fish and the presence of bait in the water. Splashing attracts piranhas, and for this reason children are more often attacked than adults. Being in the water when already injured or otherwise incapacitated also increases the risk. There are sometimes warning signs at high-risk locations, and beaches in such areas are sometimes protected by a barrier.
Most piranha attacks on humans result only in minor injuries, typically to the feet or hands, but attacks are occasionally more serious and very rarely fatal. Near the city of Palmas in Brazil, 190 piranha attacks, all involving single bites to the feet, were reported in the first half of 2007 in an artificial lake that appeared after the damming of the Tocantins River. In the state of São Paulo, a series of attacks in 2009 in the Tietê River resulted in minor injuries to 15 people. In 2011, another series of attacks at José de Freitas in the Brazilian state of Piauí resulted in 100 people being treated for bites to their toes or heels. On 25 December 2013, more than 70 bathers were attacked at Rosario in Argentina, suffering injuries to their hands or feet. In 2011, a drunk 18-year-old man was attacked and killed in Rosario del Yata, Bolivia. In 2012, a five-year-old Brazilian girl was attacked and killed by a shoal of P. nattereri. In January 2015, a six-year-old girl was found dead with signs of piranha bites on part of her body after her family's canoe capsized during a vacation in Monte Alegre, Brazil. While fatal attacks on humans are rare, piranhas will readily feed on the bodies of people who have already died, such as drowning victims.
Reputation
Various stories exist about piranhas, such as claims that they can skeletonize a human body or a cow in seconds. These legends refer specifically to the red-bellied piranha.
Piranha solution, a dangerous mixture of sulfuric acid and hydrogen peroxide known to aggressively dissolve organic material, draws its name from these legends surrounding the piranha fish.
A common falsehood is that piranhas are attracted by blood and are exclusively carnivorous. A Brazilian legend called "piranha cattle" holds that they sweep the rivers at high speed and attack the first of the cattle entering the water, allowing the rest of the herd to cross the river safely. These legends were dismissed through research by Hélder Queiroz and Anne Magurran, published in Biology Letters.
Accounts from Theodore Roosevelt
When former US president Theodore Roosevelt visited Brazil in 1913, he went on a hunting expedition through the Amazon Rainforest. While standing on the bank of the Amazon River, he witnessed a spectacle created by local fishermen. After blocking off part of the river and starving the piranhas for several days, they pushed a cow into the water, where it was quickly torn apart and skeletonized by a school of hungry piranhas. Roosevelt later described piranhas as vicious creatures in his 1914 book Through the Brazilian Wilderness.
| Biology and health sciences | Characiformes | null |
301647 | https://en.wikipedia.org/wiki/Barrier%20island | Barrier island | Barrier islands are a coastal landform, a type of dune system and sand island, formed by wave and tidal action parallel to the mainland coast. They usually occur in chains, consisting of anything from a few islands to more than a dozen. They are subject to change during storms and other action, but they absorb wave energy, protect the coastline, and create areas of sheltered water where wetlands may flourish. A barrier chain may extend for hundreds of kilometers, with islands periodically separated by tidal inlets. The largest barrier island in the world is Padre Island of Texas, United States, at about 113 miles (182 km) long. Sometimes an important inlet may close permanently, transforming an island into a peninsula and thus creating a barrier peninsula, which often includes a barrier beach.
Though many are long and narrow, the length and width of barriers and overall morphology of barrier coasts are related to parameters including tidal range, wave energy, sediment supply, sea-level trends, and basement controls. The amount of vegetation on the barrier has a large impact on the height and evolution of the island.
Chains of barrier islands can be found along approximately 13–15% of the world's coastlines. They occur in a variety of settings, suggesting that they can form and be maintained in many different environments. Numerous theories have been proposed to explain their formation.
A human-made offshore structure constructed parallel to the shore is called a breakwater. In terms of coastal morphodynamics, it acts similarly to a naturally occurring barrier island by dissipating and reducing the energy of the waves and currents striking the coast. Hence, it is an important aspect of coastal engineering.
Constituent parts
Upper shoreface
The shoreface is the part of the barrier where the ocean meets the shore of the island. The barrier island body itself separates the shoreface from the backshore and lagoon/tidal flat area. Characteristics common to the upper shoreface are fine sands with mud and possibly silt. Further out into the ocean the sediment becomes finer. The effect of waves at this point is weak because of the depth. Bioturbation is common and many fossils can be found in upper shoreface deposits in the geologic record.
Middle shoreface
The middle shoreface lies between the upper and lower shoreface and is strongly influenced by wave action because of its depth. Closer to shore the sand is medium-grained, with shell pieces common. Since wave action is heavier here, bioturbation is unlikely.
Lower shoreface
The lower shoreface is constantly affected by wave action. This results in the development of herringbone sedimentary structures because of the constantly shifting flow directions of the waves. The sand here is coarser.
Foreshore
The foreshore is the area on land between high and low tide. Like the upper shoreface, it is constantly affected by wave action. Cross-bedding and lamination are present, and coarser sands occur because of the high energy delivered by the crashing waves. The sand is also very well sorted.
Backshore
The backshore is always above the highest water level. The berm, which marks the boundary between the foreshore and backshore, is also found here. Wind, rather than water, is the important factor in this zone. During strong storms, high waves and wind can deliver sediment to the backshore and erode it.
Dunes
Coastal dunes, created by wind, are typical of a barrier island and are located at the top of the backshore. The dunes display the characteristics of typical aeolian dunes; the difference is that dunes on a barrier island typically contain the roots of coastal vegetation and traces of marine bioturbation.
Lagoon and tidal flats
The lagoon and tidal flat area is located behind the dune and backshore area. Here the water is still, which allows fine silts, sands, and mud to settle out. Lagoons can become host to an anaerobic environment. This will allow high amounts of organic-rich mud to form. Vegetation is also common.
Location
Barrier islands can be observed on every continent except Antarctica. They occur primarily in tectonically stable areas, such as "trailing edge" coasts facing (moving away from) ocean ridges formed by divergent boundaries of tectonic plates, and around smaller marine basins such as the Mediterranean Sea and the Gulf of Mexico. Areas with relatively small tides and an ample sand supply favor barrier island formation.
Australia
Moreton Bay, on the east coast of Australia and directly east of Brisbane, is sheltered from the Pacific Ocean by a chain of very large barrier islands. Running north to south they are Bribie Island, Moreton Island, North Stradbroke Island and South Stradbroke Island (the last two used to be a single island until a storm created a channel between them in 1896). North Stradbroke Island is the second largest sand island in the world and Moreton Island is the third largest.
Fraser Island, another barrier island lying 200 km north of Moreton Bay on the same coastline, is the largest sand island in the world.
United States
Barrier islands are found most prominently on the United States' East and Gulf Coasts, where every state, from Maine to Florida (East Coast) and from Florida to Texas (Gulf coast), features at least part of a barrier island. Many states have large numbers of barrier islands; Florida, for instance, had 29 (in 1997) along the west (Gulf) coast of the Florida peninsula, plus about 20 others on the east coast and several barrier islands and spits along the panhandle coast. Padre Island, in Texas, is the world's longest barrier island; other well-known islands on the Gulf Coast include Galveston Island in Texas and Sanibel and Captiva Islands in Florida. Those on the East Coast include Miami Beach and Palm Beach in Florida; Hatteras Island in North Carolina; Assateague Island in Virginia and Maryland; Absecon Island in New Jersey, where Atlantic City is located; and Jones Beach Island and Fire Island, both off Long Island in New York. No barrier islands are found on the Pacific Coast of the United States due to the rocky shore and short continental shelf, but barrier peninsulas can be found. Barrier islands can also be seen on Alaska's Arctic coast.
Canada
Barrier islands can also be found in Maritime Canada and other places along the coast. A good example is Miramichi Bay, New Brunswick, where Portage Island, Fox Island and Hay Island protect the inner bay from storms in the Gulf of Saint Lawrence.
Mexico
Mexico's Gulf of Mexico coast has numerous barrier islands and barrier peninsulas.
New Zealand
Barrier islands are more prevalent in the north of both of New Zealand's main islands. Notable barrier islands in New Zealand include Matakana Island, which guards the entrance to Tauranga Harbour, and Rabbit Island, at the southern end of Tasman Bay. | Physical sciences | Oceanic and coastal landforms | Earth science |
301664 | https://en.wikipedia.org/wiki/Watercraft | Watercraft | A watercraft or waterborne vessel is any vehicle designed for travel across or through water bodies, such as a boat, ship, hovercraft, submersible or submarine.
Types
Historically, watercraft have been divided into two main categories.
Rafts, which gain their buoyancy from the fastening together of components that are each buoyant in their own right. Generally, a raft is a "flow-through" structure, whose users have difficulty keeping dry as it passes through waves. Consequently, apart from short journeys (such as a river crossing), their use is confined to warmer regions (roughly 40° N to 40° S); outside this area, the use of rafts at sea is impracticable due to the risk of exposure to the crew. Rafts divide into a number of types: a bundle raft, for example, can be made from papyrus tied into bundles, and these can even be shaped.
Boats and ships, which float by having the submerged part of their structure exclude water with a waterproof surface, so creating a space that contains air, as well as cargo, passengers, crew, etc. In total, this structure weighs less than the water that would occupy the same volume.
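The floating condition just described is Archimedes' principle, and a quick numeric check makes it concrete. The sketch below is only illustrative; the vessel figures in it are invented, not taken from any source.

```python
# Buoyancy check for the displacement principle described above: a hull
# floats if it weighs less than the water its watertight envelope can displace.
# All numbers here are illustrative assumptions.

SEAWATER_DENSITY = 1025.0  # kg per cubic metre, typical seawater

def can_float(vessel_mass_kg: float, enclosed_volume_m3: float) -> bool:
    """True if the water displaced by the enclosed volume outweighs the vessel."""
    displaced_mass_kg = SEAWATER_DENSITY * enclosed_volume_m3
    return displaced_mass_kg > vessel_mass_kg

# A hypothetical 80-tonne hull enclosing 100 cubic metres of air and cargo space:
print(can_float(80_000, 100.0))  # True: ~102.5 tonnes of water would be displaced
```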
Watercraft can be grouped into surface vessels, which include ships, yachts, boats, hydroplanes, wingships, unmanned surface vehicles, sailboards and human-powered craft such as rafts, canoes, kayaks and paddleboards; underwater vessels, which include submarines, submersibles, unmanned underwater vehicles (UUVs), wet subs and diver propulsion vehicles; and amphibious vehicles, which include hovercraft, car boats, amphibious ATVs and seaplanes. Many of these watercraft have a variety of subcategories and are used for different needs and applications.
Design
The design of watercraft requires a tradeoff among internal capacity (tonnage), speed and seaworthiness. Tonnage is important for transport of goods, speed is important for warships and racing vessels, and the degree of seaworthiness varies according to the bodies of water on which a watercraft is used. Regulations apply to larger watercraft, to avoid foundering at sea and other problems. Design technologies include the use of computer modeling and ship model basin testing before construction.
Propulsion
Watercraft propulsion can be divided into five categories.
Water power is used by drifting with a river current or a tidal stream. An anchor or weight may be lowered to provide enough steerage way to keep in the best part of the current (as in drudging) or paddles or poles might be used to keep position.
Human effort is used through a pole pushing against the bottom of shallow water, or paddles or oars operating in the surface of the water.
Wind power is used by means of sails.
Towing is used, either from the land, such as the bank of a canal, with the motive power provided by draught animals, humans or machinery, or one watercraft may tow another.
Mechanical propulsion uses a motor whose power is derived from burning a fuel or stored energy such as batteries. This power is commonly converted into propulsion by propellers or by water jets, with paddle wheels being a largely historical method.
Any one watercraft might use more than one of these methods at different times or in conjunction with each other. For instance, early steamships often set sails to work alongside the engine power. Before steam tugs became common, sailing vessels would back and fill their sails to maintain a good position in a tidal stream while drifting with the tide in or out of a river. In a modern yacht, motor-sailing (travelling under the power of both sails and engine) is a common method of making progress, if only in and out of harbour.
| Technology | Naval transport | null |
301728 | https://en.wikipedia.org/wiki/IRAS | IRAS | The Infrared Astronomical Satellite (Dutch: Infrarood Astronomische Satelliet) (IRAS) was the first space telescope to perform a survey of the entire night sky at infrared wavelengths. Launched on 25 January 1983, its mission lasted ten months. The telescope was a joint project of the United States (NASA), the Netherlands (NIVR), and the United Kingdom (SERC). Over 250,000 infrared sources were observed at 12, 25, 60, and 100 micrometer wavelengths.
Support for the processing and analysis of data from IRAS was contributed by the Infrared Processing and Analysis Center (IPAC) at the California Institute of Technology. The Infrared Science Archive at IPAC currently holds the IRAS archive.
The success of IRAS led to interest in the 1985 Infrared Telescope (IRT) mission on the Space Shuttle and in the planned Shuttle Infrared Telescope Facility, which evolved into the Space Infrared Telescope Facility (SIRTF) and was in turn developed into the Spitzer Space Telescope, launched in 2003. The success of early infrared space astronomy also led to further missions, such as the Infrared Space Observatory (1990s) and the Hubble Space Telescope's NICMOS instrument.
Mission
IRAS was the first observatory to perform an all-sky survey at infrared wavelengths. It mapped 96% of the sky four times, at 12, 25, 60 and 100 micrometers, with resolutions ranging from 30 arcseconds at 12 micrometers to 2 arcminutes at 100 micrometers. It discovered about 350,000 sources, many of which still await identification. About 75,000 of these are believed to be starburst galaxies, still undergoing intense star formation. Many other sources are normal stars with disks of dust around them, possibly an early stage of planetary system formation. New discoveries included a dust disk around Vega and the first images of the Milky Way core.
IRAS's life, like that of most infrared satellites that followed, was limited by its cooling system. To work effectively in the infrared domain, a telescope must be cooled to cryogenic temperatures. In IRAS's case, an on-board supply of superfluid helium kept the telescope at cryogenic temperature, cooling the satellite by evaporation; IRAS was the first use of superfluids in space. The supply of liquid helium was depleted after 10 months, on 21 November 1983, causing the telescope temperature to rise and preventing further observations. The spacecraft continues to orbit the Earth.
IRAS was designed to catalog fixed sources, so it scanned the same region of sky several times. Jack Meadows led a team at Leicester University, including John K. Davies and Simon F. Green, which searched the rejected sources for moving objects. This led to the discovery of three asteroids, including 3200 Phaethon (an Apollo asteroid and the parent body of the Geminid meteor shower), six comets, and a huge dust trail associated with comet 10P/Tempel. The comets included 126P/IRAS, 161P/Hartley–IRAS, and comet IRAS–Araki–Alcock (C/1983 H1), which made a close approach to the Earth in 1983. Of the six comets IRAS found, four were long-period and two were short-period comets.
Discoveries
Overall, over a quarter million discrete targets were observed during its operations, both inside and beyond the Solar System. In addition, new objects were discovered including asteroids and comets.
The observatory made headlines briefly with the announcement on 10 December 1983 of the discovery of an "unknown object" at first described as "possibly as large as the giant planet Jupiter and possibly so close to Earth that it would be part of this solar system". Further analysis revealed that, of several unidentified objects, nine were distant galaxies and the tenth was "intergalactic cirrus". None were found to be Solar System bodies.
During its mission, IRAS (and later the Spitzer Space Telescope) detected odd infrared signatures around several stars. This led to the systems being targeted by the Hubble Space Telescope's NICMOS instrument between 1999 and 2006, but nothing was detected. In 2014, using new image processing techniques on the Hubble data, researchers discovered planetary disks around these stars.
IRAS discovered six comets, out of a total of 22 comet discoveries and recoveries that year. This was a substantial share for the period before the launch of SOHO in 1995, which would allow the discovery of many more comets in the following decade (it detected 1,000 comets in ten years).
Asteroid discoveries
Later surveys
Several infrared space telescopes have continued and greatly expanded the study of the infrared Universe, such as the Infrared Space Observatory launched in 1995, the Spitzer Space Telescope launched in 2003, and the Akari Space Telescope launched in 2006.
A next generation of infrared space telescopes began when NASA's Wide-field Infrared Survey Explorer launched on 14 December 2009 aboard a Delta II rocket from Vandenberg Air Force Base. Known as WISE, the telescope provided results hundreds of times more sensitive than IRAS at the shorter wavelengths; it also had an extended mission dubbed NEOWISE beginning in October 2010 after its coolant supply ran out.
A planned mission is NASA's Near-Earth Object Surveillance Mission (NEOSM), which is a successor to the NEOWISE mission.
2020 near-miss
On 29 January 2020, IRAS was expected to pass as close as 12 meters from the U.S. Air Force's Gravity Gradient Stabilization Experiment (GGSE-4) of 1967, another un-deorbited satellite still aloft; the 14.7-kilometer-per-second pass had an estimated collision risk of 5%. A further complication was that GGSE-4 was outfitted with an 18-meter-long stabilization boom in an unknown orientation, which might have struck IRAS even if the spacecraft's main body did not. Initial observations from amateur astronomers indicated that both satellites had survived the pass, and the California-based debris-tracking organization LeoLabs later confirmed that it had detected no new tracked debris following the incident.
| Technology | Space-based observatories | null |
301737 | https://en.wikipedia.org/wiki/Right%20whale | Right whale | Right whales are three species of large baleen whales of the genus Eubalaena: the North Atlantic right whale (E. glacialis), the North Pacific right whale (E. japonica) and the Southern right whale (E. australis). They are classified in the family Balaenidae with the bowhead whale. Right whales have rotund bodies with arching rostrums, V-shaped blowholes and dark gray or black skin. The most distinguishing feature of a right whale is the rough patches of skin on its head, which appear white due to parasitism by whale lice. Right whales are typically 13–17 m long and are exceptionally heavily built for their length.
All three species are migratory, moving seasonally to feed or give birth. The warm equatorial waters form a barrier that isolates the northern and southern species from one another, although the southern species, at least, has been known to cross the equator. In the Northern Hemisphere, right whales tend to avoid open waters and stay close to peninsulas and bays and on continental shelves, as these areas offer greater shelter and an abundance of their preferred foods. In the Southern Hemisphere, right whales feed far offshore in summer, but a large portion of the population occurs in near-shore waters in winter. Right whales feed mainly on copepods but also consume krill and pteropods. They may forage at the surface, in the water column, or even at the ocean bottom. During courtship, males gather into large groups to compete for a single female, suggesting that sperm competition is an important factor in mating behavior. Gestation lasts about a year, and calves are weaned at eight months old.
Right whales were a preferred target for whalers because of their docile nature, their slow surface-skimming feeding behaviors, their tendency to stay close to the coast, and their high blubber content (which makes them float when they are killed, and which produced high yields of whale oil). Although the whales no longer face pressure from commercial whaling, humans remain by far the greatest threat to these species: the two leading causes of death are being struck by ships and entanglement in fishing gear. Today, the North Atlantic and North Pacific right whales are among the most endangered whales in the world.
Naming
A common explanation for the name "right whale" is that they were regarded as the right ones to hunt, as they float when killed and often swim within sight of shore. They are quite docile and do not tend to shy away from approaching boats. As a result, they were hunted nearly to extinction during the active years of the whaling industry. However, this origin is questionable, a point Eric Jay Dolin makes in his history of American whaling.
For the scientific names, the generic name Eubalaena means "good or true whales", and specific names include glacialis ("ice") for North Atlantic species, australis ("southern") for Southern Hemisphere species, and japonica ("Japanese") for North Pacific species.
Taxonomy
The right whales were first classified in the genus Balaena in 1758 by Carl Linnaeus, who at the time considered all of the right whales (including the bowhead) to be a single species. Through the 19th and 20th centuries, the family Balaenidae was the subject of great taxonomic debate. Authorities repeatedly recategorized the three populations of right whale plus the bowhead whale as one, two, three or four species, either in a single genus or in two separate genera. In the early whaling days, they were all thought to be a single species, Balaena mysticetus. Eventually, it was recognized that bowheads and right whales were in fact different, and John Edward Gray proposed the genus Eubalaena for the right whale in 1864. Later, morphological factors such as differences in the skull shape of northern and southern right whales indicated at least two species of right whale – one in the Northern Hemisphere, the other in the Southern Ocean.
As recently as 1998, Rice, in his comprehensive and otherwise authoritative classification, listed just two species: Balaena glacialis (the right whales) and Balaena mysticetus (the bowheads).
In 2000, two studies of DNA samples from each of the whale populations concluded the northern and southern populations of right whale should be considered separate species. What some scientists found more surprising was the discovery that the North Pacific and North Atlantic populations are also distinct, and that the North Pacific species is more closely related to the southern right whale than to the North Atlantic right whale.
The authors of one of these studies concluded that these species have not interbred for between 3 million and 12 million years.
In 2001, Brownell et al. reevaluated the conservation status of the North Pacific right whale as a distinct species, and in 2002, the Scientific Committee of the International Whaling Commission (IWC) accepted Rosenbaum's findings, and recommended that the Eubalaena nomenclature be retained for this genus.
A 2007 study by Churchill provided further evidence to conclude that the three different living right whale species constitute a distinct phylogenetic lineage from the bowhead, and properly belong to a separate genus.
The following cladogram of the family Balaenidae serves to illustrate the current scientific consensus as to the relationships between the three right whales and the bowhead whale.
A cladogram is a tool for visualizing and comparing the evolutionary relationships between taxa; the point where each node branches is analogous to an evolutionary branching – the diagram can be read left-to-right, much like a timeline.
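The cladogram graphic itself does not survive in this text, but the topology it depicts follows from the relationships stated above: the bowhead branches off first, the North Atlantic right whale is sister to the remaining pair, and the North Pacific and southern right whales group together. The snippet below is only an illustrative rendering of that nesting, not output from any phylogenetics package.

```python
# The consensus Balaenidae topology described in the text, as a nested tuple:
# each tuple is a clade; deeper nesting means a more recent common ancestor.
balaenidae = (
    "Balaena mysticetus (bowhead)",
    ("Eubalaena glacialis (North Atlantic right whale)",
     ("Eubalaena japonica (North Pacific right whale)",
      "Eubalaena australis (southern right whale)")),
)

def show(node, indent=0):
    """Print each taxon indented by how deeply nested its clade is."""
    if isinstance(node, tuple):
        for child in node:
            show(child, indent + 1)
    else:
        print("  " * indent + node)

show(balaenidae)
```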
Whale lice, parasitic cyamid crustaceans that live off skin debris, offer further information through their own genetics. Because these lice reproduce much more quickly than whales, their genetic diversity is greater. Marine biologists at the University of Utah examined these louse genes and determined their hosts split into three species 5–6 million years ago, and these species were all equally abundant before whaling began in the 11th century.
The communities first split because of the joining of North and South America. The rising temperatures of the equator then created a second split, into northern and southern groups, preventing them from interbreeding.
"This puts an end to the long debate about whether there are three Eubalaena species of right whale. They really are separate beyond a doubt", Jon Seger, the project's leader, told BBC News.
Others
The pygmy right whale (Caperea marginata), a much smaller whale of the Southern Hemisphere, was until recently considered a member of the Family Balaenidae. However, it is not a right whale at all, and its taxonomy is presently in doubt. Most recent authors place the species in the monotypic Family Neobalaenidae,
but a 2012 study suggests that it is instead the last living member of the Family Cetotheriidae, a family previously considered extinct.
Yet another species of right whale was proposed by Emanuel Swedenborg in the 18th century: the so-called Swedenborg whale. The description of this species was based on a collection of fossil bones unearthed at Norra Vånga, Sweden, in 1705 and believed at the time to be those of giants. The bones were examined by Swedenborg, who realized they belonged to a species of whale. The existence of this species has been debated, and further remains attributed to it were discovered during the construction of a motorway in Strömstad, Sweden, in 2009.
For a time, scientific consensus considered Hunterius swedenborgii to be a North Atlantic right whale. A later DNA analysis, however, showed that the fossil bones are actually from a bowhead whale.
Characteristics
Adult right whales are typically 13–17 m long. They have extremely thick bodies, with a girth as much as 60% of total body length in some cases. They have large, broad and blunt pectoral flippers, and the deeply notched, smoothly tipped tail flukes span up to 40% of the body length. The North Pacific species is on average the largest of the three. The upper jaw of a right whale is slightly arched, and the lower lip is strongly curved. On each side of the upper jaw are 200–270 narrow baleen plates, covered in very thin hairs. Right whales have a distinctive wide V-shaped blow, caused by the widely spaced blowholes on the top of the head, and the blow rises high above the surface.
The skin is generally black with occasional white blotches on the body, while some individuals have mottled patterns. Unlike other whales, a right whale has distinctive callosities (roughened patches of skin) on its head. The callosities appear white due to large colonies of cyamids (whale lice).
Each individual has a unique callosity pattern. In 2016, a competitive effort resulted in the use of facial-recognition software to identify individual right whales from their callosities with about 87% accuracy. The primary role of callosities is thought to be protection against predators. Declines in right whale numbers might also have reduced the barnacle populations associated with them.
An unusually large 40% of their body weight is blubber, which is of relatively low density. Consequently, unlike many other species of whale, dead right whales tend to float. Many southern right whales are seen with rolls of fat behind the blowhole that the northern species often lack; these are regarded as a sign of better health resulting from a sufficient food supply, and such differences may have contributed to the vast difference in recovery between right whales in the Southern and Northern Hemispheres, beyond the direct impacts of humans.
The penis of a right whale can be up to 2.7 m long, and the testes, weighing up to 525 kg (1,157 lb), are by far the largest of any animal on Earth. The blue whale may be the largest animal on the planet, yet the testicles of the right whale are ten times the size of those of the blue whale. They also exceed predictions in terms of relative size, being six times larger than would be expected on the basis of body mass. Together, the testicles make up nearly 1% of the right whale's total body weight. This strongly suggests sperm competition is important in mating, which correlates with the fact that right whales are highly promiscuous.
Range and habitat
The three Eubalaena species inhabit three distinct areas of the globe: the North Atlantic species in the western North Atlantic Ocean, the North Pacific species in a band from Japan to Alaska, and the southern species in all areas of the Southern Ocean. The whales can cope only with the moderate temperatures found between 20° and 60° latitude. The warm equatorial waters form a barrier that prevents mixing between the northern and southern groups, with minor exceptions. Although the southern species in particular must travel across open ocean to reach its feeding grounds, the species is not considered pelagic. In general, right whales prefer to stay close to peninsulas and bays and on continental shelves, as these areas offer greater shelter and an abundance of their preferred foods.
Because the oceans are so large, it is very difficult to accurately gauge whale population sizes. Approximate figures:
400 North Atlantic right whales (Eubalaena glacialis) live in the North Atlantic;
23 North Pacific right whales (Eubalaena japonica) have been identified in the eastern North Pacific; and
15,000 southern right whales (Eubalaena australis) are spread throughout the southern part of the Southern Hemisphere.
North Atlantic right whale
Almost all of the 400 North Atlantic right whales live in the western North Atlantic Ocean. In northern spring, summer and autumn, they feed in areas off the Canadian and northeast U.S. coasts, in a range stretching from New York to Newfoundland. Particularly popular feeding areas are the Bay of Fundy and Cape Cod Bay. In winter, they head south towards Georgia and Florida to give birth. There has been a smattering of sightings further east over the past few decades; several sightings were made close to Iceland in 2003. These are possibly the remains of a virtually extinct eastern Atlantic stock, but examination of old whalers' records suggests they are more likely strays. However, a few sightings have been made between Norway, Ireland, Spain, Portugal, the Canary Islands and Italy; at least the Norwegian individuals came from the western stock.
North Pacific right whale
The North Pacific right whale appears to occur in two populations. The population in the eastern North Pacific/Bering Sea is extremely low, numbering about 30 individuals. A larger western population of 100–200 appears to be surviving in the Sea of Okhotsk, but very little is known about this population. Thus, the two northern right whale species are the most endangered of all large whales and two of the most endangered animal species in the world. Based on current population density trends, both species are predicted to become extinct within 200 years. The Pacific species was historically found in summer from the Sea of Okhotsk in the west to the Gulf of Alaska in the east, generally north of 50°N. Today, sightings are very rare and generally occur in the mouth of the Sea of Okhotsk and in the eastern Bering Sea. Although this species is very likely to be migratory like the other two species, its movement patterns are not known.
Southern right whale
The last major population review of southern right whales by the International Whaling Commission was in 1998. Researchers used data about adult female populations from three surveys (one in each of Argentina, South Africa and Australia) and extrapolated to include unsurveyed areas and estimated counts of males and calves (using available male:female and adult:calf ratios), giving an estimated 1997 population of 7,500 animals. More recent data from 2007 indicate those survey areas have shown evidence of strong recovery, with a population approaching twice that of a decade earlier. However, other breeding populations are still very small, and data are insufficient to determine whether they, too, are recovering.
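The arithmetic behind that extrapolation is easy to make concrete. The sketch below uses invented survey counts and ratios (none of them the researchers' actual inputs) purely to show how female counts are scaled up and the ratios applied.

```python
# Illustrative sketch of the population extrapolation described above.
# All inputs are hypothetical placeholders, not the actual survey data.

surveyed_females = {"Argentina": 800, "South Africa": 900, "Australia": 700}
unsurveyed_fraction = 0.20   # assumed share of adult females outside surveyed areas
males_per_female = 1.0       # assumed adult male:female ratio
calves_per_adult = 0.15      # assumed adult:calf ratio, as calves per adult

# Scale the surveyed female count up to cover unsurveyed areas...
total_females = sum(surveyed_females.values()) / (1 - unsurveyed_fraction)
# ...then apply the sex ratio and the calf ratio to estimate the whole population.
total_adults = total_females * (1 + males_per_female)
total_population = total_adults * (1 + calves_per_adult)

print(round(total_population))  # 6900 with these made-up inputs
```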
The southern right whale spends the summer months in the far Southern Ocean feeding, probably close to Antarctica. It migrates north in winter for breeding, and can be seen around the coasts of Argentina, Australia, Brazil, Chile, Mozambique, New Zealand, South Africa and Uruguay. The South American, South African and Australasian groups apparently intermix very little, if at all, because of the strong fidelity of mothers to their feeding and calving grounds. The mother passes these instincts to her calves.
Life history
Right whales are slow swimmers. However, they are highly acrobatic and frequently breach (jump clear of the sea surface), tail-slap and lobtail.
Diet and predation
The right whales' diets consist primarily of zooplankton, primarily the tiny crustaceans called copepods, as well as krill, and pteropods, although they are occasionally opportunistic feeders. As with other baleens, they feed by filtering prey from the water. They swim with an open mouth, filling it with water and prey. The whale then expels the water, using its baleen plates to retain the prey. Prey must occur in sufficient numbers to trigger the whale's interest, be large enough that the baleen plates can filter it, and be slow enough that it cannot escape. The "skimming" may take place on the surface, underwater, or even at the seabed, indicated by mud occasionally observed on right whales' bodies.
The right whales' two known predators are humans and orcas. When danger lurks, a group of right whales may cluster into a circle, and thrash their outwards-pointing tails. They may also head for shallow water, which sometimes proves to be an ineffective defense. Aside from the strong tails and massive heads equipped with callosities, the sheer size of this animal is its best defense, making young calves the most vulnerable to orca and shark attacks.
Vocalization and hearing
Vocalizations made by right whales are not elaborate compared to those of other whale species. The whales make groans, pops and belches, typically at frequencies around 500 Hz. The purpose of the sounds is not known but may be a form of communication between whales within the same group. Northern right whales responded to sounds similar to police sirens, which are of much higher frequency than their own calls; on hearing them, they moved rapidly to the surface. The research was of particular interest because northern right whales ignore most sounds, including those of approaching boats. Researchers speculate this information may be useful in attempts to reduce the number of ship–whale collisions or to encourage the whales to surface so that they are easier to observe.
Courtship and reproduction
During the mating season, which can occur at any time in the North Atlantic, right whales gather into "surface-active groups" of as many as 20 males consorting with a single female. The female lies with her belly toward the surface while the males stroke her with their flippers or try to keep her underwater. The males do not compete as aggressively against each other as male humpbacks do. The female may not become pregnant, but she is still able to assess the condition of potential mates. The mean age of first parturition in North Atlantic right whales is estimated at between 7.5 and 9 years. Females breed every 3–5 years; the most commonly observed calving interval is 3 years, but it may vary from 2 up to 21 years due to multiple factors.
Both mating and calving take place during the winter months. Calves are born after a gestation period of about one year. The right whale grows rapidly in its first year, typically doubling in length. Weaning occurs after eight months to one year, and the growth rate in later years is not well understood; it may depend heavily on whether a calf stays with its mother for a second year.
Different congregation areas within the same region may serve different purposes for the whales.
Lifespan
Very little is known about the life span of right whales. One of the few well-documented cases is of a female North Atlantic right whale that was photographed with a calf in 1935, then photographed again in 1959, 1980, 1985 and 1992; consistent callosity patterns confirmed it was the same animal. She was last photographed in 1995 with a seemingly fatal head wound, presumably from a ship strike. Even by conservative estimates (e.g., assuming she was a new mother who had only just reached sexual maturity in 1935), she would have been nearly 70 years old in 1995, and she may have been considerably older. Research on the closely related bowhead whale, individuals of which have been estimated to exceed 210 years of age, suggests such lifespans are not uncommon and may even be exceeded.
Relationship to humans
Whaling
In the early centuries of shore-based whaling before 1712, right whales were virtually the only catchable large whales, for three reasons:
They often swam close to shore where they could be spotted by beach lookouts, and hunted from beach-based whaleboats.
They are relatively slow swimmers, allowing whalers to catch up to them in their whaleboats.
Once killed by harpoons, they were more likely to float, and thus could be retrieved. However, some did sink when killed (10–30% in the North Pacific) and were lost unless they later stranded or surfaced.
Basque people were the first to hunt right whales commercially, beginning as early as the 11th century in the Bay of Biscay. They initially sought oil, but as meat preservation technology improved, the animal was also used for food. Basque whalers reached eastern Canada by 1530 and the shores of Todos os Santos Bay (in Bahia, Brazil) by 1602. The last Basque voyages were made before the Seven Years' War (1756–1763). All attempts to revive the trade after the war failed. Basque shore whaling continued sporadically into the 19th century.
"Yankee whalers" from the new American colonies replaced the Basques. Setting out from Nantucket, Massachusetts, and Long Island, New York, they took up to a hundred animals in good years. By 1750, the commercial hunt of the North Atlantic right whale was essentially over. The Yankee whalers moved into the South Atlantic before the end of the 18th century. The southernmost Brazilian whaling station was established in 1796, in Imbituba. Over the next hundred years, Yankee whaling spread into the Southern and Pacific Oceans, where the Americans were joined by fleets from several European nations. The beginning of the 20th century saw much greater industrialization of whaling, and the harvest grew rapidly. According to whalers' records, by 1937 there had been 38,000 takes in the South Atlantic, 39,000 in the South Pacific, 1,300 in the Indian Ocean, and 15,000 in the North Pacific. The incompleteness of these records means the actual take was somewhat higher.
As it became clear the stocks were nearly depleted, the world banned right whaling in 1937. The ban was largely successful, although violations continued for several decades. Madeira took its last two right whales in 1968. Japan took twenty-three Pacific right whales in the 1940s and more under scientific permit in the 1960s. Illegal whaling continued off the coast of Brazil for many years, and the Imbituba land station processed right whales until 1973. The Soviet Union illegally took at least 3,212 southern right whales during the 1950s and '60s, although it reported taking only four.
Whale watching
The southern right whale has made Hermanus, South Africa, one of the world centers for whale watching. During the winter months (July–October), southern right whales come so close to the shoreline, visitors can watch whales from strategically placed hotels. The town employs a "whale crier" (cf. town crier) to walk through the town announcing where whales have been seen. In Brazil, Imbituba in Santa Catarina has been recognized as the National Right Whale Capital and holds annual Right Whale Week celebrations in September when mothers and calves are more often seen. The old whaling station there has been converted to a museum dedicated to the whales. In winter in Argentina, Península Valdés in Patagonia hosts the largest breeding population of the species, with more than 2,000 animals catalogued by the Whale Conservation Institute and Ocean Alliance.
Conservation
Both the North Atlantic and North Pacific species are listed as a "species threatened with extinction which [is] or may be affected by trade" (Appendix I) by CITES, and as "endangered" by the IUCN Red List. In the United States, the National Marine Fisheries Service (NMFS), a subagency of the National Oceanic and Atmospheric Administration (NOAA) has classified all three species as "endangered" under the Endangered Species Act. Under the Marine Mammal Protection Act, they are listed as "depleted".
The southern right whale is listed as "endangered" under the Australian Environment Protection and Biodiversity Conservation Act, as "nationally endangered" under the New Zealand Threat Classification System, as a "natural monument" by the Argentine National Congress, and as a "State Natural Monument" under the Brazilian National Endangered Species List.
The U.S. and Brazil added new protections for right whales in the 2000s to address the two primary hazards. While environmental campaigners were, as reported in 2001, pleased about the plan's positive effects, they sought to force the US government to do more. In particular, they advocated speed limits for ships near US ports at times of high right whale presence. Citing concerns about excessive trade disruption, the government did not institute greater protections. The Defenders of Wildlife, the Humane Society of the United States and the Ocean Conservancy sued the NMFS in September 2005 for "failing to protect the critically endangered North Atlantic Right Whale, which the agency acknowledges is 'the rarest of all large whale species' and which federal agencies are required to protect by both the Marine Mammal Protection Act and the Endangered Species Act", demanding emergency protection measures.
The southern right whale, listed as "endangered" by CITES and "lower risk – conservation dependent" by the IUCN, is protected in the jurisdictional waters of all countries with known breeding populations (Argentina, Australia, Brazil, Chile, New Zealand, South Africa and Uruguay). In Brazil, a federal Environmental Protection Area covering a long stretch of coastline in Santa Catarina State was established in 2000 to protect the species' main breeding grounds in Brazil and promote whale watching.
On February 6, 2006, NOAA proposed its Strategy to Reduce Ship Strikes to North Atlantic Right Whales. The proposal, opposed by some shipping interests, limited ship speeds during calving season. The proposal was made official when on December 8, 2008, NOAA issued a press release that included the following:
Effective January 2009, ships 65 feet (19.8 m) or longer are limited to 10 knots in waters off New England when whales begin gathering in this area as part of their annual migration. The restriction extends to within 20 nautical miles (37 km) of major mid-Atlantic ports.
The speed restriction applies in waters off New England and the southeastern US, where whales gather seasonally:
Southeastern US from St. Augustine, Florida to Brunswick, Georgia from Nov 15 to April 15
Mid-Atlantic U.S. areas from Rhode Island to Georgia from Nov 1 to April 30.
Cape Cod Bay from Jan 1 to May 15
Off Race Point at the northern end of Cape Cod from March 1 to April 30
Great South Channel of New England from April 1 to July 31
Temporary voluntary speed limits in other areas or times when a group of three or more right whales is confirmed
Scientists would assess the rule's effectiveness before the rule expires in 2013.
In 2020, NOAA published its assessment and found that since the speed rule was adopted, the total number of documented deaths from vessel strikes had decreased, but serious and non-serious injuries had increased. A report by the organization Oceana found that between 2017 and 2020, non-compliance with the rule reached close to 90% in mandatory speed zones and neared 85% in voluntary areas.
Threats
The leading cause of death among the North Atlantic right whale, which migrates through some of the world's busiest shipping lanes while journeying off the east coast of the United States and Canada, is being struck by ships. At least sixteen ship-strike deaths were reported between 1970 and 1999, and probably more remain unreported. According to NOAA, twenty-five of the seventy-one right whale deaths reported since 1970 resulted from ship strikes.
A second major cause of morbidity and mortality in the North Atlantic right whale is entanglement in plastic fishing gear. Right whales ingest plankton with wide-open mouths, risking entanglement in any rope or net fixed in the water column. Rope wraps around their upper jaws, flippers and tails. Some are able to escape, but others remain tangled. Whales can be successfully disentangled, if observed and aided. In July 1997, the U.S. NOAA introduced the Atlantic Large Whale Take Reduction Plan, which seeks to minimize whale entanglement in fishing gear and record large whale sightings in an attempt to estimate numbers and distribution.
In 2012, the U.S. Navy proposed to create a new undersea naval training range immediately adjacent to northern right whale calving grounds in shallow waters off the Florida/Georgia border. Legal challenges by leading environmental groups including the Natural Resources Defense Council were denied in federal court, allowing the Navy to proceed. These rulings were made despite the extremely low numbers (as low as 313 by some estimates) of right whales in existence at this time, and a very poor calving season.
| Biology and health sciences | Baleen whales | Animals |
301743 | https://en.wikipedia.org/wiki/Southern%20right%20whale | Southern right whale | The southern right whale (Eubalaena australis) is a baleen whale, one of three species classified as right whales belonging to the genus Eubalaena. Southern right whales inhabit oceans south of the Equator, between the latitudes of 20° and 60° south. In 2009 the global population was estimated to be approximately 13,600.
Taxonomy
Right whales were first classified in the genus Balaena in 1758 by Carl Linnaeus, who at the time considered all right whales (including the bowhead) to be a single species. In the 19th and 20th centuries the family Balaenidae was the subject of great taxonomic debate. Authorities repeatedly recategorised the three populations of right whale plus the bowhead whale as one, two, three or four species, either in a single genus or in two separate genera. In the early whaling days, they were all thought to be a single species, Balaena mysticetus.
The southern right whale was initially described as Balaena australis by Desmoulins in 1822. Eventually, it was recognised that bowheads and right whales were different, and John Edward Gray proposed the genus Eubalaena for the right whale in 1864. Later, morphological factors such as differences in the skull shape of northern and southern right whales indicated at least two species of right whale—one in the Northern Hemisphere, the other in the Southern Ocean. As recently as 1998, Rice, in his comprehensive and otherwise authoritative classification, Marine mammals of the world: systematics and distribution, listed just two species: Balaena glacialis (all of the right whales) and Balaena mysticetus (the bowheads).
In 2000, Rosenbaum et al. disagreed, based on data from their genetic study of DNA samples from each of the whale populations. Genetic evidence now shows that the northern and southern populations of right whale have not interbred for between 3 million and 12 million years, confirming the southern right whale as a distinct species. The North Pacific and North Atlantic populations are also distinct, with the North Pacific right whale being more closely related to the southern right whale than to the North Atlantic right whale. Genetic differences between E. japonica (North Pacific) and E. australis (Southern Hemisphere) are much smaller than those found between populations of other baleen whale species in different ocean basins.
It is believed that the right whale populations first split because of the joining of North and South America when the Panama isthmus formed. The rising temperatures at the equator then created a second split, into the northern and southern groups, preventing them from interbreeding.
In 2002, the Scientific Committee of the International Whaling Commission (IWC) accepted Rosenbaum's findings, and recommended that the Eubalaena nomenclature be retained for this genus.
The cladogram is a tool for visualising and comparing the evolutionary relationships between taxa. The point where a node branches off is analogous to an evolutionary branching – the diagram can be read left-to-right, much like a timeline. The following cladogram of the family Balaenidae serves to illustrate the current scientific consensus as to the relationships between the southern right whale and the other members of its family.
Other junior synonyms for E. australis have included B. antarctica (Lesson, 1828), B. antipodarum (Gray, 1843), Hunterus temminckii (Gray, 1864), and E. glacialis australis (Tomilin, 1962) (see side panel for more synonyms).
Description
Like other right whales, the southern right whale is readily distinguished by the callosities on its head, a broad back without a dorsal fin, and a long arching mouth that begins above the eye. Its skin is very dark grey or black, occasionally with some white patches on the belly. The right whale's callosities appear white due to large colonies of cyamids (whale lice). It is almost indistinguishable from the closely related North Atlantic and North Pacific right whales, displaying only minor skull differences. It may have fewer callosities on its head than the North Atlantic species and more on its lower lips than the two northern species. The biological function of callosities is unclear, although protection against predators has been put forward as the primary role.
Adult females are slightly smaller than those of the other right whale species in the Northern Hemisphere. The testicles of right whales are likely to be the largest of any animal, each weighing around 500 kg. This suggests that sperm competition is important in the mating process.
The proportion and number of pale, mottle-coloured individuals is notable in this species compared with the two species in the Northern Hemisphere. Some whales remain white even after growing up.
The median lifespan is around 73 years, with some individuals surviving to over 130 years.
Behaviour
Like other right whales, they are rather active on the water surface and curious towards human vessels. Southern rights appear to be more active and tend to interact with humans more than the two northern species. One behaviour unique to the southern right whale, known as tail sailing, is the use of elevated flukes to catch the wind, remaining in the same position for a considerable time. It appears to be a form of play and is most commonly seen off the coasts of Argentina and South Africa. Some other species, such as humpback whales, are also known to display this behaviour. Right whales are often seen interacting with other cetaceans, especially humpback whales and dolphins. There have been records of southern rights and humpbacks thought to be involved in mating activities off Mozambique and along Bahia, Brazil.
A female southern right whale was spotted off the coast of Western Australia accompanying a lone humpback whale calf, although the actual relationship of this pair is unclear.
Reproduction
Southern right whales display strong maternal fidelity to their calving grounds, to which calving females typically return at three-year intervals. The most commonly observed calving interval is 3 years, but intervals can range from 2 to 21 years. Calving takes place between June and November in calving grounds between 20 and 30° S.
In Australia, southern right whales have shown a preference for calving grounds along coastlines with high wave energy, such as the Head of the Bight. Here, the sound of breaking waves may mask the sound of the whales' presence, and so protect infants and calving cows from predators such as killer whales. Deep waters alongside shallower calving grounds may serve as training grounds for calves to build up their stamina ahead of migration.
Females give birth to their first calf when they are between eight and ten years old. A single calf is born after a gestation period of one year. The calf usually remains with its mother during the first year of its life, during which time it doubles in length.
Southern right whales have been observed nursing unrelated orphans on occasion.
Feeding
Like right whales in other oceans, southern right whales feed almost exclusively on zooplankton, particularly krill. They feed just beneath the water's surface, holding their mouths partly open and skimming continuously while swimming. They strain the water out through their long baleen plates to capture their prey. A southern right whale's baleen is made up of 220–260 plates, which can measure more than two metres in length.
Population and distribution
The global population of southern right whales was estimated at 13,611 in 2009. An estimate published by National Geographic in October 2008 put the southern right whale population at 10,000, and a population estimate of 7,000 followed a March 1998 IWC workshop. Researchers used population data from three surveys of adult females in the 1990s (Argentina, South Africa and Australia), extrapolated to include the population of unsurveyed areas, and used known male-to-female and adult-to-calf ratios to estimate and include the numbers of males and calves. Recovery of the overall population is predicted to reach less than 50% of its pre-whaling state by 2100, owing to the heavy impact of whaling and the species' slow recovery rate. Since hunting ceased, the population is estimated to have grown by 7% a year.
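To see what a sustained 7% annual growth rate would imply, a naive compound-growth projection from the 2009 figure can be sketched as below; this ignores density dependence and every other real-world constraint, so it illustrates only the arithmetic of the quoted rate.

```python
# Naive compound-growth projection from the 2009 estimate quoted above.
# Real population models are far more elaborate; this shows only the
# arithmetic implied by a constant 7% annual growth rate.

population_2009 = 13_611
annual_growth_rate = 0.07

for year in (2019, 2029):
    projected = population_2009 * (1 + annual_growth_rate) ** (year - 2009)
    print(year, round(projected))  # prints 2019 26775, then 2029 52670
```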
The southern right whale spends the summer months in the far Southern Ocean feeding, probably close to Antarctica, though if the opportunity arises feeding can occur even in temperate waters, such as off Buenos Aires. It migrates north in winter for breeding and can be seen off the coasts of Argentina, Australia, Brazil, Chile, Namibia, Mozambique, Peru, Tristan da Cunha, Uruguay, Madagascar, New Zealand and South Africa; whales have also been known to winter in sub-Antarctic regions. The South American, South African and Australasian groups appear to intermix very little, if at all, because maternal fidelity to feeding and calving habitats is very strong. The mother also passes these choices on to her calves.
Right whales do not normally cross the warm equatorial waters to connect with the other species and (inter)breed: their thick layers of insulating blubber make it difficult for them to dissipate their internal body heat in tropical waters. However, historical records and unconfirmed modern sightings suggest that E. australis may occasionally transit equatorial waters.
Whaling records for the hemisphere include a whaling ground in the central northern Indian Ocean, and there have been recent sightings in near-equatorial regions. If the sighting off Kiribati was truly of E. australis, the species may cross the Equator on rare occasions, and its original distribution may have been much broader and extended further north than is currently believed. A stranding of a 21.3 m (71 feet) right whale at Gajana, northwestern India, was reported in November 1944, but the true identity of this animal is unclear.
Aside from human impacts on the whales and their environments, their distribution and choice of residence may be strongly influenced by the presence of natural predators; similar trends are probable for related species.
Many locations throughout the Southern Hemisphere were named after current or former presences of southern rights, including Walvis Bay, Punta Ballena, Right Whale Bay, Otago Harbour, Whangarei Harbour, Foveaux Strait, South Taranaki Bight, Moutohora Island and Wineglass Bay.
Africa
South Africa
Hermanus in South Africa has become known as a centre for whale watching. During the Southern Hemisphere winter months (June – October) the southern right whales migrate to the coastal waters of South Africa, with more than 100 whales known to visit the Hermanus area. Whilst in the area, the whales can be seen with their young as they come to Walker Bay to calve and mate. Many behaviours such as breaching, sailing, lobtailing, or spyhopping can be witnessed. In False Bay whales can be seen from the shore from July to October while both Plettenberg Bay and Algoa Bay are also home to the southern right whales from July to December. They can be viewed from land as well as by boat with licensed operators conducting ocean safaris throughout the year.
Recent increases in the numbers of whales visiting the north-eastern part of South Africa, the so-called Dolphin Coast around Ballito and off Umdloti Beach, indicate that the whales' normal range is expanding and that re-colonising of historical habitats will likely continue as more whales migrate further north.
Western Africa
In Namibia, most confirmed whales are restricted to the area south of Lüderitz, on the southwestern coast. Only a handful of animals venture further north to historical breeding grounds such as Walvis Bay, but their numbers are slowly increasing. Until illegal hunting ceased, whales were rare along Namibian shores, with no sightings recorded north of the Orange River until 1971. Calving was first confirmed as recently as the 1980s.
Historical records suggest that this whale's regular range could have once reached further northwards up the coasts of Cape Fria (northern Namibia) and Angola as far as Baia dos Tigres (Tiger Bay).
Whaling is known to have been carried out off the coast of Gabon, for example at Cape Lopez, and there have been a few confirmed and unconfirmed sightings including one by Jim Darling, a renowned whale researcher.
Eastern Africa
Southern right whales have been spotted in very small numbers off Mozambique and Madagascar. Whales were historically seen in large numbers at various locations such as off the coast of Durban, in Delagoa/Maputo Bay, Inhaca Island, Ponta do Ouro, and around the Bazaruto Archipelago. The first sighting off Mozambique since the end of whaling was in 1997. In recent years, more whales seem to migrate further north to calve, such as at Île Sainte-Marie, Antongil Bay, Fort Dauphin, Toliara, Anakao, Andavadoaka, and Antsiranana Bay at Madagascar's northern tip. Infrequent sightings have been confirmed off the island of Mayotte. Whales were historically taken off the coast of Tanzania, and may still be present occasionally around Zanzibar.
Mid–South Atlantic
Due to illegal whaling by the USSR, the recovery of many stocks, including the population off Tristan da Cunha and adjacent areas such as Gough Island, has been severely hindered, resulting in relatively small numbers of visiting animals.
Based on catch records and recent observations, right whales may be seen as far north as the islands of Saint Helena and Ascension Island.
South America
Brazil
In Brazil, more than 300 individuals have been catalogued through photo identification (using head callosities) by the Brazilian Right Whale Project, maintained jointly by Petrobras (the Brazilian state-owned oil company) and the conservation group the International Wildlife Coalition. The State of Santa Catarina hosts a concentration of breeding and calving right whales from June to November, and females from this population also calve off Argentine Patagonia and Uruguay. In recent years, possibly owing to habitat change caused by human activities and to conflicts with local fisheries, the number of whales visiting the coast has been decreasing. Sightings in locations other than Santa Catarina and Rio Grande do Sul remain sporadic; these include Cidreira, Rio de Janeiro coasts such as Sepetiba Bay, Cabo Frio and Macaé, Prado in Bahia, Castelhanos Bay on Ilhabela, São Paulo coasts such as within Ilha Anchieta State Park and Honey Island, the bays and estuaries of Paranaguá and Superagui National Park in Paraná, and even the lagoon of Lagoa dos Patos. Recent studies also show a decrease in the number of sightings along the southeastern Brazilian coast, which includes the highly urbanized states of São Paulo and Rio de Janeiro.
Further north, small numbers of whales migrate every year to winter or calve in Bahia, in particular at the Abrolhos Archipelago. Here, certain individuals are recorded returning at intervals of 3 or 4 years. Whaling records including those prior to Maury and Townsend indicate that right whales were once more frequent visitors further north, for example at Salvador, Bahia.
Argentina
Argentina hosts the world's largest breeding population of southern right whales at Península Valdés, Chubut Province, with over 2,000 individuals estimated to gather in the gulfs of the peninsula during the breeding season. The whales are considered a "natural monument" protected under Argentine law, and a well-developed whale-watching tourism industry has grown up around them.
During the 2012 annual meeting of the International Whaling Commission's Scientific Committee, data was presented regarding the continued phenomenon of southern right whale strandings and high rate of mortality at Península Valdés. Between 2003 and 2011, a total of 482 dead right whales were recorded at Península Valdés. There were at least 55 whale deaths in 2010, and 61 in 2011. As in previous years, the vast majority of strandings were calves of the season.
There have been increasing sightings in various other locations in recent years, such as on Golfo San Jorge, Tierra del Fuego, Puerto Deseado, Mar del Plata, Miramar, Buenos Aires, and Bahía Blanca.
Uruguay
In Uruguay, coastal areas such as Punta del Este serve as congregation sites for whales during the breeding season, but these are not likely to be calving grounds. In 2013, the Uruguayan parliament approved the creation of a whale sanctuary off Latin America to aid the recovery of the population; the creation of this protected area had been blocked for nearly a decade by pro-whaling nations such as Japan.
Chile and Peru
For the critically endangered Chile/Peru population, the Cetacean Conservation Center (CCC) has been running a dedicated right whale programme. This population, containing no more than 50 individuals, is threatened by expanding shipping lanes and fishing industries. A total of 124 sightings were recorded during the period 1964–2008. Aside from records of vagrants, Peru's coastline possibly hosts one of the northernmost confirmed ranges of the species, along with Gabon, Senegal, Tanzania, the Brazilian coast, Madagascar, the Indian Ocean, western Australia, the Kermadec Islands, and tropical waters including the South Pacific islands. The Alfaguara project targeting cetaceans off Chiloé may extend to this species in the future, since calving has been confirmed in the Chiloé Archipelago. The foraging grounds of this population are currently unknown, but possibly include Chiloé and waters south of Caleta Zorra down to the southern fjords, from the Gulf of Penas to the Beagle Channel, although confirmations in the Beagle Channel are few. As the number of sightings increases, hopes are rising for a new tourism industry on the eastern side of the Strait of Magellan, especially near Cape Virgenes and Punta Dungeness. It is unknown whether these increases are due to re-colonisation by whales from the Patagonian population.
Occurrences of brindle individuals have been confirmed from this population as well.
Oceania
Historically, populations of southern right whales in Oceanian regions were robust. Early settlers of Wellington, New Zealand, and the River Derwent in Tasmania complained that sounds of cavorting whales kept them awake at night. In July 1804, clergyman Robert Knopwood claimed that in crossing the River Derwent, "we passed so many whales that it was dangerous for the boat to go up the river unless you kept very near the shore".
By the 1890s southern right whales had been brought to the brink of extinction, with over 25,000 whales killed in Australia and New Zealand.
Studies of population structure and mating systems have shown that the southwest Australian and New Zealand populations are genetically differentiated. The results of satellite tracking suggest that there are at least some interactions between populations in Australia and New Zealand, but the extent of this is unknown. The two groups may share migratory corridors and calving grounds. The return of southern right whales to the Derwent River and other parts of Australia in recent decades is a sign that they are slowly recovering from their earlier exploitation to near extinction.
Australia
Southern right whales in Australian waters show a higher rate of recovery, having increased from 2,100 whales in 2008 to 3,500 in 2010. Two genetically distinct groups inhabit Australian waters: the southwestern population of about 2,900 whales, which as of 2012 held the majority of the overall Australian population, and the critically endangered southeastern group, numbering only a few dozen to 300 individuals.
South Australia
Right whales can be found in many parts of southern Australia; the largest population is found at the Head of the Bight in South Australia, a sparsely populated area south of the middle of the Nullarbor Plain. Over 100 individuals are seen there annually from June to October. Visitors can view the whales from cliff-top boardwalks and lookouts, with whales swimming almost directly below, or by taking a scenic flight over the marine park. A more accessible South Australian location for viewing whales is Encounter Bay, where the whales can be seen just off the beaches of the Fleurieu Peninsula, centred around the surfing town of Middleton. The whales have established a newer nursery ground near the Eyre Peninsula, especially at Fowlers Bay. Numbers at these locations are much smaller than in the Bight, averaging a couple of whales per day, but there have been regular sightings of more than ten whales at a time off Basham Beach, near Middleton. The South Australian Whale Centre at Victor Harbor has information on the history of whaling and whale-watching in the area, and maintains an online database of whale sightings.
In June 2021 a female gave birth off Christies Beach, a southern suburb of Adelaide, and remained in the shallows off the beach for some time, attracting large crowds.
Victoria and Tasmania
Whale numbers are scarcer in Victoria, where the only established breeding ground, used each year by very small numbers of whales, is at Warrnambool. However, as the whales seem to be increasing in number generally but are not showing any dramatic increase at Warrnambool, they may be extending their wintering habitats into other areas of Victoria, where sightings are slowly increasing. These areas include waters around Melbourne, such as Port Phillip Bay, along Waratah Bay, at Ocean Grove, on the Mornington Peninsula, in Apollo Bay, on the Gippsland coast, and at Wilsons Promontory.
Whale numbers in Tasmania are relatively small; however, sightings have increased in recent years. Some whales migrate through Tasmanian waters, while others remain throughout the wintering season.
Other states and territories
Waters off the coasts of Western Australia, New South Wales, and Queensland have all historically been inhabited by whales. Their historical range was much wider than it is today, reaching around the southern coast of the continent and extending to the Abrolhos Islands, Exmouth and Shark Bay on the west coast, and as far north as Hervey Bay, Moreton Bay and the Great Barrier Reef on the east coast. Today, the east-coast population remains endangered and very small (in the low tens), which limits re-colonization, but increases have been confirmed in many areas such as Port Jackson, Port Stephens, Twofold Bay, Jervis Bay, Broulee, Moruya River, Narooma, and Byron Bay. Twelve foraging areas have been officially designated by the Australian government.
In sub-Antarctic regions, the numbers of whales visiting long-used habitats differ drastically by location. The population is recovering well at the New Zealand Subantarctic Islands, while recovery has been less successful at Macquarie Island.
It is not known whether Australian populations will in future re-colonise historical oceanic habitats such as Norfolk Island and Lord Howe Island with the Lord Howe Seamount Chain (historically known to whalers as the "Middle Ground").
New Zealand
The current population of right whales in New Zealand waters is difficult to establish, but studies by the Department of Conservation and sightings reported by locals have helped to build up a better picture. The pre-exploitation size of the New Zealand group is estimated at between 28,800 and 47,100 whales, and some 35,000–41,000 were caught between 1827 and 1980. The number of whales surviving commercial and illegal whaling operations is estimated to have fallen to just 110 (around 30 of them females) by 1915. As a result of this steep decline, the population of southern right whales in this region has passed through a population bottleneck and suffers from low genetic diversity.
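That fall, from an estimated 28,800–47,100 whales to about 110, represents a decline of more than 99.5%.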
The population at the sub-Antarctic Auckland Islands is showing a remarkable recovery, but continues to have some of the lowest genetic diversity in the world. In the Campbell Islands, recovery is slower; the population there is estimated to have dropped to as low as 20 individuals after the Second World War. There were no confirmed sightings or strandings of right whales for 36 years until 1963, when four separate sightings, including a cow-calf pair, were made over a wide area. Remnants of the sub-Antarctic populations were reported in the 1980s and re-discovered in the 1990s.
Today, the majority of right whales congregate at the Auckland and Campbell Islands, where they form exceptionally dense and limited congregations including mating adults and calving females. In the waters around Port Ross up to 200 whales may winter at the same time. It is notable that whales of all age groups are present in this small area annually, not only using them as feeding and summering grounds but also for wintering, breeding, and calving during harsh, cold periods. Low genetic diversity as a result of population decline has caused changes in skin coloration amongst this group.
Scientists used to believe there was a very small remnant population of southern right whales inhabiting New Zealand's main islands (North and South Island), estimated to contain 11 reproductive females. In winter, whales migrate north to New Zealand waters and large concentrations occasionally visit the southern coasts of South Island. Bay areas along Foveaux Strait from Fiordland region to northern Otago are important breeding habitats for right whales, especially Preservation, Chalky Inlets, Te Waewae Bay, and Otago Peninsula. Calving activities are observed all around New Zealand, but with more regularity around North Island shores from the Taranaki coast in the west to Hawke's Bay, Bay of Plenty in the east, and areas in Hauraki Gulf such as Firth of Thames or Bay of Islands in the north.
There are various parts of the nation where large numbers of whales were seen historically, but sightings are less common nowadays. These areas include the Marlborough Region, especially from Clifford Bay and Cloudy Bay to Port Underwood, Golden Bay, Awaroa Bay, and coastlines on West Coast and Hokianga Harbour in Northland. Other than a handful of confirmed observations, very little information is available for modern migrations to historical oceanic habitats of Kermadec Islands and Chatham Islands. The northernmost sighting recorded historically was at 27°S.
A 2009 study revealed that the right whale populations from New Zealand's main islands and the sub-Antarctic islands interbreed, though it is still unknown whether the two stocks originally came from a single population. Feeding areas in pelagic waters remain unclear, although congregations have been confirmed along the southern edge of the Chatham Rise.
Some Australian ranges are located close to the ranges of New Zealand groups (Norfolk Island, Macquarie Island). It is unclear whether the whales occupying these Australian ranges, historically or currently, originated from the New Zealand groups.
Other
Very little is known about the presence and recovery status of southern right whales around oceanic islands and in offshore waters beyond the areas mentioned above. Right whales' historical range was much greater than today: during the whaling era of the 19th century, whales were known to occur in lower-latitude areas such as around the Pacific Islands, off the Gilbert Islands (now Kiribati), and also to frequent the lower latitudes of the central Indian Ocean.
It is unclear whether right whales were historically, or are currently, distributed across the parts of the hemisphere that lack large land masses, reaching more pelagic islands such as the Alejandro Selkirk and Robinson Crusoe Islands, Hanga Roa, Pitcairn, the Galápagos Islands, and Easter Island.
Populations among sub-Antarctic islands in the Scotia Sea, such as South Georgia, the South Sandwich Islands and the Falkland Islands, were severely damaged and show slower recovery today. Antarctic distributions are difficult to establish owing to low levels of sightings around oceanic islands in these areas, including Elephant Island.
Indian Ocean
Historically, there were known to be populations which summered around the Crozet Islands and the Kerguelen Islands and migrated to La Roche Godon, Île Saint-Paul, Île Amsterdam, and the central Indian Ocean. They may be distinct from the population seen on the Mozambique coast. Repopulation of these areas of the Indian Ocean is likely happening at even lower rates than elsewhere. Sightings in modern times have been fewer around Crozet, Réunion, Mauritius, the Marion Islands, Île Amsterdam, and Kerguelen.
Killings of these whales have been recorded in the central Indian Ocean near the equator, especially in the area between Diego Garcia, the Egmont Islands, and the Great Chagos Bank in the west, and the Cocos (Keeling) Islands in the east. The range of whales in the Indian Ocean is comparable to that of other populations around South America, Africa, and the South Pacific islands including Kiribati, the northernmost reach of all the populations known today.
Whaling
By 1750 the North Atlantic right whale was as good as extinct for commercial purposes, and the American whalers moved into the South Atlantic before the end of the 18th century. The most southerly Brazilian whaling station was established in 1796, in Imbituba. Over the next hundred years, American whaling spread into the Southern and Pacific Oceans, where the American fleet was joined by fleets from several European nations.
The southern right whale had been coming to Australian and New Zealand waters in large numbers before the 19th century, but was extensively hunted from 1800 to 1850. Hunting gradually declined with the whale population and then all but ended in coastal waters in Australasia. The beginning of the 20th century brought industrial whaling, and the catch grew rapidly. By 1937, according to whalers' records, 38,000 had been harpooned in the South Atlantic, 39,000 in the South Pacific, and 1,300 in the Indian Ocean, a recorded total of roughly 78,300. Given the incompleteness of these records, the total take was somewhat higher.
As it became clear that the population was nearly depleted, the harpooning of right whales was banned in 1937. The ban was largely successful, although some illegal whaling continued for several decades. Madeira took its last two right whales in 1968. Illegal whaling continued off the coast of Brazil for years, and the Imbituba station processed right whales until 1973. The USSR admitted to illegally taking over 3,300 right whales during the 1950s and 1960s, although it reported taking only four.
Illegal operations continued into the 1970s, as in Brazil until 1973. It was also revealed that Japan supported these destructive hunts by neglecting its monitoring obligations, and that Japan and the Soviet Union had agreed to keep their illegal mass whaling activities in foreign and international protected waters secret.
Right whales began to be seen again in Australian and New Zealand waters from the early 1960s. It is possible that if the Soviet hunts had never happened, the New Zealand population would be three or four times larger than its current size.
Conservation
The southern right whale, listed as "endangered" by CITES, is protected by all countries with known breeding populations (Argentina, Australia, Brazil, Chile, New Zealand, South Africa and Uruguay). In Argentina, it is considered a "Natural Monument" under national law Nº 23094, and all whales sighted in Argentine waters are legally protected. In Brazil, a federal Environmental Protection Area covering a stretch of the Santa Catarina coastline was established in 2000 to protect the species' main breeding grounds in Brazil and promote regulated whale watching. The southern right whale is listed on Appendix I of the Convention on the Conservation of Migratory Species of Wild Animals (CMS), as the species has been categorized as being in danger of extinction throughout all or a significant proportion of its range. It is also covered by the Memorandum of Understanding for the Conservation of Cetaceans and Their Habitats in the Pacific Islands Region (Pacific Cetaceans MoU). In 2017, the IUCN Red List of Threatened Species listed the species' status as Least Concern, with an "unknown" population trend.
In Australia, southern right whales are listed for protection variously under state and federal legislation.
A two-year, £740,000 project led by the British Antarctic Survey began in 2016 to discover why almost 500 young whales have washed up on the Valdes Peninsula over the last ten years. The project is funded by the UK's Department for Environment, Food and Rural Affairs (Defra) and the EU. Possible causes include a lack of krill in the whale feeding grounds at South Georgia and the South Sandwich Islands, exposure to toxic algae, and attacks by kelp gulls (Larus dominicanus).
Gull attacks
One possibly significant contributor to the calf mortality rate has alarmed scientists – since at least 1996, kelp gulls off the coast of Patagonia have been observed attacking and feeding on live right whales. The kelp gull uses its powerful beak to peck down several centimetres into the skin and blubber, often leaving the whales with large open sores – some of which have been observed to be half a metre in diameter. This predatory behaviour, primarily targeted towards mother/calf pairs, has been continually documented in Argentinian waters, and continues today. Observers note that the whales are spending up to a third of their time and energy performing evasive manoeuvres – therefore, mothers spend less time nursing, and the calves are thinner and weaker as a result. Researchers speculate that many years ago, waste from fish processing plants allowed the gull populations to soar. Their resulting overpopulation, combined with reduced waste output, caused the gulls to seek out this alternative food source. Scientists fear that the gulls' learned behaviour could proliferate, and the IWC Scientific Committee has urged Brazil to consider taking immediate action if and when similar gull behaviour is observed in their waters. Such action may include the removal of attacking gulls, following Argentina's lead in attempting to reverse the trend.
Threats
Southern right whales are threatened by entanglement in commercial fishing gear and ship strikes. Entanglement in fishing gear can cut through a whale's skin, causing infection, amputation and death. Underwater noise from human activities such as drilling and dredging can interfere with whales' communication, and deter them from their usual habitats and breeding grounds.
Whale watching
Africa
The southern right whale has made Hermanus, South Africa, one of the world centres for whale watching. During the winter months (June to October), southern right whales come so close to the shoreline that visitors can watch them from the shore as well as from strategically placed hotels. The town employs a "whale crier" (cf. town crier) to walk through the town announcing where whales have been seen. Hermanus also has two boat–based whale watching operators. Southern right whales can also be watched at False Bay from the shore or from the boats of operators in Simon's Town. Plettenberg Bay along the Garden Route of South Africa is also known for whale watching including both land and boat based watching, not only for southern rights (July to December) but throughout the year. Southern right whales can also be seen off the coast of Port Elizabeth with marine eco tours running from the Port Elizabeth harbour, as some southern right whales make Algoa Bay their home for the winter months.
Although southern right whales have been seen in neighboring countries including Namibia, Mozambique, and Madagascar, they are not the targeted species for whale watching tours in these countries.
South America
In Brazil, Imbituba in Santa Catarina has been recognised as the National Right Whale Capital and holds annual Right Whale Week celebrations in September, when mothers and calves are most often seen. The old whaling station there is now a museum documenting the history of right whales in Brazil. In Argentina, Península Valdés in Patagonia hosts the largest breeding population in winter, with more than 2,000 animals catalogued by the Whale Conservation Institute and Ocean Alliance. In southern Argentina the whales come close to the main beach in the city of Puerto Madryn and form part of a large ecotourism industry. On 4 September 2013, Uruguay's parliament made the country the first in the world to declare all of its territorial waters a haven for whales and dolphins; every year dozens of whales are sighted there, especially in the departments of Maldonado and Rocha during winter. Commercial swimming-with-whales activities had been banned in 1985, but were legalised in the Gulf of San Matías, the only place in the world where humans are formally allowed to swim with the species. Land-based watching and occasional kayaking with whales occur at other Argentine locations less renowned for whale-watching than Puerto Madryn, and with fewer restrictions on approaching whales, such as Puerto Deseado, Mar del Plata, and Miramar in Buenos Aires Province.
Though their numbers remain dangerously small, land-based sightings of whales off Chile and Peru have increased in recent years, raising hopes of new tourism industries, especially in the Strait of Magellan, most notably around Cape Virgenes.
Oceania
In Australia's winter and spring, southern right whales can be seen migrating along the Great Australian Bight in South Australia. Viewing locations include the Bunda Cliffs and Twin Rocks, the Head of the Bight (where a visitor centre and cliff-top viewing boardwalks exist) and at Fowler's Bay where accommodation and charter boat tours are offered. Another popular South Australian locality for Southern right whale watching is Encounter Bay, where the South Australian Whale Centre supports local whale-watchers and tourists. In Warrnambool, Victoria, a right whale nursery is also a popular tourist attraction. The whales' migratory range is extending as the species continues to recover and re-colonize other areas of the continent, including the coastal waters of New South Wales and Tasmania. In Tasmania, the first birth since the 19th century was recorded in 2010 in the River Derwent.
Similarly, southern right whales may give the public chances to observe whales from shore on New Zealand's coasts with greater regularity than in the past, especially in southern Fiordland and Southland through to the Otago coast, and on the North Island coast, especially in Northland and at other locations such as the Bay of Plenty and the South Taranaki Bight. Calving may always have occurred along the main islands' coasts, but it was first confirmed when two cow-calf pairs were recorded in 2012.
Subantarctic
In the Subantarctic Islands and in the vicinity of Antarctica, where few regulations exist or are enforced, whales can be observed on expedition tours with increasing probability. The Auckland Islands are a specially designated sanctuary for right whales, where whale-watching tourism is prohibited without authorisation.
Popular culture
The species was featured on a 70p commemorative stamp issued by Tristan da Cunha in 2019 as part of a set celebrating different species of whale.
| Biology and health sciences | Baleen whales | Animals |
302358 | https://en.wikipedia.org/wiki/Potassium%20chlorate | Potassium chlorate | Potassium chlorate is the inorganic compound with the molecular formula KClO3. In its pure form, it is a white solid. After sodium chlorate, it is the second most common chlorate in industrial use. It is a strong oxidizing agent and its most important application is in safety matches. In other applications it is mostly obsolete and has been replaced by safer alternatives in recent decades. It has been used
in fireworks, propellants and explosives,
to prepare oxygen, both in the lab and in chemical oxygen generators,
as a disinfectant, for example in dentifrices and medical mouthwashes,
in agriculture as an herbicide.
Production
On the industrial scale, potassium chlorate is produced by the salt metathesis reaction of sodium chlorate and potassium chloride:
NaClO3 + KCl → KClO3 + NaCl
The reaction is driven by the low solubility of potassium chlorate in water. The equilibrium of the reaction is shifted to the right hand side by the continuous precipitation of the product (Le Chatelier's Principle). The precursor sodium chlorate is produced industrially in very large quantities by electrolysis of sodium chloride, common table salt.
The direct electrolysis of KCl in aqueous solution is also sometimes used, in which elemental chlorine formed at the anode reacts with KOH in situ. The low solubility of KClO3 in water causes the salt to conveniently isolate itself from the reaction mixture by simply precipitating out of solution.
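Combining the anode and cathode processes, the net cell reaction for this electrolytic route can be summarised as:
KCl + 3 H2O → KClO3 + 3 H2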
Potassium chlorate can be produced in small amounts by disproportionation in a sodium hypochlorite solution followed by a metathesis reaction with potassium chloride:
3 NaOCl → 2 NaCl + NaClO3
NaClO3 + KCl → KClO3 + NaCl
It can also be produced by passing chlorine gas into a hot solution of caustic potash:
3 Cl2 + 6 KOH → KClO3 + 5 KCl + 3 H2O
According to X-ray crystallography, potassium chlorate adopts a dense, salt-like structure consisting of chlorate and potassium ions in close association.
Uses
Potassium chlorate was one key ingredient in early firearms percussion caps (primers). It continues in that application, where not supplanted by potassium perchlorate.
Chlorate-based propellants are more efficient than traditional gunpowder and are less susceptible to damage by water. However, they can be extremely unstable in the presence of sulfur or phosphorus and are much more expensive. Chlorate propellants must be used only in equipment designed for them; failure to follow this precaution is a common source of accidents. Potassium chlorate, often in combination with silver fulminate, is used in trick noise-makers known as "crackers", "snappers", "pop-its", "caps" or "bang-snaps", a popular type of novelty firework.
Another application of potassium chlorate is as the oxidizer in smoke compositions such as those used in smoke grenades. Since 2005, a cartridge of potassium chlorate mixed with lactose and rosin has been used to generate the white smoke signaling the election of a new pope by a papal conclave.
High school and college laboratories often use potassium chlorate to generate oxygen gas. It is a far cheaper source than a pressurized or cryogenic oxygen tank. Potassium chlorate readily decomposes if heated while in contact with a catalyst, typically manganese(IV) dioxide (MnO2). Thus, it may be simply placed in a test tube and heated over a burner. If the test tube is equipped with a one-holed stopper and hose, warm oxygen can be drawn off. The reaction is as follows:
2 KClO3 → 2 KCl + 3 O2
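As an illustrative calculation based on the equation above: 10 g of KClO3 (molar mass about 122.6 g/mol) is roughly 0.082 mol, which by the 2:3 stoichiometry yields about 0.12 mol of O2, or approximately 2.7 L at standard temperature and pressure.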
Heating it in the absence of a catalyst converts it into potassium perchlorate:
4 KClO3 → 3 KClO4 + KCl
With further heating, potassium perchlorate decomposes to potassium chloride and oxygen:
KClO4 → KCl + 2 O2
The safe performance of this reaction requires very pure reagents and careful temperature control. Molten potassium chlorate is an extremely powerful oxidizer and spontaneously reacts with many common materials such as sugar. Explosions have resulted from liquid chlorates spattering into the latex or PVC tubes of oxygen generators and from contact between chlorates and hydrocarbon sealing greases. Impurities in potassium chlorate itself can also cause problems. When working with a new batch of potassium chlorate, it is advisable to take a small sample (~1 gram) and heat it strongly on an open glass plate. Contamination may cause this small quantity to explode, indicating that the chlorate should be discarded.
Potassium chlorate is used in chemical oxygen generators (also called chlorate candles or oxygen candles), employed as oxygen-supply systems in, for example, aircraft, space stations, and submarines, and has been responsible for at least one plane crash. A fire on the space station Mir was traced to oxygen-generating candles based on the similar compound lithium perchlorate. The decomposition of potassium chlorate was also used to provide the oxygen supply for limelights.
Potassium chlorate is also used as a pesticide. In Finland, it was sold under the trade name Fegabit.
Potassium chlorate can react with sulfuric acid to form a highly reactive solution of chloric acid and potassium sulfate:
2 KClO3 + H2SO4 → 2 HClO3 + K2SO4
The solution so produced is sufficiently reactive that it spontaneously ignites if combustible material (sugar, paper, etc.) is present.
In schools, molten potassium chlorate is used in the "screaming jelly baby" demonstration, in which a gummy candy (such as a jelly baby, gummy bear, or similar Haribo or Trolli sweet) is dropped into the molten salt.
In chemical labs it is used to oxidize HCl and release small amounts of gaseous chlorine.
Militant groups in Afghanistan also use potassium chlorate extensively as a key component in the production of improvised explosive devices (IEDs). When significant effort was made to reduce the availability of ammonium nitrate fertilizer in Afghanistan, IED makers started using potassium chlorate as a cheap and effective alternative. In 2013, 60% of IEDs in Afghanistan used potassium chlorate, making it the most common ingredient used in IEDs.
Potassium chlorate was also the main ingredient in the car bomb used in the 2002 Bali bombings that killed 202 people.
Potassium chlorate is used to force the blossoming stage of the longan tree, causing it to produce fruit in warmer climates.
Safety
Potassium chlorate should be handled with care. It reacts vigorously, and in some cases spontaneously ignites or explodes, when mixed with many combustible materials. It burns vigorously in combination with virtually any combustible material, even those normally only slightly flammable (including ordinary dust and lint). Mixtures of potassium chlorate and a fuel can ignite by contact with sulfuric acid, so it should be kept away from this reagent.
Sulfur should be avoided in pyrotechnic compositions containing potassium chlorate, as these mixtures are prone to spontaneous deflagration. Most sulfur contains trace quantities of sulfur-containing acids, which can cause spontaneous ignition; even "flowers of sulfur" or "sublimed sulfur", despite its overall high purity, contains significant amounts of sulfur acids. Mixtures of potassium chlorate with any compound that promotes ignition, such as antimony(III) sulfide, are also very dangerous to prepare, as they are extremely shock sensitive.
| Physical sciences | Halide oxyanions | Chemistry |
302454 | https://en.wikipedia.org/wiki/Mesothelioma | Mesothelioma | Mesothelioma is a type of cancer that develops from the thin layer of tissue that covers many of the internal organs (known as the mesothelium). The area most commonly affected is the lining of the lungs and chest wall. Less commonly the lining of the abdomen and rarely the sac surrounding the heart, or the sac surrounding each testis may be affected. Signs and symptoms of mesothelioma may include shortness of breath due to fluid around the lung, a swollen abdomen, chest wall pain, cough, feeling tired, and weight loss. These symptoms typically come on slowly.
More than 80% of mesothelioma cases are caused by exposure to asbestos. The greater the exposure, the greater the risk. As of 2013, about 125 million people worldwide have been exposed to asbestos at work. High rates of disease occur in people who mine asbestos, produce products from asbestos, work with asbestos products, live with asbestos workers, or work in buildings containing asbestos. Asbestos exposure and the onset of cancer are generally separated by about 40 years. Washing the clothing of someone who worked with asbestos also increases the risk. Other risk factors include genetics and infection with the simian virus 40. The diagnosis may be suspected based on chest X-ray and CT scan findings, and is confirmed by either examining fluid produced by the cancer or by a tissue biopsy of the cancer.
Prevention focuses on reducing exposure to asbestos. Treatment often includes surgery, radiation therapy, and chemotherapy. A procedure known as pleurodesis, which involves using substances such as talc to scar together the pleura, may be used to prevent more fluid from building up around the lungs. Chemotherapy often includes the medications cisplatin and pemetrexed. In the United States, an average of about 8% of people survive five years after diagnosis.
In 2015, about 60,800 people had mesothelioma, and 32,000 died from the disease. Rates of mesothelioma vary in different areas of the world: they are higher in Australia and the United Kingdom, and lower in Japan. It occurs in about 3,000 people per year in the United States, and more often in males than in females. Rates of disease have increased since the 1950s. Diagnosis typically occurs after the age of 65, and most deaths occur around 70 years of age. The disease was rare before the commercial use of asbestos.
Signs and symptoms
Lungs
Symptoms or signs of mesothelioma may not appear until 20 to 50 years (or more) after exposure to asbestos. Shortness of breath, cough, and pain in the chest due to an accumulation of fluid in the pleural space (pleural effusion) are often symptoms of pleural mesothelioma.
Mesothelioma that affects the pleura can cause these signs and symptoms:
Chest wall pain
Pleural effusion, or fluid surrounding the lung
Shortness of breath – which could be due to a collapsed lung or the pleural effusion
Fatigue or anemia
Wheezing, hoarseness, or a cough
Blood in the sputum (fluid) coughed up (hemoptysis)
In severe cases, the person may have many tumor masses. The individual may develop a pneumothorax, or collapse of the lung. The disease may metastasize, or spread to other parts of the body.
Abdomen
The most common symptoms of peritoneal mesothelioma are abdominal swelling and pain due to ascites (a buildup of fluid in the abdominal cavity). Other features may include weight loss, fever, night sweats, poor appetite, vomiting, constipation, and umbilical hernia. If the cancer has spread beyond the mesothelium to other parts of the body, symptoms may include pain, trouble swallowing, or swelling of the neck or face. These symptoms may be caused by mesothelioma or by other, less serious conditions.
Tumors that affect the abdominal cavity often do not cause symptoms until they are at a late stage. Symptoms include:
Abdominal pain
Ascites, or an abnormal buildup of fluid in the abdomen
A mass in the abdomen
Problems with bowel function
Weight loss
Heart
Pericardial mesothelioma is not well characterized, but observed cases have included cardiac symptoms, specifically constrictive pericarditis, heart failure, pulmonary embolism, and cardiac tamponade. They have also included nonspecific symptoms, including substernal chest pain, orthopnea (shortness of breath when lying flat), and cough. These symptoms are caused by the tumor encasing or infiltrating the heart.
End-stage
In severe cases of the disease, the following signs and symptoms may be present:
Blood clots in the veins, which may cause thrombophlebitis
Disseminated intravascular coagulation, a disorder causing severe bleeding in many body organs
Jaundice, or yellowing of the eyes and skin
Low blood sugar
Pleural effusion
Pulmonary embolism, or blood clots in the arteries of the lungs
Severe ascites
If a mesothelioma forms metastases, these most commonly involve the liver, adrenal gland, kidney, or other lung.
Causes
Working with asbestos is the most common risk factor for mesothelioma. However, mesothelioma has been reported in some individuals without any known exposure to asbestos. Tentative evidence also raises concern about carbon nanotubes.
Asbestos
The incidence of mesothelioma has been found to be higher in populations living near naturally occurring asbestos. People can be exposed to naturally occurring asbestos in areas where mining or road construction is occurring, or when the asbestos-containing rock is naturally weathered. Another common route of exposure is through asbestos-containing soil, which is used to whitewash, plaster, and roof houses in Greece. In central Cappadocia, Turkey, mesothelioma was causing 50% of all deaths in three small villages—Tuzköy, Karain, and Sarıhıdır. Initially, this was attributed to erionite. Environmental exposure to asbestos has caused mesothelioma in places other than Turkey, including Corsica, Greece, Cyprus, China, and California. In the northern Greek mountain town of Metsovo, this exposure had resulted in mesothelioma incidence around 300 times more than expected in asbestos-free populations, and was associated with very frequent pleural calcification known as Metsovo lung.
The documented presence of asbestos fibers in water supplies and food products has fostered concerns about the possible impact of long-term and, as yet, unknown exposure of the general population to these fibers.
Exposure to talc is also a risk factor for mesothelioma; exposure can affect those who live near talc mines, work in talc mines, or work in talc mills.
In the United States, asbestos is considered the major cause of malignant mesothelioma and has been considered "indisputably" associated with the development of mesothelioma. Indeed, the relationship between asbestos and mesothelioma is so strong that many consider mesothelioma a "signal" or "sentinel" tumor. A history of asbestos exposure exists in most cases.
Pericardial mesothelioma may not be associated with asbestos exposure.
Asbestos was known in antiquity, but it was not mined and widely used commercially until the late 19th century. Its use greatly increased during World War II. Since the early 1940s, millions of American workers have been exposed to asbestos dust. Initially, the risks associated with asbestos exposure were not publicly known. However, an increased risk of developing mesothelioma was later found among naval personnel (e.g., Navy, Marine Corps, and Coast Guard), shipyard workers, people who work in asbestos mines and mills, producers of asbestos products, workers in the heating and construction industries, and other tradespeople. Today, the official position of the U.S. Occupational Safety and Health Administration (OSHA) and the United States Environmental Protection Agency (EPA) is that protections and "permissible exposure limits" required by U.S. regulations, while adequate to prevent most asbestos-related non-malignant disease, are not adequate to prevent or protect against asbestos-related cancers such as mesothelioma. Likewise, the British Government's Health and Safety Executive (HSE) states formally that any threshold for exposure to asbestos must be at a very low level and it is widely agreed that if any such threshold does exist at all, then it cannot currently be quantified. For practical purposes, therefore, HSE assumes that no such "safe" threshold exists. Others have noted as well that there is no evidence of a threshold level below which there is no risk of mesothelioma. There appears to be a linear, dose–response relationship, with increasing dose producing increasing risk of disease. Nevertheless, mesothelioma may be related to brief, low level, or indirect exposures to asbestos. The dose necessary for effect appears to be lower for asbestos-induced mesothelioma than for pulmonary asbestosis or lung cancer. Again, there is no known safe level of exposure to asbestos as it relates to increased risk of mesothelioma.
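Under such a linear model, expected risk scales in proportion to cumulative dose, so that, roughly speaking, doubling cumulative asbestos exposure doubles the associated mesothelioma risk.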
The time from first exposure to onset of the disease is between 25 and 70 years; it is virtually never less than fifteen years, and peaks at 30–40 years after exposure. The duration of asbestos exposure required to cause mesothelioma can be short: for example, cases have been documented after only 1–3 months of exposure.
Occupational
Exposure to asbestos fibers has been recognized as an occupational health hazard since the early 20th century. Numerous epidemiological studies have associated occupational exposure to asbestos with the development of pleural plaques, diffuse pleural thickening, asbestosis, carcinoma of the lung and larynx, gastrointestinal tumors, and diffuse malignant mesothelioma of the pleura and peritoneum. Asbestos has been widely used in many industrial products, including cement, brake linings, gaskets, roof shingles, flooring products, textiles, and insulation.
Commercial asbestos mining at Wittenoom, Western Australia, took place from 1937 to 1966. The first case of mesothelioma in the town occurred in 1960. The second case was in 1969, and new cases began to appear more frequently thereafter. The lag time between initial exposure to asbestos and the development of mesothelioma varied from 12 years and 9 months up to 58 years. A cohort study of miners employed at the mine reported that 85 deaths attributable to mesothelioma had occurred by 1985. By 1994, 539 deaths due to mesothelioma had been reported in Western Australia.
Occupational exposure to asbestos in the United States mainly occurs when people are maintaining buildings that already have asbestos. Approximately 1.3 million US workers are exposed to asbestos annually; in 2002, an estimated 44,000 miners were potentially exposed to asbestos.
Paraoccupational secondary exposure
Family members and others living with asbestos workers have an increased risk of developing mesothelioma, and possibly other asbestos-related diseases. This risk may be the result of exposure to asbestos dust brought home on the clothing and hair of asbestos workers via washing a worker's clothes or coming into contact with asbestos-contaminated work clothing. To reduce the chance of exposing family members to asbestos fibres, asbestos workers are usually required to shower and change their clothing before leaving the workplace.
Asbestos in buildings
Many building materials used in both public and domestic premises prior to the banning of asbestos may contain asbestos. Those performing renovation works or DIY activities may expose themselves to asbestos dust. In the UK, use of chrysotile asbestos was banned at the end of 1999. Brown and blue asbestos were banned in the UK around 1985. Buildings built or renovated prior to these dates may contain asbestos materials.
In the UK it is therefore a legal requirement that all who may come across asbestos in their day-to-day work be provided with the relevant asbestos training.
Genetic disposition
In research carried out on a white American population in 2012, it was found that people with a germline mutation in their BAP1 gene are at higher risk of developing mesothelioma and uveal melanoma.
Erionite
Erionite is a zeolite mineral with similar properties to asbestos and is known to cause mesothelioma. Detailed epidemiological investigation has shown that erionite causes mesothelioma mostly in families with a genetic predisposition. Erionite is found in deposits in the Western United States, where it is used in gravel for road surfacing, and in Turkey, where it is used to construct homes. In Turkey, the United States, and Mexico, erionite has been associated with mesothelioma and has thus been designated a "known human carcinogen" by the US National Toxicology Program.
Other
In rare cases, mesothelioma has also been associated with irradiation of the chest or abdomen, intrapleural thorium dioxide (thorotrast) as a contrast medium, and inhalation of other fibrous silicates, such as erionite or talc. Some studies suggest that simian virus 40 (SV40) may act as a cofactor in the development of mesothelioma. This has been confirmed in animal studies, but studies in humans are inconclusive.
Pathophysiology
Systemic
The mesothelium consists of a single layer of flattened to cuboidal cells forming the epithelial lining of the serous cavities of the body including the peritoneal, pericardial, and pleural cavities. Deposition of asbestos fibers in the parenchyma of the lung may result in the penetration of the visceral pleura from where the fiber can then be carried to the pleural surface, thus leading to the development of malignant mesothelial plaques. The processes leading to the development of peritoneal mesothelioma remain unresolved, although it has been proposed that asbestos fibers from the lung are transported to the abdomen and associated organs via the lymphatic system. Additionally, asbestos fibers may be deposited in the gut after ingestion of sputum contaminated with asbestos fibers.
Pleural contamination with asbestos or other mineral fibers has been shown to cause cancer. Long thin asbestos fibers (blue asbestos, amphibole fibers) are more potent carcinogens than "feathery fibers" (chrysotile or white asbestos fibers). However, there is now evidence that smaller particles may be more dangerous than the larger fibers. They remain suspended in the air where they can be inhaled, and may penetrate more easily and deeper into the lungs. "We probably will find out a lot more about the health aspects of asbestos from [the World Trade Center attack], unfortunately," said Dr. Alan Fein, chief of pulmonary and critical-care medicine at North Shore-Long Island Jewish Health System.
Mesothelioma development in rats has been demonstrated following intra-pleural inoculation of phosphorylated chrysotile fibers. It has been suggested that in humans, transport of fibers to the pleura is critical to the pathogenesis of mesothelioma. This is supported by the observed recruitment of significant numbers of macrophages and other cells of the immune system to localized lesions of accumulated asbestos fibers in the pleural and peritoneal cavities of rats. These lesions continued to attract and accumulate macrophages as the disease progressed, and cellular changes within the lesion culminated in a morphologically malignant tumor.
Experimental evidence suggests that asbestos acts as a complete carcinogen with the development of mesothelioma occurring in sequential stages of initiation and promotion. The molecular mechanisms underlying the malignant transformation of normal mesothelial cells by asbestos fibers remain unclear despite the demonstration of its oncogenic capabilities (see next-but-one paragraph). However, complete in vitro transformation of normal human mesothelial cells to a malignant phenotype following exposure to asbestos fibers has not yet been achieved. In general, asbestos fibers are thought to act through direct physical interactions with the cells of the mesothelium in conjunction with indirect effects following interaction with inflammatory cells such as macrophages.
Intracellular
Analysis of the interactions between asbestos fibers and DNA has shown that phagocytosed fibers make contact with chromosomes, often adhering to the chromatin fibers or becoming entangled within the chromosome. This contact between the asbestos fiber and the chromosomes or structural proteins of the spindle apparatus can induce complex abnormalities. The most common abnormality is monosomy of chromosome 22. Other frequent abnormalities include structural rearrangement of 1p, 3p, 9p, and 6q chromosome arms.
Common gene abnormalities in mesothelioma cell lines include deletion of the tumor suppressor genes:
Neurofibromatosis type 2 at 22q12
P16INK4A
P14ARF
Asbestos has also been shown to mediate the entry of foreign DNA into target cells. Incorporation of this foreign DNA may lead to mutations and oncogenesis by several possible mechanisms:
Inactivation of tumor suppressor genes
Activation of oncogenes (in tumor cells, these genes are often mutated or expressed at high levels)
Activation of proto-oncogenes due to incorporation of foreign DNA containing a promoter region
Activation of DNA repair enzymes, which may be prone to error
Activation of telomerase
Prevention of apoptosis
Several genes are commonly mutated in mesothelioma, and may be prognostic factors. These include primarily BAP1, NF2, and TP53; epidermal growth factor receptor (EGFR) and C-Met, receptor tyrosine kinases can also be altered and overexpressed in many mesotheliomas. Some association has been found with EGFR and epithelioid histology but no clear association has been found between EGFR overexpression and overall survival; BAP1 alterations are also predominantly in the epithelioid histology. Aneuploidy, ranging from haploidy to tetraploidy, and CpG island hypermethylation have also been shown to be frequent and associated with survival. Expression of AXL receptor tyrosine kinase is a negative prognostic factor. Expression of PDGFRB is a positive prognostic factor. In general, mesothelioma is characterized by loss of function in tumor suppressor genes, rather than by an overexpression or gain of function in oncogenes.
As an environmentally triggered malignancy, mesothelioma tumors have been found to be polyclonal in origin, by performing an X-inactivation based assay on epitheloid and biphasic tumors obtained from female patients. These results suggest that an environmental factor, most likely asbestos exposure, may damage and transform a group of cells in the tissue, resulting in a population of tumor cells that are, albeit only slightly, genetically different.
Immune system
Asbestos fibers have been shown to alter the function and secretory properties of macrophages, ultimately creating conditions which favour the development of mesothelioma. Following asbestos phagocytosis, macrophages generate increased amounts of hydroxyl radicals, which are normal by-products of cellular anaerobic metabolism. However, these free radicals are also known clastogenic (chromosome-breaking) and membrane-active agents thought to promote asbestos carcinogenicity. These oxidants can participate in the oncogenic process by directly and indirectly interacting with DNA, modifying membrane-associated cellular events, including oncogene activation and perturbation of cellular antioxidant defences.
Asbestos also may possess immunosuppressive properties. For example, chrysotile fibres have been shown to depress the in vitro proliferation of phytohemagglutinin-stimulated peripheral blood lymphocytes, suppress natural killer cell lysis, and significantly reduce lymphokine-activated killer cell viability and recovery. Furthermore, genetic alterations in asbestos-activated macrophages may result in the release of potent mesothelial cell mitogens such as platelet-derived growth factor (PDGF) and transforming growth factor-β (TGF-β) which in turn, may induce the chronic stimulation and proliferation of mesothelial cells after injury by asbestos fibres.
Diagnosis
Diagnosis of mesothelioma can be suspected with imaging but is confirmed with biopsy. It must be clinically and histologically differentiated from other pleural and pulmonary malignancies, including reactive pleural disease, primary lung carcinoma, pleural metastases of other cancers, and other primary pleural cancers.
Primary pericardial mesothelioma is often diagnosed after it has metastasized to lymph nodes or the lungs.
Imaging
Diagnosing mesothelioma is often difficult because the symptoms are similar to those of a number of other conditions. Diagnosis begins with a review of the patient's medical history. A history of exposure to asbestos may increase clinical suspicion for mesothelioma. A physical examination is performed, followed by chest X-ray and often lung function tests. The X-ray may reveal pleural thickening commonly seen after asbestos exposure, which increases suspicion of mesothelioma. A CT (or CAT) scan or an MRI is usually performed. If a large amount of fluid is present, abnormal cells may be detected by cytopathology if this fluid is aspirated with a syringe. For pleural fluid, this is done by thoracentesis or tube thoracostomy (chest tube); for ascites, with paracentesis or ascitic drain; and for pericardial effusion, with pericardiocentesis. While absence of malignant cells on cytology does not completely exclude mesothelioma, it makes it much less likely, especially if an alternative diagnosis can be made (e.g., tuberculosis, heart failure). However, with primary pericardial mesothelioma, pericardial fluid may not contain malignant cells and a tissue biopsy is more useful in diagnosis. Diagnosis of malignant mesothelioma by conventional cytology alone is difficult, but immunohistochemistry has greatly enhanced its accuracy.
Biopsy
Generally, a biopsy is needed to confirm a diagnosis of malignant mesothelioma. A doctor removes a sample of tissue for examination under a microscope by a pathologist. A biopsy may be done in different ways, depending on where the abnormal area is located. If the cancer is in the chest, the doctor may perform a thoracoscopy. In this procedure, the doctor makes a small cut through the chest wall and puts a thin, lighted tube called a thoracoscope into the chest between two ribs. Thoracoscopy allows the doctor to look inside the chest and obtain tissue samples. Alternatively, the cardiothoracic surgeon might directly open the chest (thoracotomy). If the cancer is in the abdomen, the doctor may perform a laparoscopy. To obtain tissue for examination, the doctor makes a small incision in the abdomen and inserts a special instrument into the abdominal cavity. If these procedures do not yield enough tissue, an open surgical procedure may be necessary.
Immunochemistry
Immunohistochemical studies play an important role for the pathologist in differentiating malignant mesothelioma from neoplastic mimics, such as breast or lung cancer that has metastasized to the pleura. There are numerous tests and panels available, but no single test is perfect for distinguishing mesothelioma from carcinoma, or benign from malignant disease. Positive mesothelial markers indicate that mesothelioma is present; positivity for other markers may instead indicate another type of cancer, such as breast or lung adenocarcinoma. Calretinin is a particularly important marker in distinguishing mesothelioma from metastatic breast or lung cancer.
Subtypes
There are three main histological subtypes of malignant mesothelioma: epithelioid, sarcomatous, and biphasic. Epithelioid and biphasic mesothelioma make up approximately 75–95% of mesotheliomas and have been well characterized histologically, whereas sarcomatous mesothelioma has not been studied extensively. Most mesotheliomas express high levels of cytokeratin 5 regardless of subtype.
Epithelioid mesothelioma is characterized by high levels of calretinin.
Sarcomatous mesothelioma does not express high levels of calretinin.
Other morphological subtypes have been described:
Desmoplastic
Clear cell
Deciduoid
Adenomatoid
Glandular
Mucohyaline
Cartilaginous and osseous metaplasia
Lymphohistiocytic
Differential diagnosis
Metastatic adenocarcinoma
Pleural sarcoma
Synovial sarcoma
Thymoma
Metastatic clear cell renal cell carcinoma
Metastatic osteosarcoma
Staging
Staging of mesothelioma is based on the recommendation by the International Mesothelioma Interest Group. TNM classification of the primary tumor, lymph node involvement, and distant metastasis is performed. Mesothelioma is staged Ia–IV (one-A to four) based on the TNM status.
Prevention
Mesothelioma can be prevented in most cases by preventing exposure to asbestos. The US National Institute for Occupational Safety and Health maintains a recommended exposure limit of 0.1 asbestos fiber per cubic centimeter.
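To make the recommended limit concrete, a back-of-envelope sketch (the breathing rate used below is an illustrative assumption for light work, not part of the NIOSH recommendation) estimates the fibers inhaled over one shift at exactly the limit:

```python
# Back-of-envelope sketch: fibers inhaled over an 8-hour shift at the REL,
# assuming a light-work breathing rate of ~20 L/min (an assumption for
# illustration only; not part of the NIOSH recommendation).
rel_fibers_per_cm3 = 0.1
minute_ventilation_cm3 = 20 * 1000      # 20 L/min expressed in cm^3/min
shift_minutes = 8 * 60

fibers_inhaled = rel_fibers_per_cm3 * minute_ventilation_cm3 * shift_minutes
print(f"{fibers_inhaled:,.0f} fibers")  # 960,000 fibers per shift at the limit
```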
Screening
There is no universally agreed protocol for screening people who have been exposed to asbestos. Screening tests might diagnose mesothelioma earlier than conventional methods, thus improving the survival prospects for patients. The serum osteopontin level might be useful in screening asbestos-exposed people for mesothelioma. The level of soluble mesothelin-related protein is elevated in the serum of about 75% of patients at diagnosis, and it has been suggested that it may be useful for screening. Doctors have begun testing the Mesomark assay, which measures levels of soluble mesothelin-related proteins (SMRPs) released by mesothelioma cells.
Treatment
Mesothelioma is generally resistant to treatment with radiation and chemotherapy. Long-term survival and cures are exceedingly rare. Treatment of malignant mesothelioma at earlier stages has a better prognosis. Clinical behavior of the malignancy is affected by several factors, including the continuous mesothelial surface of the pleural cavity, which favors local metastasis via exfoliated cells; invasion of underlying tissue and other organs within the pleural cavity; and the extremely long latency period between asbestos exposure and development of the disease. The histological subtype and the patient's age and health status also help predict prognosis. The epithelioid histology responds better to treatment and has a survival advantage over sarcomatoid histology.
The effectiveness of radiotherapy compared to chemotherapy or surgery for malignant pleural mesothelioma is not known.
Surgery
Surgery, by itself, has proved disappointing. In one large series, the median survival with surgery (including extrapleural pneumonectomy) was only 11.7 months. However, research indicates varied success when used in combination with radiation, chemotherapy (Duke, 2008), or both. A pleurectomy/decortication is the most common surgery, in which the lining of the chest is removed. Less common is an extrapleural pneumonectomy (EPP), in which the lung, lining of the inside of the chest, the hemi-diaphragm, and the pericardium are removed. In localized pericardial mesothelioma, pericardectomy can be curative; when the tumor has metastasized, pericardectomy is a palliative care option. It is often not possible to remove the entire tumor.
Radiation
For patients with localized disease, and who can tolerate a radical surgery, radiation can be given post-operatively as a consolidative treatment. The entire hemithorax is treated with radiation therapy, often given simultaneously with chemotherapy. Delivering radiation and chemotherapy after a radical surgery has led to extended life expectancy in selected patient populations. It can also induce severe side-effects, including fatal pneumonitis. As part of a curative approach to mesothelioma, radiotherapy is commonly applied to the sites of chest drain insertion, in order to prevent growth of the tumor along the track in the chest wall.
Although mesothelioma is generally resistant to curative treatment with radiotherapy alone, palliative treatment regimens are sometimes used to relieve symptoms arising from tumor growth, such as obstruction of a major blood vessel. Radiation therapy, when given alone with curative intent, has never been shown to improve survival from mesothelioma. The necessary radiation dose to treat mesothelioma that has not been surgically removed would be beyond human tolerance. Radiotherapy is of some use in pericardial mesothelioma.
Chemotherapy
Chemotherapy is the only treatment for mesothelioma that has been proven to improve survival in randomised and controlled trials. The landmark study published in 2003 by Vogelzang and colleagues compared cisplatin chemotherapy alone with a combination of cisplatin and pemetrexed (brand name Alimta) chemotherapy in patients who had not received chemotherapy for malignant pleural mesothelioma previously and were not candidates for more aggressive "curative" surgery. This trial was the first to report a survival advantage from chemotherapy in malignant pleural mesothelioma, showing a statistically significant improvement in median survival from 10 months in the patients treated with cisplatin alone to 13.3 months in the group of patients treated with cisplatin in combination with pemetrexed and who also received supplementation with folate and vitamin B12. Vitamin supplementation was given to most patients in the trial, and pemetrexed-related side effects were significantly less frequent in patients receiving pemetrexed when they also received daily oral folate 500 mcg and intramuscular vitamin B12 1000 mcg every 9 weeks, compared with patients receiving pemetrexed without vitamin supplementation. The objective response rate increased from 20% in the cisplatin group to 46% in the combination pemetrexed group. Some side effects such as nausea and vomiting, stomatitis, and diarrhoea were more common in the combination pemetrexed group, but only affected a minority of patients, and overall the combination of pemetrexed and cisplatin was well tolerated when patients received vitamin supplementation; both quality of life and lung function tests improved in the combination pemetrexed group. In February 2004, the United States Food and Drug Administration (FDA) approved pemetrexed for treatment of malignant pleural mesothelioma. However, there are still unanswered questions about the optimal use of chemotherapy, including when to start treatment and the optimal number of cycles to give. Cisplatin and pemetrexed together give patients a median survival of 12.1 months.
Cisplatin in combination with raltitrexed has shown an improvement in survival similar to that reported for pemetrexed in combination with cisplatin, but raltitrexed is no longer commercially available for this indication. For patients unable to tolerate pemetrexed, cisplatin in combination with gemcitabine or vinorelbine is an alternative, or vinorelbine on its own, although a survival benefit has not been shown for these drugs. For patients in whom cisplatin cannot be used, carboplatin can be substituted, but non-randomised data have shown lower response rates and high rates of haematological toxicity for carboplatin-based combinations, albeit with similar survival figures to patients receiving cisplatin. Cisplatin in combination with pemetrexed disodium, folic acid, and vitamin B12 may also improve survival for people who are responding to chemotherapy.
In January 2009, the United States FDA approved using conventional therapies such as surgery in combination with radiation and/or chemotherapy on stage I or II mesothelioma, after a nationwide study led by Duke University reported an almost 50-point increase in remission rates.
In pericardial mesothelioma, chemotherapy—typically adriamycin or cisplatin—is primarily used to shrink the tumor and is not curative.
Immunotherapy
Treatment regimens involving immunotherapy have yielded variable results. For example, intrapleural inoculation of Bacillus Calmette-Guérin (BCG), in an attempt to boost the immune response, was found to be of no benefit to the patient (while it may benefit patients with bladder cancer). Mesothelioma cells proved susceptible to in vitro lysis by LAK cells following activation by interleukin-2 (IL-2), but patients undergoing this particular therapy experienced major side effects. Indeed, this trial was suspended in view of the unacceptably high levels of IL-2 toxicity and the severity of side effects such as fever and cachexia. Nonetheless, other trials involving interferon alpha have proved more encouraging, with 20% of patients experiencing a greater than 50% reduction in tumor mass combined with minimal side effects.
In October 2020, the FDA approved the combination of nivolumab (Opdivo) with ipilimumab (Yervoy) for the first-line treatment of adults with malignant pleural mesothelioma (MPM) that cannot be removed by surgery. Nivolumab and ipilimumab are both monoclonal antibodies that, when combined, decrease tumor growth by enhancing T-cell function. The combination therapy was evaluated through a randomized, open-label trial in which participants who received nivolumab in combination with ipilimumab survived a median of 18.1 months while participants who underwent chemotherapy survived a median of 14.1 months.
Hyperthermic intrathoracic chemotherapy
Hyperthermic intrathoracic chemotherapy is used in conjunction with surgery, including in patients with malignant pleural mesothelioma. The surgeon removes as much of the tumor as possible, after which a chemotherapy agent, heated to between 40 and 48 °C, is administered directly into the pleural cavity, allowing high local concentrations of selected drugs. The fluid is perfused for 60 to 120 minutes and then drained. Heating the chemotherapy treatment increases the penetration of the drugs into tissues; in addition, the heat itself damages malignant cells more than normal cells.
Multimodality therapy
Multimodal therapy, which includes a combined approach of surgery, radiation, or photodynamic therapy, and chemotherapy, is not suggested for routine practice for treating malignant pleural mesothelioma. The effectiveness and safety of multimodal therapy is not clear (not enough research has been performed) and one clinical trial has suggested a possible increased risk of adverse effects.
Large series examining multimodality treatment have demonstrated only modest improvement in survival (median survival 14.5 months and only 29.6% surviving 2 years). Reducing the bulk of the tumor with cytoreductive surgery is key to extending survival. Two surgeries have been developed: extrapleural pneumonectomy and pleurectomy/decortication. The indications for performing these operations differ; the choice of operation depends mainly on the size of the patient's tumor. This is an important consideration because tumor volume has been identified as a prognostic factor in mesothelioma. Pleurectomy/decortication spares the underlying lung and is performed in patients with early-stage disease when the intention is to remove all gross visible tumor (macroscopic complete resection), not simply palliation. Extrapleural pneumonectomy is a more extensive operation that involves resection of the parietal and visceral pleurae, underlying lung, ipsilateral (same side) diaphragm, and ipsilateral pericardium. This operation is indicated for a subset of patients with more advanced tumors, who can tolerate a pneumonectomy.
Prognosis
Mesothelioma usually has a poor prognosis. Typical survival despite surgery is between 12 and 21 months depending on the stage of disease at diagnosis with about 7.5% of people surviving for 5 years.
Women, young people, people with low-stage cancers, and people with epithelioid cancers have better prognoses. Negative prognostic factors include sarcomatoid or biphasic histology, high platelet counts (above 400,000 per microliter), age over 50 years, white blood cell counts above 15.5 × 10⁹/L, low glucose levels in the pleural fluid, low albumin levels, and high fibrinogen levels. Several markers are under investigation as prognostic factors, including nuclear grade and serum C-reactive protein. Long-term survival is rare.
Pericardial mesothelioma has a 10-month median survival time.
In peritoneal mesothelioma, high expression of WT-1 protein indicates a worse prognosis.
Epidemiology
Although reported incidence rates have increased in the past 20 years, mesothelioma is still a relatively rare cancer. The incidence rate varies from one country to another, from a low rate of less than 1 per 1,000,000 in Tunisia and Morocco, to the highest rate in Britain, Australia, and Belgium: 30 per 1,000,000 per year. For comparison, populations with high levels of smoking can have a lung cancer incidence of over 1,000 per 1,000,000. Incidence of malignant mesothelioma currently ranges from about 7 to 40 per 1,000,000 in industrialized Western nations, depending on the amount of asbestos exposure of the populations during the past several decades. Worldwide incidence is estimated at 1–6 per 1,000,000.
Incidence of mesothelioma lags behind that of asbestosis due to the longer time it takes to develop; because of the cessation of asbestos use in developed countries, mesothelioma incidence there is expected to decrease. Incidence is expected to continue increasing in developing countries due to the continuing use of asbestos. Mesothelioma occurs more often in men than in women, and risk increases with age, but this disease can appear in either men or women at any age. Approximately one fifth to one third of all mesotheliomas are peritoneal. Less than 5% of mesotheliomas are pericardial. The prevalence of pericardial mesothelioma is less than 0.002%; it is more common in men than women. It typically occurs in a person's 50s–70s.
Between 1940 and 1979, approximately 27.5 million people were occupationally exposed to asbestos in the United States. Between 1973 and 1984, the incidence of pleural mesothelioma among Caucasian males increased 300%. From 1980 to the late 1990s, the death rate from mesothelioma in the USA increased from 2,000 per year to 3,000, with men four times more likely to acquire it than women. More than 80% of mesotheliomas are caused by asbestos exposure.
The incidence of peritoneal mesothelioma is 0.5–3.0 per million per year in men, and 0.2–2.0 per million per year in women.
UK
Mesothelioma accounts for less than 1% of all cancers diagnosed in the UK (around 2,600 people were diagnosed with the disease in 2011), and it is the seventeenth most common cause of cancer death (around 2,400 people died in 2012).
History
Connections between asbestos exposure and mesothelioma were first identified in the 1960s, with significant evidence emerging from South Africa.
In the United States, asbestos manufacture stopped in 2002. Asbestos exposure thus shifted from workers in asbestos textile mills, friction product manufacturing, cement pipe fabrication, and insulation manufacture and installation to maintenance workers in asbestos-containing buildings.
Society and culture
Notable cases
Mesothelioma, though rare, has had a number of notable patients:
Malcolm McLaren, musician and manager of the punk rock band the Sex Pistols, was diagnosed with peritoneal mesothelioma in October 2009 and died on 8 April 2010 in Switzerland.
Steve McQueen, American actor, was diagnosed with peritoneal mesothelioma on December 22, 1979. He was not offered surgery or chemotherapy because doctors felt the cancer was too advanced. McQueen subsequently sought alternative treatments at clinics in Mexico. He died of a heart attack on November 7, 1980, in Juárez, Mexico, following cancer surgery. He may have been exposed to asbestos while serving with the U.S. Marines as a young adult—asbestos was then commonly used to insulate ships' piping—or from its use as an insulating material in automobile racing suits (McQueen was an avid racing driver and fan).
Mickie Most, record producer, died of peritoneal mesothelioma in May 2003; however, it has been questioned whether this was due to asbestos exposure.
Warren Zevon, American musician, was diagnosed with pleural mesothelioma in 2002, and died on September 7, 2003. It is believed that this was caused through childhood exposure to asbestos insulation in the attic of his father's shop.
David Martin, Australian sailor and politician, died on 10 August 1990 of pleural mesothelioma. It is believed that this was caused by his exposure to asbestos on military ships during his career in the Royal Australian Navy.
Paul Kraus, diagnosed in 1997, is considered the longest currently living (as of 2017) mesothelioma survivor in the world.
F. W. de Klerk, retired South African politician, was diagnosed with mesothelioma on March 19, 2021, and died in November 2021.
Paul Gleason, American actor, died on May 27, 2006, just a few months after diagnosis.
Merlin Olsen, American football player, announcer, and actor, died March 11, 2010.
Rogier van Otterloo, Dutch composer and conductor, was diagnosed with mesothelioma and died January 29, 1988.
Although life expectancy with this disease is typically limited, there are notable survivors. In July 1982, Stephen Jay Gould, a well-regarded paleontologist, was diagnosed with peritoneal mesothelioma. After his diagnosis, Gould wrote "The Median Isn't the Message", in which he argued that statistics such as median survival are useful abstractions, not destiny. Gould lived for another 20 years, eventually succumbing to cancer not linked to his mesothelioma.
Legal issues
Some people who were exposed to asbestos have collected damages for an asbestos-related disease, including mesothelioma. Compensation via asbestos funds or class action lawsuits is an important issue in law practices regarding mesothelioma.
The first lawsuits against asbestos manufacturers were in 1929. Since then, many lawsuits have been filed against asbestos manufacturers and employers, for neglecting to implement safety measures after the links between asbestos, asbestosis, and mesothelioma became known (some reports seem to place this as early as 1898). The liability resulting from the sheer number of lawsuits and people affected has reached billions of dollars. The amounts and method of allocating compensation have been the source of many court cases, reaching up to the United States Supreme Court, and government attempts at resolution of existing and future cases. However, to date, the US Congress has not stepped in and there are no federal laws governing asbestos compensation.
In 2013, the "Furthering Asbestos Claim Transparency (FACT) Act of 2013" passed the US House of Representatives and was sent to the US Senate, where it was referred to the Senate Judiciary Committee. As the Senate did not vote on it before the end of the 113th Congress, it died in committee. It was revived in the 114th Congress, but was not brought before the House for a vote.
History
The first lawsuit against asbestos manufacturers was brought in 1929. The parties settled that lawsuit, and as part of the agreement, the attorneys agreed not to pursue further cases. In 1960, an article published by Wagner et al. was seminal in establishing mesothelioma as a disease arising from exposure to asbestos. The article referred to over 30 case studies of people who had had mesothelioma in South Africa. Some of those affected had experienced only transient exposure, while others were mine workers. Before the use of advanced microscopy techniques, malignant mesothelioma was often diagnosed as a variant form of lung cancer. In 1962, McNulty reported the first diagnosed case of malignant mesothelioma in an Australian asbestos worker. The worker had worked in the mill at the asbestos mine in Wittenoom from 1948 to 1950.
In the town of Wittenoom, asbestos-containing mine waste was used to cover schoolyards and playgrounds. In 1965, an article in the British Journal of Industrial Medicine established that people who lived in the neighbourhoods of asbestos factories and mines, but did not work in them, had contracted mesothelioma.
Despite proof that the dust associated with asbestos mining and milling causes asbestos-related disease, mining began at Wittenoom in 1943 and continued until 1966. In 1974, the first public warnings of the dangers of blue asbestos were published in a cover story called "Is this Killer in Your Home?" in Australia's Bulletin magazine. In 1978, the Western Australian Government decided to phase out the town of Wittenoom, following the publication of a Health Dept. booklet, "The Health Hazard at Wittenoom", containing the results of air sampling and an appraisal of worldwide medical information.
By 1979, the first writs for negligence related to Wittenoom were issued against CSR and its subsidiary ABA, and the Asbestos Diseases Society was formed to represent the Wittenoom victims.
In Leeds, England, the Armley asbestos disaster involved several court cases against Turner & Newall in which local residents who contracted mesothelioma demanded compensation for the asbestos pollution from the company's factory. One notable case was that of June Hancock, who was diagnosed with the disease in 1993 and died in 1997.
Research
The WT-1 protein is overexpressed in mesothelioma and is being researched as a potential target for drugs.
There are two high-confidence miRNAs that can potentially serve as biomarkers of asbestos exposure and malignant mesothelioma. Validation studies are needed to assess their relevance.
Some growth factors have been identified and, as a result, targeted therapies have emerged to help slow the growth of oncogenic abnormalities. For example, bevacizumab is a humanized monoclonal antibody directed at vascular endothelial growth factor (VEGF).
| Biology and health sciences | Cancer | Health |
302598 | https://en.wikipedia.org/wiki/Vombatiformes | Vombatiformes | The Vombatiformes are one of the three suborders of the large marsupial order Diprotodontia. Seven of the nine known families within this suborder are extinct; only the families Phascolarctidae, with the koala, and Vombatidae, with three extant species of wombat, survive.
Among the extinct families are the Diprotodontidae, which includes the rhinoceros-sized Diprotodon, believed to be the largest marsupial ever, as well as the "marsupial lions" (Thylacoleonidae) and "marsupial tapirs" (Palorchestidae).
Classification
After
Suborder Vombatiformes
Family †Thylacoleonidae: (marsupial lions)
Genus †Thylacoleo
Genus †Priscileo
Genus †Wakaleo
Genus †Microleo
Family Phascolarctidae: koala (one modern species)
Genus †Perikoala
Genus †Madakoala
Genus †Koobor
Genus †Litokoala
Genus †Nimiokoala
Genus Phascolarctos
Vombatomorphia
Family †Wynyardiidae
Genus †Wynyardia
Genus †Muramura
Genus †Namilamadeta
Family †Ilariidae
Genus †Koalemas
Genus †Kuterintja
Genus †Ilaria
Vombatoidea
Family †Mukupirnidae
Genus †Mukupirna
Family †Maradidae
Genus †Marada
Genus †Nimbavombatus (either considered the most basal vombatid or just outside Vombatidae)
Family Vombatidae: wombats (three living species)
Genus †Rhizophascolonus
Genus Vombatus
Genus †Phascolonus
Genus †Warendja
Genus †Ramsayia
Genus †Sedophascolomys
Genus Lasiorhinus
Superfamily Diprotodontoidea
Genus †Silvabestius
Genus †Ngapakaldia
Genus †Nimbadon
Genus †Neohelos
Family †Diprotodontidae:
Genus †Alkwertatherium
Genus †Bematherium
Genus †Pyramios
Genus †Nototherium
Genus †Meniscolophus
Genus †Euryzygoma
Genus †Diprotodon
Genus †Euowenia
Genus †Sthenomerus
Subfamily †Zygomaturinae
Genus †Neohelos
Genus †Raemeotherium
Genus †Plaisiodon
Genus †Zygomaturus
Genus †Kolopsis
Genus †Kolopsoides
Genus †Hulitherium
Genus †Maokopia
Family †Palorchestidae: (marsupial tapirs)
Genus †Palorchestes
Genus †Propalorchestes
Genus †Pitikantia
| Biology and health sciences | Diprotodontia | Animals |
302776 | https://en.wikipedia.org/wiki/Float%20glass | Float glass | Float glass is a sheet of glass made by floating molten glass on a bed of molten metal of a low melting point, typically tin, although lead was used for the process in the past. This method gives the sheet uniform thickness and a very flat surface. The float glass process is also known as the Pilkington process, named after the British glass manufacturer Pilkington, which pioneered the technique in the 1950s at their production site in St Helens, Merseyside.
Modern windows are usually made from float glass, though Corning Incorporated uses the overflow downdraw method.
Most float glass is soda–lime glass, although relatively minor quantities of specialty borosilicate and flat panel display glass are also produced using the float glass process.
History
Until the 16th century, window glass or other flat glass was generally cut from large discs (or rondels) of crown glass. Larger sheets of glass were made by blowing large cylinders which were cut open and flattened, then cut into panes. Most window glass in the early 19th century was made using the cylinder method. The limited length and diameter of the cylinders restricted the width of the panes that could be cut, resulting in windows divided by transoms into rectangular panels.
The first advances in automating glass manufacturing were patented in 1848 by Henry Bessemer. His system produced a continuous ribbon of flat glass by forming the ribbon between rollers. This was an expensive process, as the surfaces of the glass needed polishing. If the glass could be set on a perfectly smooth, flat body, like the surface of an open pan of calm liquid, this would reduce costs considerably. Attempts were made to form flat glass on a bath of molten tin—one of the few liquids denser than glass that would be calm at the high temperatures needed to make glass—most notably in the US. Several patents were granted, but this process was unworkable at the time.
Before the development of float glass, larger sheets of plate glass were made by casting a large puddle of glass on an iron surface, and then polishing both sides, a costly process. From the early 1920s, a continuous ribbon of plate glass was passed through a lengthy series of inline grinders and polishers, reducing glass losses and cost.
Glass of lower quality, drawn glass, was made by drawing a thin sheet upwards from a pool of molten glass, held at the edges by rollers. As it cooled, the rising sheet stiffened and could then be cut. The two surfaces were of lower quality, i.e. not as smooth or uniform as those of float glass. This process continued in use for many years after the development of float glass.
Between 1953 and 1957, at the Cowley Hill Works, St Helens, Lancashire, Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers developed the first successful commercial application for forming a continuous ribbon of glass, using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The success of this process lay in the careful balance of the volume of glass fed onto the bath, where it was flattened by its own weight. Full-scale profitable sales of float glass were first achieved in 1960, and in the 1960s the process was licensed throughout the world, replacing previous production methods.
Manufacture
Float glass uses common glass-making raw materials, typically sand, soda ash (sodium carbonate), dolomite, limestone, and salt cake (sodium sulfate). Other materials may be used as colourants, refining agents, or to adjust the physical and chemical properties of the glass. The raw materials are mixed in a batch process, then fed together with a controlled proportion of cullet (waste glass) into a furnace, where the mixture is heated to approximately 1,500 °C. Common float glass furnaces are 9 m wide and 45 m long and have capacities of more than 1,200 tons of glass. Once molten, the temperature of the glass is stabilised to approximately 1,200 °C to ensure a homogeneous density.
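As a rough illustration of what such a batch is formulated to produce, the sketch below encodes an approximate oxide composition for ordinary soda-lime float glass. The percentages are typical textbook figures, not a recipe from any particular plant, and a real batch calculation must also account for the gases lost from carbonates during melting:

```python
# Illustrative only (not a real plant recipe): approximate oxide composition
# of a typical soda-lime float glass, as mass fractions. Actual batches are
# formulated from raw materials (sand, soda ash, dolomite, limestone, salt
# cake) to hit oxide targets roughly like these.
SODA_LIME_OXIDES = {
    "SiO2": 0.73,  # network former, supplied by sand
    "Na2O": 0.14,  # flux, supplied by soda ash
    "CaO":  0.09,  # stabilizer, supplied by limestone/dolomite
    "MgO":  0.04,  # stabilizer, supplied by dolomite
}

def oxide_masses(total_glass_kg: float) -> dict:
    """Split a target glass mass into oxide masses. Ignores the CO2 and other
    volatiles driven off from carbonates during melting, which a real batch
    calculation must add back as extra raw-material mass."""
    return {oxide: frac * total_glass_kg for oxide, frac in SODA_LIME_OXIDES.items()}

if __name__ == "__main__":
    assert abs(sum(SODA_LIME_OXIDES.values()) - 1.0) < 1e-9
    print(oxide_masses(1000.0))  # oxide masses for one tonne of finished glass
```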
The molten glass is fed into a "tin bath", a bath of molten tin (about 3–4 m wide, 50 m long, 6 cm deep), from a delivery canal and is poured into the tin bath by a ceramic lip known as the spout lip. The amount of glass allowed to pour onto the molten tin is controlled by a gate called a tweel.
Molten tin is suitable for the float glass process because it has a higher density than glass, so the molten glass floats on it. Its boiling point is higher than the melting point of glass, and its vapour pressure at process temperature is low. However, tin oxidises in a natural atmosphere to form tin dioxide (SnO2). Known in the production process as dross, the tin dioxide adheres to the glass. To prevent oxidation, the tin bath is provided with a positive pressure protective atmosphere of nitrogen and hydrogen.
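The density argument can be made concrete with Archimedes' principle. Taking rough figures of about 2.4 g/cm³ for molten glass and 6.5 g/cm³ for molten tin at bath temperatures (both values are approximate assumptions, not measured plant data), the floating ribbon sinks only partway into the tin:

\[
\frac{d_{\text{submerged}}}{t_{\text{ribbon}}} = \frac{\rho_{\text{glass}}}{\rho_{\text{tin}}} \approx \frac{2.4}{6.5} \approx 0.37,
\]

so roughly a third of the ribbon's thickness rides below the tin surface, with the rest above it.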
The glass flows onto the tin surface forming a floating ribbon of even thickness with perfectly smooth surfaces on both sides. As the glass flows along the tin bath, the temperature is gradually reduced from 1,100 °C until at approximately 600 °C the sheet can be lifted from the tin onto rollers. The glass ribbon is pulled off the bath by rollers at a controlled speed. Variation in the flow speed and roller speed enables glass sheets of varying thickness to be formed. Top rollers positioned above the molten tin may be used to control both the thickness and the width of the glass ribbon.
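The relationship between flow rate, ribbon width, draw speed, and thickness described above is simply conservation of volume, which the following sketch illustrates (the throughput, width, and draw speed below are hypothetical figures chosen only to be of realistic magnitude):

```python
def ribbon_thickness_m(volume_flow_m3_s: float, ribbon_width_m: float,
                       draw_speed_m_s: float) -> float:
    """Steady-state ribbon thickness from conservation of volume:
    Q = width * thickness * speed, so thickness = Q / (width * speed)."""
    return volume_flow_m3_s / (ribbon_width_m * draw_speed_m_s)

# Example with assumed figures: a line melting 500 tonnes/day of glass
# (density ~2500 kg/m^3) feeding a 3.5 m wide ribbon drawn at 10 m/min.
q = (500_000 / 2500) / 86_400      # ~0.00231 m^3/s of molten glass
t = ribbon_thickness_m(q, 3.5, 10 / 60)
print(f"{t * 1000:.1f} mm")        # ~4.0 mm
```

Slowing the draw speed at a fixed feed rate thickens the ribbon; speeding it up thins it, which is exactly the control lever the paragraph above describes.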
Once off the bath, the glass sheet passes through a lehr kiln for approximately 100 m, where it is cooled gradually so that it anneals without strain and does not crack from the temperature change. On exiting the "cold end" of the kiln, the glass is cut by machines.
Uses
Today, float glass is the most widely produced form of glass, with a multitude of commercial applications. Due to both its high quality with no additional polishing required and its structural flexibility during production, it can easily be shaped and bent into a variety of forms while in a heated, syrupy state. This makes it ideal for a variety of applications such as:
Automobile glass (e.g. windshields, windows, mirrors)
Mirrors
Furniture (e.g. in tables and shelves)
Insulated glass
Windows and doors
Most forms of specialized glass such as toughened glass, frosted glass, laminated safety glass and soundproof glass consist of standard float glass that has been further processed.
Market
As of 2009, the world float glass market, not including China and Russia, is dominated by four companies: Asahi Glass, NSG/Pilkington, Saint-Gobain, and Guardian Industries. Other companies include Sise Cam AS, Vitro, formerly PPG, Central Glass, Hankuk (HanGlas), Carlex Glass, and Cardinal Glass Industries.
| Technology | Materials | null |
302812 | https://en.wikipedia.org/wiki/Color%20vision | Color vision | Color vision, a feature of visual perception, is an ability to perceive differences between light composed of different frequencies independently of light intensity.
Color perception is a part of the larger visual system and is mediated by a complex process between neurons that begins with differential stimulation of different types of photoreceptors by light entering the eye. Those photoreceptors then emit outputs that are propagated through many layers of neurons ultimately leading to higher cognitive functions in the brain. Color vision is found in many animals and is mediated by similar underlying mechanisms with common types of biological molecules and a complex history of evolution in different animal taxa. In primates, color vision may have evolved under selective pressure for a variety of visual tasks including the foraging for nutritious young leaves, ripe fruit, and flowers, as well as detecting predator camouflage and emotional states in other primates.
Wavelength
Isaac Newton discovered that white light, after being split into its component colors by a dispersive prism, could be recombined into white light by passing the colors through a second prism. The visible light spectrum ranges from about 380 to 740 nanometers. Spectral colors (colors that are produced by a narrow band of wavelengths) such as red, orange, yellow, green, cyan, blue, and violet can be found in this range. These spectral colors do not refer to a single wavelength, but rather to a set of wavelengths: red, 625–740 nm; orange, 590–625 nm; yellow, 565–590 nm; green, 500–565 nm; cyan, 485–500 nm; blue, 450–485 nm; violet, 380–450 nm.
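The band boundaries quoted above lend themselves to a simple lookup. The sketch below takes its band edges directly from the ranges in this paragraph (the edges are conventions, not physical constants; adjacent bands share an edge, and the first matching, shorter-wavelength band wins here):

```python
# Spectral bands as (lower_nm, upper_nm, name), matching the ranges above.
SPECTRAL_BANDS = [
    (380, 450, "violet"),
    (450, 485, "blue"),
    (485, 500, "cyan"),
    (500, 565, "green"),
    (565, 590, "yellow"),
    (590, 625, "orange"),
    (625, 740, "red"),
]

def spectral_color(wavelength_nm: float) -> str:
    """Classify a wavelength into its conventional spectral color band."""
    if wavelength_nm < 380:
        return "ultraviolet (invisible to humans)"
    if wavelength_nm > 740:
        return "infrared (invisible to humans)"
    for lo, hi, name in SPECTRAL_BANDS:
        if lo <= wavelength_nm <= hi:
            return name
    return "unclassified"

print(spectral_color(532))  # "green" (a common green-laser wavelength)
```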
Wavelengths longer or shorter than this range are called infrared or ultraviolet, respectively. Humans cannot generally see these wavelengths, but other animals may.
Hue detection
Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths to 10 nm and more in the longer red and shorter blue wavelengths. Although the human eye can distinguish up to a few hundred hues, when those pure spectral colors are mixed together or diluted with white light, the number of distinguishable chromaticities can be much higher.
In very low light levels, vision is scotopic: light is detected by rod cells of the retina. Rods are maximally sensitive to wavelengths near 500 nm and play little, if any, role in color vision. In brighter light, such as daylight, vision is photopic: light is detected by cone cells which are responsible for color vision. Cones are sensitive to a range of wavelengths, but are most sensitive to wavelengths near 555 nm. Between these regions, mesopic vision comes into play and both rods and cones provide signals to the retinal ganglion cells. The shift in color perception from dim light to daylight gives rise to differences known as the Purkinje effect.
The perception of "white" is formed by the entire spectrum of visible light, or by mixing colors of just a few wavelengths in animals with few types of color receptors. In humans, white light can be perceived by combining wavelengths such as red, green, and blue, or just a pair of complementary colors such as blue and yellow.
Non-spectral colors
There are a variety of colors in addition to spectral colors and their hues. These include grayscale colors, shades of colors obtained by mixing grayscale colors with spectral colors, violet-red colors, impossible colors, and metallic colors.
Grayscale colors include white, gray, and black. Rods contain rhodopsin, which reacts to light intensity, providing grayscale coloring.
Shades include colors such as pink or brown. Pink is obtained from mixing red and white. Brown may be obtained from mixing orange with gray or black. Navy is obtained from mixing blue and black.
Violet-red colors include hues and shades of magenta. The light spectrum is a line with violet at one end and red at the other, and yet we see hues of purple that connect those two colors.
Impossible colors are a combination of cone responses that cannot be naturally produced. For example, medium cones cannot be activated completely on their own; if they were, we would see a 'hyper-green' color.
Dimensionality
Color vision is categorized foremost according to the dimensionality of the color gamut, which is defined by the number of primaries required to represent the color vision. This is generally equal to the number of photopsins expressed: a correlation that holds for vertebrates but not invertebrates. The common vertebrate ancestor possessed four photopsins (expressed in cones) plus rhodopsin (expressed in rods), so was tetrachromatic. However, many vertebrate lineages have lost one or more photopsin genes, leading to lower-dimensional color vision. The dimensionality of color vision ranges from one (monochromacy) upward through dichromacy, trichromacy, and tetrachromacy.
Physiology of color perception
Perception of color begins with specialized retinal cells known as cone cells. Cone cells contain different forms of opsin – a pigment protein – that have different spectral sensitivities. Humans contain three types, resulting in trichromatic color vision.
Each individual cone contains pigments composed of opsin apoprotein covalently linked to a light-absorbing prosthetic group: either 11-cis-retinal or, more rarely, 11-cis-dehydroretinal.
The cones are conventionally labeled according to the ordering of the wavelengths of the peaks of their spectral sensitivities: short (S), medium (M), and long (L) cone types. These three types do not correspond well to particular colors as we know them. Rather, the perception of color is achieved by a complex process that starts with the differential output of these cells in the retina and which is finalized in the visual cortex and associative areas of the brain.
For example, while the L cones have been referred to simply as red receptors, microspectrophotometry has shown that their peak sensitivity is in the greenish-yellow region of the spectrum. Similarly, the S cones and M cones do not directly correspond to blue and green, although they are often described as such. The RGB color model, therefore, is a convenient means for representing color but is not directly based on the types of cones in the human eye.
The peak response of human cone cells varies, even among individuals with so-called normal color vision; in some non-human species this polymorphic variation is even greater, and it may well be adaptive.
Theories
Two complementary theories of color vision are the trichromatic theory and the opponent process theory. The trichromatic theory, or Young–Helmholtz theory, proposed in the 19th century by Thomas Young and Hermann von Helmholtz, posits three types of cones preferentially sensitive to blue, green, and red, respectively. Others have suggested that the trichromatic theory is not specifically a theory of color vision but a theory of receptors for all vision, including color but not specific or limited to it. Equally, it has been suggested that the relationship between the phenomenal opponency described by Hering and the physiological opponent processes is not straightforward (see below), making physiological opponency a mechanism relevant to the whole of vision, not just to color vision alone. Ewald Hering proposed the opponent process theory in 1872. It states that the visual system interprets color in an antagonistic way: red vs. green, blue vs. yellow, black vs. white. Both theories are generally accepted as valid, describing different stages in visual physiology.
Green–magenta and blue–yellow are scales with mutually exclusive boundaries. In the same way that there cannot exist a "slightly negative" positive number, a single eye cannot perceive a bluish-yellow or a reddish-green. Although both theories are currently widely accepted, past and more recent work has led to criticism of the opponent process theory, stemming from a number of apparent discrepancies in its standard form. For example, the phenomenon of an after-image of complementary color can be induced by fatiguing the cells responsible for color perception, by staring at a vibrant color for a length of time and then looking at a white surface. This phenomenon of complementary colors demonstrates cyan, rather than green, to be the complement of red, and magenta, rather than red, to be the complement of green; it also demonstrates, as a consequence, that the reddish-green color proposed to be impossible by opponent process theory is, in fact, the color yellow. Although this phenomenon is more readily explained by the trichromatic theory, explanations for the discrepancy may include alterations to the opponent process theory, such as redefining the opponent colors as red vs. cyan, to reflect this effect. Despite such criticisms, both theories remain in use.
A newer theory proposed by Edwin H. Land, the Retinex Theory, is based on a demonstration of color constancy, which shows that the color of any surface that is part of a complex natural scene is to a large degree independent of the wavelength composition of the light reflected from it. Likewise, the after-image produced by looking at a given part of a complex scene is independent of the wavelength composition of the light reflected from that part alone. Thus, while the color of the after-image produced by looking at a green surface that is reflecting more "green" (middle-wave) than "red" (long-wave) light is magenta, so is the after-image of the same surface when it reflects more "red" than "green" light (when it is still perceived as green). This would seem to rule out an explanation of color opponency based on retinal cone adaptation.
According to Land's Retinex theory, color in a natural scene depends upon the three sets of cone cells ("red," "green," and "blue") separately perceiving each surface's relative lightness in the scene and, together with the visual cortex, assigning color based on comparing the lightness values perceived by each set of cone cells.
Cone cells in the human eye
A range of wavelengths of light stimulates each of these receptor types to varying degrees. The brain combines the information from each type of receptor to give rise to different perceptions of different wavelengths of light.
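A standard textbook formalization of this combination step (a general linear model of early color vision, not something specific to this article) expresses the response of each cone type as an overlap integral between the stimulus and that cone's sensitivity curve:

\[
R_i = \int_{380\,\text{nm}}^{740\,\text{nm}} S_i(\lambda)\, E(\lambda)\, \mathrm{d}\lambda, \qquad i \in \{S, M, L\},
\]

where \(S_i(\lambda)\) is the spectral sensitivity of cone type \(i\) and \(E(\lambda)\) is the spectral power distribution of the light. Two physically different spectra that produce the same triplet \((R_S, R_M, R_L)\) are indistinguishable to the eye; such pairs are called metamers.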
Cones and rods are not evenly distributed in the human eye. Cones have a high density at the fovea and a low density in the rest of the retina. Thus color information is mostly taken in at the fovea. Humans have poor color perception in their peripheral vision, and much of the color we see in our periphery may be filled in by what our brains expect to be there on the basis of context and memories. However, our accuracy of color perception in the periphery increases with the size of the stimulus.
The opsins (photopigments) present in the L and M cones are encoded on the X chromosome; defective encoding of these leads to the two most common forms of color blindness. The OPN1LW gene, which encodes the opsin present in the L cones, is highly polymorphic; one study found 85 variants in a sample of 236 men. A small percentage of women may have an extra type of color receptor because they have different alleles for the gene for the L opsin on each X chromosome. X chromosome inactivation means that while only one opsin is expressed in each cone cell, both types may occur overall, and some women may therefore show a degree of tetrachromatic color vision. Variations in OPN1MW, which encodes the opsin expressed in M cones, appear to be rare, and the observed variants have no effect on spectral sensitivity.
Color in the primate brain
Color processing begins at a very early level in the visual system (even within the retina) through initial color opponent mechanisms. Both Helmholtz's trichromatic theory and Hering's opponent-process theory are therefore correct, but trichromacy arises at the level of the receptors, and opponent processes arise at the level of retinal ganglion cells and beyond. In Hering's theory, opponent mechanisms refer to the opposing color effect of red–green, blue–yellow, and light-dark. However, in the visual system, it is the activity of the different receptor types that are opposed. Some midget retinal ganglion cells oppose L and M cone activity, which corresponds loosely to red–green opponency, but actually runs along an axis from blue-green to magenta. Small bistratified retinal ganglion cells oppose input from the S cones to input from the L and M cones. This is often thought to correspond to blue–yellow opponency but actually runs along a color axis from yellow-green to violet.
Visual information is then sent to the brain from retinal ganglion cells via the optic nerve to the optic chiasma: a point where the two optic nerves meet and information from the temporal (contralateral) visual field crosses to the other side of the brain. After the optic chiasma, the visual tracts are referred to as the optic tracts, which enter the thalamus to synapse at the lateral geniculate nucleus (LGN).
The lateral geniculate nucleus is divided into laminae (zones), of which there are three types: the M-laminae, consisting primarily of M-cells; the P-laminae, consisting primarily of P-cells; and the koniocellular laminae. M- and P-cells receive relatively balanced input from both L- and M-cones throughout most of the retina, although this seems not to be the case at the fovea, with midget cells synapsing in the P-laminae. The koniocellular laminae receive axons from the small bistratified ganglion cells.
After synapsing at the LGN, the visual tract continues on back to the primary visual cortex (V1) located at the back of the brain within the occipital lobe. Within V1 there is a distinct band (striation). This is also referred to as "striate cortex", with other cortical visual regions referred to collectively as "extrastriate cortex". It is at this stage that color processing becomes much more complicated.
In V1 the simple three-color segregation begins to break down. Many cells in V1 respond to some parts of the spectrum better than others, but this "color tuning" is often different depending on the adaptation state of the visual system. A given cell that might respond best to long-wavelength light if the light is relatively bright might then become responsive to all wavelengths if the stimulus is relatively dim. Because the color tuning of these cells is not stable, some believe that a different, relatively small, population of neurons in V1 is responsible for color vision. These specialized "color cells" often have receptive fields that can compute local cone ratios. Such "double-opponent" cells were initially described in the goldfish retina by Nigel Daw; their existence in primates was suggested by David H. Hubel and Torsten Wiesel, first demonstrated by C.R. Michael and subsequently confirmed by Bevil Conway. As Margaret Livingstone and David Hubel showed, double opponent cells are clustered within localized regions of V1 called blobs, and are thought to come in two flavors, red–green and blue-yellow. Red–green cells compare the relative amounts of red–green in one part of a scene with the amount of red–green in an adjacent part of the scene, responding best to local color contrast (red next to green). Modeling studies have shown that double-opponent cells are ideal candidates for the neural machinery of color constancy explained by Edwin H. Land in his retinex theory.
From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3: the dorsal posterior inferior temporal cortex and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentrations of such cells, though even the latter cells respond better to some wavelengths than to others. The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color, but it is worth noting that the orientation-selective cells within V4 are more broadly tuned than their counterparts in V1, V2, and V3. Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space.
Anatomical studies have shown that neurons in extended V4 provide input to the inferior temporal lobe. "IT" cortex is thought to integrate color information with shape and form, although it has been difficult to define the appropriate criteria for this claim. Despite this murkiness, it has been useful to characterize this pathway (V1 > V2 > V4 > IT) as the ventral stream or the "what pathway", distinguished from the dorsal stream ("where pathway") that is thought to analyze motion, among other features.
Subjectivity of color perception
Color is a feature of visual perception by an observer. There is a complex relationship between the wavelengths of light in the visual spectrum and human experiences of color. Although most people are assumed to have the same mapping, the philosopher John Locke recognized that alternatives are possible, and described one such hypothetical case with the "inverted spectrum" thought experiment. For example, someone with an inverted spectrum might experience green while seeing 'red' (700 nm) light, and experience red while seeing 'green' (530 nm) light. This inversion has, however, never been demonstrated experimentally.
Synesthesia (or ideasthesia) provides some atypical but illuminating examples of subjective color experience triggered by input that is not even light, such as sounds or shapes. The possibility of a clean dissociation between color experience and properties of the world reveals that color is a subjective psychological phenomenon.
The Himba people have been found to categorize colors differently from most Westerners and are able to easily distinguish close shades of green that are barely discernible to most people. The Himba have created a very different color scheme, which divides the spectrum into dark shades (zuzu in Himba), very light (vapa), vivid blue and green (buru), and dry colors, as an adaptation to their specific way of life.
The perception of color depends heavily on the context in which the perceived object is presented.
Psychophysical experiments have shown that color is perceived before the orientation of lines and directional motion by as much as 40 ms and 80 ms, respectively, thus leading to a perceptual asynchrony that is demonstrable with brief presentation times.
Chromatic adaptation
In color vision, chromatic adaptation refers to color constancy: the ability of the visual system to preserve the appearance of an object under a wide range of light sources. For example, a white page under blue, pink, or purple light will reflect mostly blue, pink, or purple light to the eye, respectively; the brain, however, compensates for the effect of lighting (based on the color shift of surrounding objects) and is more likely to interpret the page as white under all three conditions.
In color science, chromatic adaptation is the estimation of the representation of an object under a different light source from the one in which it was recorded. A common application is to find a chromatic adaptation transform (CAT) that will make the recording of a neutral object appear neutral (color balance), while keeping other colors also looking realistic. For example, chromatic adaptation transforms are used when converting images between ICC profiles with different white points. Adobe Photoshop, for example, uses the Bradford CAT.
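As a concrete sketch of the kind of transform described above, the snippet below implements a basic Bradford chromatic adaptation. The matrix and the D65/D50 white points are standard published values reproduced here from memory, so treat them as illustrative rather than authoritative, and note that production color management pipelines add further steps (e.g., partial-adaptation factors) not shown:

```python
import numpy as np

# The Bradford matrix maps CIE XYZ into a "cone-like" response space in
# which adaptation is modeled as a per-channel gain change.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_cat(xyz, white_src, white_dst):
    """Adapt a CIE XYZ color from one illuminant's white point to another's."""
    cone_src = M_BRADFORD @ white_src   # source white in cone-like space
    cone_dst = M_BRADFORD @ white_dst   # destination white in cone-like space
    scale = np.diag(cone_dst / cone_src)
    return np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD @ xyz

D65 = np.array([0.95047, 1.00000, 1.08883])  # standard daylight white point
D50 = np.array([0.96422, 1.00000, 0.82521])  # standard print white point

# A neutral gray under D65 maps to the corresponding neutral under D50:
print(bradford_cat(0.5 * D65, D65, D50))     # ~0.5 * D50
```

By construction, the source white maps exactly onto the destination white, which is the "a neutral object appears neutral" behavior the paragraph above describes.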
Color vision in nonhumans
Many species can see light with frequencies outside the human "visible spectrum". Bees and many other insects can detect ultraviolet light, which helps them to find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to ultraviolet "colors" and patterns rather than how colorful they appear to humans. Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range, however, cannot see red light or any other reddish wavelengths. For example, bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds, however, can see some red wavelengths, although not as far into the light spectrum as humans. It is a myth that the common goldfish is the only animal that can see both infrared and ultraviolet light; their color vision extends into the ultraviolet but not the infrared.
The basis for this variation is the number of cone types that differ between species. Mammals, in general, have a color vision of a limited type, and usually have red–green color blindness, with only two types of cones. Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles, and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans.
In most Catarrhini (Old World monkeys and apes—primates closely related to humans), there are three types of color receptors (known as cone cells), resulting in trichromatic color vision. These primates, like humans, are known as trichromats. Many other primates (including New World monkeys) and other mammals are dichromats, which is the general color vision state for mammals that are active during the day (i.e., felines, canines, ungulates). Nocturnal mammals may have little or no color vision. Trichromat non-primate mammals are rare.
Many invertebrates have color vision. Honeybees and bumblebees have trichromatic color vision which is insensitive to red but sensitive to ultraviolet. Osmia rufa, for example, possesses a trichromatic color system, which it uses in foraging for pollen from flowers. In view of the importance of color vision to bees, one might expect these receptor sensitivities to reflect their specific visual ecology, for example the types of flowers that they visit. However, the main groups of hymenopteran insects excluding ants (i.e., bees, wasps and sawflies) mostly have three types of photoreceptor, with spectral sensitivities similar to the honeybee's. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision. The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp), which have between 12 and 16 spectral receptor types thought to work as multiple dichromatic units.
Vertebrate animals such as tropical fish and birds sometimes have more complex color vision systems than humans; thus the many subtle colors they exhibit generally serve as direct signals for other fish or birds, and not to signal mammals. In bird vision, tetrachromacy is achieved through up to four cone types, depending on species. Each single cone contains one of the four main types of vertebrate cone photopigment (LWS/MWS, RH2, SWS2 and SWS1) and has a colored oil droplet in its inner segment. Brightly colored oil droplets inside the cones shift or narrow the spectral sensitivity of the cell. Pigeons may be pentachromats.
Reptiles and amphibians also have four cone types (occasionally five), and probably see at least the same number of colors that humans do, or perhaps more. In addition, some nocturnal geckos and frogs have the capability of seeing color in dim light. At least some color-guided behaviors in amphibians have also been shown to be wholly innate, developing even in visually deprived animals.
In the evolution of mammals, segments of color vision were lost, then for a few species of primates, regained by gene duplication. Eutherian mammals other than primates (for example, dogs, mammalian farm animals) generally have less-effective two-receptor (dichromatic) color perception systems, which distinguish blue, green, and yellow—but cannot distinguish oranges and reds. There is some evidence that a few mammals, such as cats, have redeveloped the ability to distinguish longer wavelength colors, in at least a limited way, via one-amino-acid mutations in opsin genes. The adaptation to see reds is particularly important for primate mammals, since it leads to the identification of fruits, and also newly sprouting reddish leaves, which are particularly nutritious.
However, even among primates, full color vision differs between New World and Old World monkeys. Old World primates, including monkeys and all apes, have vision similar to humans. New World monkeys may or may not have color sensitivity at this level: in most species, males are dichromats, and about 60% of females are trichromats, but the owl monkeys are cone monochromats, and both sexes of howler monkeys are trichromats. Visual sensitivity differences between males and females in a single species are due to the gene for the yellow-green sensitive opsin protein (which confers the ability to differentiate red from green) residing on the X sex chromosome.
Several marsupials, such as the fat-tailed dunnart (Sminthopsis crassicaudata), have trichromatic color vision.
Marine mammals, adapted for low-light vision, have only a single cone type and are thus monochromats.
Evolution
Color perception mechanisms are highly dependent on evolutionary factors, of which the most prominent is thought to be satisfactory recognition of food sources. In herbivorous primates, color perception is essential for finding proper (immature) leaves. In hummingbirds, particular flower types are often recognized by color as well. On the other hand, nocturnal mammals have less-developed color vision since adequate light is needed for cones to function properly. There is evidence that ultraviolet light plays a part in color perception in many branches of the animal kingdom, especially insects. In general, the optical spectrum encompasses the most common electronic transitions in matter and is therefore the most useful for collecting information about the environment.
The evolution of trichromatic color vision in primates occurred as the ancestors of modern monkeys, apes, and humans switched to diurnal (daytime) activity and began consuming fruits and leaves from flowering plants. Color vision, with UV discrimination, is also present in a number of arthropods—the only terrestrial animals besides the vertebrates to possess this trait.
Some animals can distinguish colors in the ultraviolet spectrum. The UV spectrum falls outside the human visible range, except for some cataract surgery patients. Birds, turtles, lizards, many fish and some rodents have UV receptors in their retinas. These animals can see the UV patterns found on flowers and other wildlife that are otherwise invisible to the human eye.
Ultraviolet vision is an especially important adaptation in birds. It allows birds to spot small prey from a distance, navigate, avoid predators, and forage while flying at high speeds. Birds also utilize their broad spectrum vision to recognize other birds, and in sexual selection.
Mathematics of color perception
A "physical color" is a combination of pure spectral colors (in the visible range). In principle there exist infinitely many distinct spectral colors, and so the set of all physical colors may be thought of as an infinite-dimensional vector space (a Hilbert space). This space is typically notated Hcolor. More technically, the space of physical colors may be considered to be the topological cone over the simplex whose vertices are the spectral colors, with white at the centroid of the simplex, black at the apex of the cone, and the monochromatic color associated with any given vertex somewhere along the line from that vertex to the apex depending on its brightness.
An element C of H_color is a function from the range of visible wavelengths—considered as an interval of real numbers [W_min, W_max]—to the real numbers, assigning to each wavelength w in [W_min, W_max] its intensity C(w).
A humanly perceived color may be modeled as three numbers: the extents to which each of the 3 types of cones is stimulated. Thus a humanly perceived color may be thought of as a point in 3-dimensional Euclidean space. We call this space R³_color.
Since each wavelength w stimulates each of the 3 types of cone cells to a known extent, these extents may be represented by 3 functions s(w), m(w), l(w) corresponding to the response of the S, M, and L cone cells, respectively.
Finally, since a beam of light can be composed of many different wavelengths, to determine the extent to which a physical color C in H_color stimulates each cone cell, we must calculate the integral (with respect to w), over the interval [W_min, W_max], of C(w)·s(w), of C(w)·m(w), and of C(w)·l(w). The triple of resulting numbers associates with each physical color C (which is an element in H_color) a particular perceived color (which is a single point in R³_color). This association is easily seen to be linear. It may also easily be seen that many different elements in the "physical" space H_color can all result in the same single perceived color in R³_color, so a perceived color is not unique to one physical color.
Thus human color perception is determined by a specific, non-unique linear mapping from the infinite-dimensional Hilbert space H_color to the 3-dimensional Euclidean space R³_color.
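A minimal numerical sketch of this mapping, with made-up Gaussian curves standing in for the measured cone sensitivities s(w), m(w) and l(w) (the real curves are tabulated experimental data, not Gaussians):

```python
import numpy as np

w = np.linspace(380.0, 700.0, 3201)    # visible wavelengths, nm
dw = w[1] - w[0]

# Illustrative Gaussian stand-ins for the cone sensitivities.
def bump(center, width):
    return np.exp(-0.5 * ((w - center) / width) ** 2)

s_w, m_w, l_w = bump(445, 25), bump(540, 40), bump(565, 45)

# One physical color C(w): a flat spectrum plus a peak near 600 nm.
C = 0.2 + 0.8 * bump(600, 20)

# The linear map H_color -> R^3_color: integrate C(w) against each sensitivity.
S = np.sum(C * s_w) * dw
M = np.sum(C * m_w) * dw
L = np.sum(C * l_w) * dw
print(S, M, L)    # the perceived-color triple for this spectrum
```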
Technically, the image of the (mathematical) cone over the simplex whose vertices are the spectral colors, by this linear mapping, is also a (mathematical) cone in R³_color. Moving directly away from the vertex of this cone represents maintaining the same chromaticity while increasing its intensity. Taking a cross-section of this cone yields a 2D chromaticity space. Both the 3D cone and its projection or cross-section are convex sets; that is, any mixture of spectral colors is also a color.
In practice, it would be quite difficult to physiologically measure an individual's three cone responses to various physical color stimuli. Instead, a psychophysical approach is taken. Three specific benchmark test lights are typically used; let us call them S, M, and L. To calibrate human perceptual space, scientists allowed human subjects to try to match any physical color by turning dials to create specific combinations of intensities (I_S, I_M, I_L) for the S, M, and L lights, respectively, until a match was found. This needed only to be done for physical colors that are spectral, since a linear combination of spectral colors will be matched by the same linear combination of their (I_S, I_M, I_L) matches. Note that in practice, often at least one of S, M, L would have to be added with some intensity to the physical test color, and that combination matched by a linear combination of the remaining 2 lights. Across different individuals (without color blindness), the matchings turned out to be nearly identical.
By considering all the resulting combinations of intensities (I_S, I_M, I_L) as a subset of 3-space, a model for human perceptual color space is formed. (Note that when one of S, M, L had to be added to the test color, its intensity was counted as negative.) Again, this turns out to be a (mathematical) cone: not a quadric cone, but the set of all rays through the origin in 3-space passing through a certain convex set. Again, this cone has the property that moving directly away from the origin corresponds to increasing the intensity of the S, M, L lights proportionately. Again, a cross-section of this cone is a planar shape that is (by definition) the space of "chromaticities" (informally: distinct colors); one particular such cross-section, corresponding to constant X+Y+Z of the CIE 1931 color space, gives the CIE chromaticity diagram.
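As a small worked example of such a cross-section, normalizing CIE tristimulus values by X+Y+Z yields the familiar (x, y) chromaticity coordinates, which are invariant under intensity scaling:

```python
def chromaticity(X, Y, Z):
    """Project a tristimulus point onto the constant X+Y+Z cross-section,
    giving CIE 1931 (x, y) chromaticity coordinates."""
    total = X + Y + Z
    return X / total, Y / total

print(chromaticity(0.9505, 1.0, 1.0888))   # D65 white: ~(0.313, 0.329)
print(chromaticity(1.9010, 2.0, 2.1776))   # doubled intensity, same (x, y)
```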
This system implies that for any hue or non-spectral color not on the boundary of the chromaticity diagram, there are infinitely many distinct physical spectra that are all perceived as that hue or color. So, in general, there is no such thing as the combination of spectral colors that we perceive as (say) a specific version of tan; instead, there are infinitely many possibilities that produce that exact color. The boundary colors that are pure spectral colors can be perceived only in response to light that is purely at the associated wavelength, while the boundary colors on the "line of purples" can each only be generated by a specific ratio of the pure violet and the pure red at the ends of the visible spectral colors.
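This metamerism can be demonstrated numerically. In the sketch below, a random 3×N matrix stands in for the linear map from spectra to perceived colors (an assumption for illustration only); adding a null-space direction to a spectrum changes the physical color but not the perceived one:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 31))      # stand-in linear map: 31 wavelength bins -> 3 cone responses

spectrum = rng.random(31)    # one physical color, sampled at 31 wavelengths

# Any vector in the null space of A alters the spectrum without altering
# the perceived 3-number color, i.e. it produces a metamer.  (A physically
# realizable metamer must also keep the spectrum nonnegative.)
_, _, Vt = np.linalg.svd(A)
null_dir = Vt[-1]            # a direction with A @ null_dir ~= 0

metamer = spectrum + 0.1 * null_dir
print(np.allclose(A @ spectrum, A @ metamer))   # True: same perceived color
```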
The CIE chromaticity diagram is horseshoe-shaped, with its curved edge corresponding to all spectral colors (the spectral locus), and the remaining straight edge corresponding to the most saturated purples, mixtures of red and violet.
| Biology and health sciences | Nervous system | null |
28642445 | https://en.wikipedia.org/wiki/Balaur%20bondoc | Balaur bondoc | Balaur bondoc is a species of paravian theropod dinosaur from the Late Cretaceous period, in what is now Romania. It is the type species of the monotypic genus Balaur, named after the balaur, a dragon of Romanian folklore. The specific name bondoc means "stocky", so Balaur bondoc means "stocky dragon" in Romanian. This name refers to the greater musculature that Balaur had compared to its relatives. The genus, which was first described by scientists in August 2010, is known from two partial skeletons (including the type specimen). Some researchers suggest that the taxon might represent a junior synonym of Elopteryx.
Fossils of Balaur were found in the Densuș-Ciula and Sebeș Formations of Cretaceous Romania, which correspond to Hațeg Island, a subtropical island in the European archipelago of the Tethys Sea approximately 70 million years ago. Hațeg Island is commonly referred to as the "Island of the Dwarf Dinosaurs" on account of the extensive fossil evidence that its native dinosaurs exhibited island syndrome, a collection of morphological, ecological, physiological and behavioural differences compared with their continental counterparts. Examples include the island gigantism of Hatzegopteryx, the island dwarfism of the titanosaur Magyarosaurus dacus, and a reduction in flight capacity in Balaur.
Phylogenetic analysis suggests Balaur may have been a basal avialan, a member of the group that includes modern birds, though some researchers still place the genus within the dromaeosaurid dinosaurs, either in Velociraptorinae, as in its original description, or in Unenlagiinae. This reduction in flight capacity is also seen in extant island birds including the ratites and insular barn owls as well as the extinct moa of New Zealand and the extinct dodo of Mauritius.
Discovery and naming
The first small bones belonging to Balaur bondoc consisted of six elements of the front limbs. Catalogued as specimens FGGUB R. 1580–1585, these were discovered in 1997 in Romania by Dan Grigorescu, but the morphology of the arm was so unusual that scientists could not correctly interpret them, mistaking them for the remains of an oviraptorosaur. The first partial skeleton was discovered in September 2009 in Romania, approximately 2.5 kilometers north of Sebeș, along the Sebeș river in the Sebeș Formation dating from the early Maastrichtian, and was given the preliminary field number SbG/A-Sk1. Later it received the holotype inventory number EME VP.313. The discovery was made by the geologist and paleontologist Mátyás Vremir of the Transylvanian Museum Society of Cluj Napoca, who sent the remains for analysis to Zoltán Csiki of the University of Bucharest. The findings were described on August 31, 2010, in the Proceedings of the National Academy of Sciences. The 1997 specimens indicate an individual about 45% longer than the holotype; they were also found in a younger stratum.
The generic name Balaur (three syllables, stressed on the second /a/) is from the Romanian word for a dragon of Romanian folklore, while the specific epithet bondoc (meaning "a squat, chubby individual") refers to the small, robust shape of the animal. As the mythological creature Balaur is a winged dragon, the name additionally hints at the close relation of the genus Balaur to the birds within Panaves. The species name bondoc was chosen by the discoverers also because it is derived from the Turkish bunduk, "small ball", thus alluding to the probable Asian origin of the ancestors of Balaur.
Description
Balaur is a genus of theropod dinosaurs estimated to have lived about 70 million years ago in the Late Cretaceous (Maastrichtian), and contains the single species B. bondoc. The bones of this species were shorter and heavier than those of basal paravians. While the feet of most early paravians bore a single, large "sickle claw" on the second toe, which was held retracted off the ground, Balaur had large retractable sickle claws on both the first and second toes of each foot. In addition to its strange feet, the type specimen of Balaur is notable as the most complete theropod fossil from the Late Cretaceous of Europe. It also possesses a great number of additional autapomorphies, including a reduced and presumably nonfunctional third finger, consisting of only one rudimentary phalanx.
The partial skeleton was collected from the red floodplain mudstone of the Sebeș Formation of Romania. It consists of a variety of vertebrae, as well as much of pectoral and pelvic girdles, and a large part of the limbs. It is the first reasonably complete and well-preserved theropod from the Late Cretaceous of Europe.
It is similar in size to Velociraptor, with Balaur's recovered skeletal elements suggesting an overall length of around and a body mass of . Balaur had re-evolved a functional first toe used to support its weight, which bore a large claw that could be hyperextended. It had short and stocky feet and legs, and large muscle attachment areas on the pelvis, which indicate that it was adapted for strength rather than speed. Csiki et al. describe this "novel body plan" as "a dramatic example of aberrant morphology developed in island-dwelling taxa." The stocky feet are exemplified by the metatarsus, which is only about twice as long as it is wide and is 1.5 times wider than the lower leg. Both traits are unique in the Theropoda. The skeleton of Balaur also shows extensive fusion of limb bones. Wrist bones and the metacarpals are fused into a carpometacarpus. The pelvic bones are fused. The shinbone, calf bone and the upper ankle bones have been fused into a tibiotarsus, and the lower ankle bones and the metatarsals into a tarsometatarsus. The degree of fusion is typical for the Avialae, the evolutionary branch of the birds and their direct relatives.
Classification
The position of Balaur relative to other bird-like dinosaurs and early birds has been difficult to determine. The initial phylogenetic analysis placed Balaur bondoc closest to the Asiatic mainland dromaeosaurid species Velociraptor mongoliensis. A 2013 study by Brusatte and colleagues, using a modified version of the same data, found it in an unresolved close relationship with the dromaeosaurids Deinonychus and Adasaurus, with some possible alternative trees suggesting it branched off before the common ancestor of Deinonychus and Velociraptor, while others maintained it as the closest relative of Velociraptor, with Adasaurus as their next closest relative.
More recent analyses using different sets of anatomical data have since cast doubt on a dromaeosaurid classification for Balaur. In 2013, a larger analysis containing a wide variety of coelurosaurs found that Balaur was not a dromaeosaurid at all, but a basal avialan, more closely related to modern birds than to Jeholornithiformes but more basal than Omnivoropterygiformes. A study published in 2014 found Balaur to be sister to Pygostylia. An independent analysis using an expanded version of the original data set (the one that had found Balaur to be a dromaeosaurid) drew a similar conclusion in 2014. In 2015, researchers Andrea Cau, Tom Brougham, and Darren Naish published a study which specifically attempted to clarify which theropods were close relatives of Balaur. While their analysis could not completely rule out the possibility that B. bondoc was a dromaeosaurid, they concluded that this result was less likely than the classification of Balaur as a non-pygostylian avialan, based on several important bird-like features. Many of the presumed unique traits would in fact have been normal for a member of the Avialae. Typical bird features included the degree of fusion of the limb bones, the functional first toe, the first toe claw not being smaller than the second claw, a long penultimate phalanx of the third toe, a small fourth toe claw and a long fifth metatarsal. Some researchers have continued to classify Balaur as a dromaeosaurid, with two separate studies published in 2021 placing Balaur within the Velociraptorinae, and a 2025 phylogenetic analysis recovering Balaur within Unenlagiinae.
Some researchers claim that Balaur may represent a junior synonym of Elopteryx. Brusatte and colleagues first mentioned the possibility in 2013, though they did not consider it the most likely case. In 2019, Mayr and colleagues claimed that the synonymy remains possible and more work is needed for confirmation. They also noted similarities with Gargantuavis and Elopteryx, indicating that the three taxa form a clade native to the Late Cretaceous European archipelago. In 2024, Stoicescu and colleagues suggested that Elopteryx is a member of the Avialae based on the new specimen from Romania, and that Balaur bondoc is probably a junior synonym of Elopteryx.
Paleobiology
Diet and lifestyle
Little is known about the behavior of Balaur. Because of the lack of skull material, it is impossible to determine from the shape of the teeth whether Balaur was a carnivore or a herbivore. The original description assumed it was carnivorous because it had been found that it was closely related to Velociraptor. Csiki speculated in 2010 that it may have been one of the apex predators in its limited island ecosystem, as neither the skeletons nor teeth of larger theropods have been discovered in Romania. He also believed that it likely used its double sickle claws for slashing prey, and that the atrophied state of its hands indicates that it probably did not use them to hunt. One of the original discoverers indicated that it "was probably more of a kickboxer than a sprinter" compared to Velociraptor, and was probably able to hunt animals larger than itself. However, more recent studies by Denver Fowler and others have shown that the foot anatomy of paravians like Balaur indicates that they used their large claws to grip and pin prey to the ground while flapping with their proto-wings to stay on top of their victim. Once the prey was worn out, they might have proceeded to feast while it was still alive, as some modern birds of prey still do. Due to the shape of the claws, they would not have been effective in slashing attacks. The very short, fused metatarsus of Balaur and its enlarged first claw, strange even by true dromaeosaur standards, are thought to be consistent with these newer studies, lending further support to the idea that Balaur was a predator.
Italian paleontologist Andrea Cau has speculated that the aberrant features present in Balaur may have been a result of this theropod being omnivorous or herbivorous rather than carnivorous like most non-avian theropods. The lack of the third finger may be a sign of reduced predatory behavior, and the robust first toe could be interpreted as a weight-supporting adaptation rather than a weapon. These characteristics are consistent with the relatively short, stocky limbs and wide, swept-back pubis, which may indicate enlarged intestines for digesting vegetation as well as reduced speed. Cau referred to this as the "Dodoraptor" model. However, in light of the research done by Fowler et al., Cau has remarked that the anatomy of Balaur may be more congruent with the hypothesis that Balaur was predatory after all.
In 2015, Cau et al. reconsidered the ecology of Balaur in their reevaluation of its phylogenetic position, arguing that if Balaur was an avialan, it would be phylogenetically bracketed by taxa known to have been herbivorous, such as Sapeornis and Jeholornis. This suggests that a non-hypercarnivorous lifestyle is the more parsimonious conclusion and supports Cau's initial interpretation of its specializations. This is also indicated by the reduced third finger, the lack of a ginglymoid lower articulation of the second metatarsal, and the rather small and moderately recurved second toe claw. Balaur had a broad pelvis, a broad foot, a large first toe, and broad lower ends of the metatarsals relative to the articulation surfaces; among other theropods, such a combination is found only in the herbivorous Therizinosauridae.
Island syndrome
During the Maastrichtian age, much of Europe was fragmented into islands, and a number of the bizarre morphologies of Balaur are thought to be a result of island syndrome. This describes the differences in morphology, ecology, physiology and behaviour of island species like Balaur compared to their continental counterparts, a result of the different selection pressures acting on islands. One common effect is Foster's rule, which describes how small mainland species become larger and large mainland species become smaller. This is seen in other taxa from Hațeg Island including the pterosaur Hatzegopteryx, which exhibited island gigantism, and the titanosaur Magyarosaurus dacus, which exhibited island dwarfism. However, Balaur appears to have had a body size comparable to other basal avialans and closely related dromaeosaurid dinosaurs. Balaur appears to have exhibited other features of island syndrome, most notably a reduced capacity for flight compared to other basal avialans. This reduction in flight capacity is also seen in extant island birds including the ratites and insular barn owls as well as the extinct moa of New Zealand and the extinct dodo of Mauritius.
In addition to island syndrome, species isolated on islands are also affected by genetic drift and the founder effect to a greater degree due to the small effective population size. This can magnify the effects of mutations which may otherwise be diluted in a larger population and may have given rise to some of the neomorphisms seen in Balaur like the retractable claw on its first toe.
In 2010, the increased robustness of Balaur was compared to parallel changes seen in isolated herbivorous mammals. In 2013, it was claimed that Balaur was the only predatory vertebrate known to have become more robust after invading an island niche, and it was suggested that its broad feet had evolved to improve postural stability. The 2015 interpretation of Balaur as an omnivorous member of the Avialae suggested it was the descendant of a flying species that had developed a larger size, similar to the development of several other island herbivores. This would then be a rare instance of a secondarily flightless paravian coming to resemble a dromaeosaurid, as predicted by Gregory S. Paul.
| Biology and health sciences | Prehistoric birds | Animals |
18859466 | https://en.wikipedia.org/wiki/Headland | Headland | A headland, also known as a head, is a coastal landform, a point of land usually high and often with a sheer drop, that extends into a body of water. It is a type of promontory. A headland of considerable size often is called a cape. Headlands are characterised by high, breaking waves, rocky shores, intense erosion, and steep sea cliffs.
Headlands and bays are often found on the same coastline. A bay is flanked by land on three sides, whereas a headland is flanked by water on three sides. Headlands and bays form on discordant coastlines, where bands of rock of alternating resistance run perpendicular to the coast. Bays form when weak (less resistant) rocks (such as sands and clays) are eroded, leaving bands of stronger (more resistant) rocks (such as chalk, limestone, and granite) forming a headland, or peninsula. Through the deposition of sediment within the bay and the erosion of the headlands, coastlines eventually straighten out, then start the same process all over again.
List of notable headlands
Africa
, Mauritania
Cap-Vert, Senegal
Cape Agulhas, South Africa, Africa's southernmost point
Cape Bojador, Morocco
Cape Correntes, Mozambique
Cape Delgado, Mozambique
Cape Juby, Morocco
Cape Malabata, Morocco
Cape Spartel, Morocco
Cape of Good Hope, South Africa
, Tunisia, Africa's northernmost point
Asia
Beirut, Lebanon
Cabo de Rama, Goa, India
Cape Comorin or Kanyakumari, Tamil Nadu, India
Cape Dezhnev, Russia
Cape Engaño, Philippines
Cape of Fire, Central Sulawesi, Indonesia
Coconut Tree Hill, Mirissa, Sri Lanka
Indira Point, Andaman and Nicobar Islands, India
Europe
Beachy Head, England
, Portugal, the western tip of mainland Europe
Cap Gris-Nez, France
Cape Arkona, Germany
Cape Emine, Bulgaria
Cape Enniberg, Faroe Islands
Cape Finisterre, Galicia, Spain
Cape Greco, Cyprus
Cape Kaliakra, Bulgaria
Cape St. Vincent/Sagres Point, Portugal, the southwestern tip of mainland Europe
Cape Tainaron, Greece, the southernmost tip of mainland Europe
Cape Wrath, Scotland
Dungeness, England
Gibraltar, British Overseas Territory
Great Orme, Wales
Hengistbury Head, England
Land's End, Cornwall, England
Mull of Kintyre, Scotland
North Cape, Norway, the northern tip of mainland Europe
Pointe du Raz, France
St Bees Head, UK, the most westerly point of northern England
North America
Canada
Cape Chidley, Newfoundland and Labrador/Nunavut
Cape Columbia, Nunavut, Canada's northernmost point
Cape Freels, Newfoundland and Labrador
Cape Norman, Newfoundland and Labrador
Cape Spear, Newfoundland and Labrador, Canada's easternmost point
Cape Tormentine, New Brunswick
Leslie Street Spit, Toronto, Ontario (man-made landform)
Greenland
Cape Farewell, Greenland's southernmost point
Cape Morris Jesup, Greenland's northernmost point
Mexico
, Baja California Sur, Mexico
United States
Cape Ann, Massachusetts
Cape Canaveral, Florida
Cape Charles, Virginia
Cape Cod, Massachusetts
Cape Disappointment, Washington
Cape Fear, North Carolina
Cape Flattery, Washington
Cape Hatteras, North Carolina
Cape Henlopen, Delaware
Cape Henry, Virginia
Cape May, New Jersey
Cape Mendocino, California
Cape Prince of Wales, Alaska
Cascade Head, Oregon
Heceta Head, Oregon
Hilton Head, South Carolina
Marin Headlands, California
Mount Mitchill, New Jersey
North Shore, Lake Superior, Minnesota
Point Reyes, California
West Quoddy Head, Maine
Oceania
Australia
Cape Leeuwin, Western Australia
Cape York Peninsula, Queensland
South East Cape, Tasmania
South West Cape, Tasmania
Sydney Heads, New South Wales
Barrenjoey, New South Wales
New Zealand
Cape Egmont
Cape Foulwind
Cape Reinga
East Cape
North Cape
Young Nick's Head
United States (Hawaii)
Diamond Head, Hawaii
Koko Head, Hawaii
South America
Cape Froward, Chile
Cape Horn, Chile, South America's southernmost point
Cape Virgenes, Argentina
| Physical sciences | Oceanic and coastal landforms | Earth science |
515758 | https://en.wikipedia.org/wiki/Fungicide | Fungicide | Fungicides are pesticides used to kill parasitic fungi or their spores. Fungi can cause serious damage in agriculture, resulting in losses of yield and quality. Fungicides are used both in agriculture and to fight fungal infections in animals. Fungicides are also used to control oomycetes, which are not taxonomically/genetically fungi, although sharing similar methods of infecting plants. Fungicides can either be contact, translaminar or systemic. Contact fungicides are not taken up into the plant tissue and protect only the plant where the spray is deposited. Translaminar fungicides redistribute the fungicide from the upper, sprayed leaf surface to the lower, unsprayed surface. Systemic fungicides are taken up and redistributed through the xylem vessels. Few fungicides move to all parts of a plant. Some are locally systemic, and some move upward.
Most fungicides that can be bought retail are sold in liquid form, the active ingredient being present at 0.08% in weaker concentrates and as high as 0.5% in more concentrated products. Fungicides in powdered form are usually around 90% sulfur.
Major fungi in agriculture
Some major threats to agriculture (and examples of the associated diseases) are the ascomycetes (powdery mildews), the basidiomycetes (various rusts), the deuteromycetes (an informal group of asexual fungi), and the fungus-like oomycetes (downy mildews and potato late blight).
Types of fungicides
Like other pesticides, fungicides are numerous and diverse. This diversity has led to several schemes for classifying them. Classifications are based on composition (inorganic, e.g. elemental sulfur and copper salts, vs organic), on chemical structure (e.g. dithiocarbamates vs phthalimides), and, most successfully, on mechanism of action (MOA). These respective classifications reflect the evolution of the underlying science.
Traditional
Traditional fungicides are simple inorganic compounds like sulfur, and copper salts. While cheap, they must be applied repeatedly and are relatively ineffective. Other active ingredients in fungicides include neem oil, rosemary oil, jojoba oil, the bacterium Bacillus subtilis, and the beneficial fungus Ulocladium oudemansii.
Nonspecific
In the 1930s dithiocarbamate-based fungicides, the first organic compounds used for this purpose, became available. These include ferbam, ziram, zineb, maneb, and mancozeb. These compounds are non-specific and are thought to inhibit cysteine-based protease enzymes. Similarly nonspecific are N-substituted phthalimides. Members include captafol, captan, and folpet. Chlorothalonil is also non-specific.
Specific
Specific fungicides target a particular biological process in the fungus.
Nucleic acid metabolism
bupirimate
metalaxyl
Cytoskeleton and motor proteins
carbendazim
pencycuron
Respiration
Some fungicides target succinate dehydrogenase, a metabolically central enzyme. Fungi of the class Basidiomycetes, which include important pathogens of cereals, were the initial focus of these fungicides.
azoxystrobin
binapacryl
boscalid
carboxin
cyazofamid
pydiflumetofen
Amino acid and protein synthesis
blasticidin-S
kasugamycin
pyrimethanil
Signal transduction
fludioxonil
procymidone
Lipid synthesis / membrane integrity
propamocarb
pyrazophos
tecnazene
Melanin synthesis in cell wall
tricyclazole
Sterol biosynthesis in membranes
fenpropimorph
hexaconazole
imazalil
myclobutanil
propiconazole
Cell wall biosynthesis
dimethomorph
polyoxins
Host plant defence induction
acibenzolar
fosetyl-Al
phosphorous acid
Mycoviruses
Some of the most common fungal crop pathogens are known to suffer from mycoviruses, and it is likely that they are as common as for plant and animal viruses, although not as well studied. Given the obligately parasitic nature of mycoviruses, it is likely that all of these are detrimental to their hosts, and thus are potential biocontrols/biofungicides.
Resistance
Doses that provide the most control of the disease also provide the largest selection pressure to acquire resistance.
In some cases, the pathogen evolves resistance to multiple fungicides, a phenomenon known as cross resistance. These additional fungicides typically belong to the same chemical family, act in the same way, or have a similar mechanism for detoxification. Sometimes negative cross-resistance occurs, where resistance to one chemical class of fungicides increases sensitivity to a different chemical class of fungicides. This has been seen with carbendazim and diethofencarb. Also possible is resistance to two chemically different fungicides by separate mutation events. For example, Botrytis cinerea is resistant to both azoles and dicarboximide fungicides.
A common mechanism for acquiring resistance is alteration of the target enzyme. For example, Black Sigatoka, an economically important pathogen of banana, is resistant to the QoI fungicides, due to a single nucleotide change resulting in the replacement of one amino acid (glycine) by another (alanine) in the target protein of the QoI fungicides, cytochrome b. It is presumed that this disrupts the binding of the fungicide to the protein, rendering the fungicide ineffective. Upregulation of target genes can also render the fungicide ineffective. This is seen in DMI-resistant strains of Venturia inaequalis.
Resistance to fungicides can also be developed by efficient efflux of the fungicide out of the cell. Septoria tritici has developed multiple drug resistance using this mechanism. The pathogen has five ABC-type transporters with overlapping substrate specificities that together work to pump toxic chemicals out of the cell.
In addition to the mechanisms outlined above, fungi may also develop metabolic pathways that circumvent the target protein, or acquire enzymes that enable the metabolism of the fungicide to a harmless substance.
Fungicides that are at risk of losing their potency due to resistance include strobilurins such as azoxystrobin. Cross-resistance can occur because the active ingredients share a common mode of action. The Fungicide Resistance Action Committee (FRAC), which groups fungicides by mode of action for resistance management, is organized by CropLife International.
Safety
Fungicides pose risks for humans.
Fungicide residues have been found on food for human consumption, mostly from post-harvest treatments. Some fungicides are dangerous to human health, such as vinclozolin, which has now been removed from use. Ziram is also a fungicide that is toxic to humans with long-term exposure, and fatal if ingested. A number of fungicides are also used in human health care.
| Technology | Pest and disease control | null |
515933 | https://en.wikipedia.org/wiki/Charles%20Bridge | Charles Bridge | Charles Bridge is a medieval stone arch bridge that crosses the Vltava river in Prague, Czech Republic. Its construction started in 1357 under the auspices of King Charles IV, and finished in the early 15th century. The bridge replaced the old Judith Bridge built 1158–1172 that had been severely damaged by a flood in 1342. This new bridge was originally called Stone Bridge (Kamenný most) or Prague Bridge (Pražský most), but has been referred to as "Charles Bridge" since 1870.
As the only means of crossing the river Vltava until 1841, Charles Bridge was the most important connection between Prague Castle and the city's Old Town and adjacent areas. This land connection made Prague important as a trade route between Eastern and Western Europe.
The bridge is long and nearly wide. Following the example of the Stone Bridge in Regensburg, it was built as a bow bridge with 16 arches shielded by ice guards. It is protected by three bridge towers, two on the Lesser Quarter side (including the Malá Strana Bridge Tower) and one on the Old Town side, the Old Town Bridge Tower. The bridge is decorated by a continuous alley of 30 statues and statuaries, most of them baroque-style, originally erected around 1700, but now all have been replaced by replicas.
The bridge is currently undergoing a twenty-year process of structural inspections, restoration, and repairs. The process started in late 2019, and is expected to cost 45–60 million CZK (US$1.9–2.6 million).
History
14th to 19th centuries
Throughout its history, Charles Bridge has suffered several disasters and witnessed many historic events. Czech legend has it that construction began on Charles Bridge at 5:31 am on 9 July 1357, with the first stone being laid by Charles IV himself. This exact time was very important to the Holy Roman Emperor because he was a strong believer in numerology and felt that this specific time, which formed a palindrome (1357 9/7 5:31, the digit sequence 135797531), was a numerical bridge that would imbue Charles Bridge with additional strength. The bridge was completed 45 years later in 1402. A flood in 1432 damaged three pillars. In 1496 the third arch (counting from the Old Town side) broke down after one of the pillars subsided, having been undermined by the water (repairs were finished in 1503). A year after the Battle of White Mountain, when the 27 leaders of the anti-Habsburg revolt were executed on 21 June 1621, the Old Town Bridge Tower served as a deterrent display of the severed heads of the victims, to stop Czechs from further resistance. During the end of the Thirty Years' War in 1648, the Swedes occupied the west bank of the Vltava, and as they tried to advance into the Old Town the heaviest fighting took place right on the bridge. During the fighting, they severely damaged one side of the Old Town bridge tower (the side facing the river) and the remnants of almost all gothic decorations had to be removed from it afterward. During the late 17th century and early 18th century the bridge gained its typical appearance when an alley of baroque statues was installed on the pillars. During a great flood in 1784, five pillars were severely damaged and, although the arches did not break down, traffic on the bridge had to be greatly restricted for some time.
The original stairway to Kampa Island was replaced by a new one in 1844. The next year, another great flood threatened the bridge, but the bridge escaped major damage. In 1848, during the revolutionary days, the bridge escaped unharmed from the cannonade, but some of the statues were damaged. In 1866, pseudo-gothic gas lights were erected on the balustrade; they were later replaced with electric lighting. In the 1870s, the first regular public-transport (omnibus) line went over the bridge (officially called "Charles Bridge" after 1870) later replaced by a horse tram. The bridge towers underwent a thorough reconstruction between 1874 and 1883.
On 2–5 September 1890, another disastrous flood struck Prague and severely damaged Charles Bridge. Thousands of rafts, logs and other floating materials that escaped from places upstream gradually formed a huge barrier leaning against the bridge. Three arches were torn down by the great pressure and two pillars collapsed from being undermined by the water, while others were partly damaged. With the fifth pillar, two statues – St. Ignatius of Loyola and St. Xavier, both by Ferdinand Brokoff – also fell into the river. The former statue was replaced by a statuary of Saints Cyril and Methodius by Karel Dvořák; the latter was replaced by a replica of the original. Repair works lasted for two years (the bridge was reopened on 19 November 1892) and cost 665,000 crowns.
20th century to present
At the beginning of the 20th century, Charles Bridge saw a steep rise in heavy traffic. The last day of the horse line on the bridge was 15 May 1905, when it was replaced with an electric tram and later, in 1908, with buses. At the end of World War II, a barricade was built in the Old Town bridge tower gateway. A major repair of the bridge took place between 1965 and 1978, based on a collaboration among various scientific and cultural institutes. The stability of the pillars was secured, all broken stone blocks were replaced, and the asphalt top was removed. All vehicular traffic has been excluded from Charles Bridge since then, making it accessible to pedestrians only. The repair cost 50 million crowns.
During the 1990s, some people started criticizing the previous reconstruction of the bridge and proposing further work. As of the beginning of the new millennium, most of the experts appeared to agree that the previous reconstruction had not been flawless but disputed the need for further interference with the bridge. However, after the disastrous floods of 2002 (which themselves caused only minor harm to the bridge), support for an overall bridge reconstruction grew. It was decided that repair and stabilization of the two pillars (numbers 8 and 9) on the Malá Strana side of the bridge would be done. These are the only river pillars that were not repaired after the 1890 floods. The reconstruction was a gradual process that closed off parts of the bridge without closing the span entirely.
Performed from 2008 to 2010, the work included bolstering the pillars and building a new hydroisolation system protecting the bridge. It also encompassed repaving of the bridge deck and the replacement of many of the stones in the bridge walls, a matter which was controversial due to the heavy-handed approach adopted by the restoration team, which had no previous experience in the restoration of cultural heritage monuments. The result has been criticised by conservation professionals across Europe, as dozens of new replacement stones do not match the historical ones they sit next to, the amount of replaced stone is considered excessive, some stones have been inappropriately positioned, original stones have been chipped, and the joining materials employed are considered inappropriate for the structure. In 2010 UNESCO's World Heritage Committee adopted a decision stating that "the restoration of Charles Bridge was carried out without adequate conservation advice on materials and techniques".
Statues on the bridge
The avenue of 30 mostly Baroque statues and statuaries situated on the balustrade forms a unique connection of artistic styles with the underlying Gothic bridge. Most sculptures were erected between 1683 and 1714. They depict various saints and patron saints venerated at that time. The most prominent Bohemian sculptors of the time took part in decorating the bridge, such as Matthias Braun, Jan Brokoff, and his sons Michael Joseph and Ferdinand Maxmilian.
Among the most notable sculptures, one can find the statuaries of St. Luthgard, the Holy Crucifix and Calvary, and John of Nepomuk. Well known also is the statue of the knight Bruncvík, although it was erected some 200 years later and does not belong to the main avenue.
Beginning in 1965, all of the statues have been systematically replaced by replicas, and the originals have been exhibited in the Lapidarium of the National Museum.
Tribute
On 9 July 2017, Google celebrated the 660th anniversary of Charles Bridge with a Google Doodle.
| Technology | Bridges | null |
516133 | https://en.wikipedia.org/wiki/Equipartition%20theorem | Equipartition theorem | In classical statistical mechanics, the equipartition theorem relates the temperature of a system to its average energies. The equipartition theorem is also known as the law of equipartition, equipartition of energy, or simply equipartition. The original idea of equipartition was that, in thermal equilibrium, energy is shared equally among all of its various forms; for example, the average kinetic energy per degree of freedom in translational motion of a molecule should equal that in rotational motion.
The equipartition theorem makes quantitative predictions. Like the virial theorem, it gives the total average kinetic and potential energies for a system at a given temperature, from which the system's heat capacity can be computed. However, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of (3/2)k_B T in thermal equilibrium, where k_B is the Boltzmann constant and T is the (thermodynamic) temperature. More generally, equipartition can be applied to any classical system in thermal equilibrium, no matter how complicated. It can be used to derive the ideal gas law, and the Dulong–Petit law for the specific heat capacities of solids. The equipartition theorem can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered.
Although the equipartition theorem makes accurate predictions in certain conditions, it is inaccurate when quantum effects are significant, such as at low temperatures. When the thermal energy k_B T is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition. Such a degree of freedom is said to be "frozen out" when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition. Such decreases in heat capacity were among the first signs to physicists of the 19th century that classical physics was incorrect and that a new, more subtle, scientific model was required. Along with other evidence, equipartition's failure to model black-body radiation—also known as the ultraviolet catastrophe—led Max Planck to suggest that the energy of the light-emitting oscillators in an object was quantized, a revolutionary hypothesis that spurred the development of quantum mechanics and quantum field theory.
Basic concept and simple examples
The name "equipartition" means "equal division," as derived from the Latin equi from the antecedent, æquus ("equal or even"), and partition from the noun, partitio ("division, portion"). The original concept of equipartition was that the total kinetic energy of a system is shared equally among all of its independent parts, on the average, once the system has reached thermal equilibrium. Equipartition also makes quantitative predictions for these energies. For example, it predicts that every atom of an inert noble gas, in thermal equilibrium at temperature , has an average translational kinetic energy of , where is the Boltzmann constant. As a consequence, since kinetic energy is equal to (mass)(velocity)2, the heavier atoms of xenon have a lower average speed than do the lighter atoms of helium at the same temperature. Figure 2 shows the Maxwell–Boltzmann distribution for the speeds of the atoms in four noble gases.
In this example, the key point is that the kinetic energy is quadratic in the velocity. The equipartition theorem shows that in thermal equilibrium, any degree of freedom (such as a component of the position or velocity of a particle) which appears only quadratically in the energy has an average energy of (1/2)k_B T and therefore contributes (1/2)k_B to the system's heat capacity. This has many applications.
Translational energy and ideal gases
The (Newtonian) kinetic energy of a particle of mass m and velocity v is given by
H_kin = (1/2)m|v|² = (1/2)m(v_x² + v_y² + v_z²),
where v_x, v_y and v_z are the Cartesian components of the velocity v. Here, H is short for Hamiltonian, and is used henceforth as a symbol for energy because the Hamiltonian formalism plays a central role in the most general form of the equipartition theorem.
Since the kinetic energy is quadratic in the components of the velocity, by equipartition these three components each contribute (1/2)k_B T to the average kinetic energy in thermal equilibrium. Thus the average kinetic energy of the particle is (3/2)k_B T, as in the example of noble gases above.
More generally, in a monatomic ideal gas the total energy consists purely of (translational) kinetic energy: by assumption, the particles have no internal degrees of freedom and move independently of one another. Equipartition therefore predicts that the total energy of an ideal gas of N particles is (3/2)N k_B T.
It follows that the heat capacity of the gas is (3/2)N k_B and hence, in particular, the heat capacity of a mole of such gas particles is (3/2)N_A k_B = (3/2)R, where N_A is the Avogadro constant and R is the gas constant. Since R ≈ 2 cal/(mol·K), equipartition predicts that the molar heat capacity of an ideal gas is roughly 3 cal/(mol·K). This prediction is confirmed by experiment for monatomic gases.
The mean kinetic energy also allows the root mean square speed v_rms of the gas particles to be calculated:
v_rms = √(3 k_B T / m) = √(3 R T / M),
where M = N_A m is the mass of a mole of gas particles. This result is useful for many applications such as Graham's law of effusion, which provides a method for enriching uranium.
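A quick numerical check of this formula in Python (molar masses rounded; room temperature assumed):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # room temperature, K

def v_rms(molar_mass):
    """Root-mean-square speed, sqrt(3RT/M), for molar mass M in kg/mol."""
    return math.sqrt(3 * R * T / molar_mass)

# Heavier noble-gas atoms move more slowly at the same temperature.
print(round(v_rms(4.003e-3)))   # helium: ~1360 m/s
print(round(v_rms(0.1313)))     # xenon:  ~240 m/s
```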
Rotational energy and molecular tumbling in solution
A similar example is provided by a rotating molecule with principal moments of inertia I_1, I_2 and I_3. According to classical mechanics, the rotational energy of such a molecule is given by
H_rot = (1/2)(I_1 ω_1² + I_2 ω_2² + I_3 ω_3²),
where ω_1, ω_2 and ω_3 are the principal components of the angular velocity. By exactly the same reasoning as in the translational case, equipartition implies that in thermal equilibrium the average rotational energy of each particle is (3/2)k_B T. Similarly, the equipartition theorem allows the average (more precisely, the root mean square) angular speed of the molecules to be calculated.
The tumbling of rigid molecules—that is, the random rotations of molecules in solution—plays a key role in the relaxations observed by nuclear magnetic resonance, particularly protein NMR and residual dipolar couplings. Rotational diffusion can also be observed by other biophysical probes such as fluorescence anisotropy, flow birefringence and dielectric spectroscopy.
Potential energy and harmonic oscillators
Equipartition applies to potential energies as well as kinetic energies: important examples include harmonic oscillators such as a spring, which has a quadratic potential energy
H_pot = (1/2)a q²,
where the constant a describes the stiffness of the spring and q is the deviation from equilibrium. If such a one-dimensional system has mass m, then its kinetic energy is
H_kin = (1/2)m v² = p²/2m,
where v and p = mv denote the velocity and momentum of the oscillator. Combining these terms yields the total energy
H = H_kin + H_pot = p²/2m + (1/2)a q².
Equipartition therefore implies that in thermal equilibrium, the oscillator has average energy
⟨H⟩ = ⟨H_kin⟩ + ⟨H_pot⟩ = (1/2)k_B T + (1/2)k_B T = k_B T,
where the angular brackets ⟨…⟩ denote the average of the enclosed quantity.
This result is valid for any type of harmonic oscillator, such as a pendulum, a vibrating molecule or a passive electronic oscillator. Systems of such oscillators arise in many situations; by equipartition, each such oscillator receives an average total energy k_B T and hence contributes k_B to the system's heat capacity. This can be used to derive the formula for Johnson–Nyquist noise and the Dulong–Petit law of solid heat capacities. The latter application was particularly significant in the history of equipartition.
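A minimal Monte Carlo check of this result, in units where k_B T = 1 and with arbitrary oscillator parameters: in the canonical ensemble the Boltzmann factor exp(−H/k_B T) makes q and p independent Gaussians, so both halves of the average energy can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(1)
kT, a, m = 1.0, 2.5, 0.7      # temperature and arbitrary oscillator parameters

# The Boltzmann weight exp(-H/kT) with H = p^2/(2m) + a q^2/2 factorizes,
# so q and p are independent Gaussians: q ~ N(0, kT/a), p ~ N(0, m*kT).
q = rng.normal(0.0, np.sqrt(kT / a), 1_000_000)
p = rng.normal(0.0, np.sqrt(m * kT), 1_000_000)

print(np.mean(0.5 * a * q**2))    # ~ kT/2: average potential energy
print(np.mean(p**2 / (2 * m)))    # ~ kT/2: average kinetic energy
```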
Specific heat capacity of solids
An important application of the equipartition theorem is to the specific heat capacity of a crystalline solid. Each atom in such a solid can oscillate in three independent directions, so the solid can be viewed as a system of 3N independent simple harmonic oscillators, where N denotes the number of atoms in the lattice. Since each harmonic oscillator has average energy k_B T, the average total energy of the solid is 3N k_B T, and its heat capacity is 3N k_B.
By taking N to be the Avogadro constant N_A, and using the relation R = N_A k_B between the gas constant R and the Boltzmann constant k_B, this provides an explanation for the Dulong–Petit law of specific heat capacities of solids, which stated that the specific heat capacity (per unit mass) of a solid element is inversely proportional to its atomic weight. A modern version is that the molar heat capacity of a solid is 3R ≈ 6 cal/(mol·K).
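The arithmetic behind the modern version, as a short check (using the thermochemical calorie):

```python
R = 8.314       # gas constant, J/(mol*K)
CAL = 4.184     # joules per thermochemical calorie

molar_c = 3 * R           # Dulong-Petit molar heat capacity
print(molar_c)            # ~24.9 J/(mol*K)
print(molar_c / CAL)      # ~5.96, i.e. the "3R ~ 6 cal/(mol*K)" above
```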
However, this law is inaccurate at lower temperatures, due to quantum effects; it is also inconsistent with the experimentally derived third law of thermodynamics, according to which the molar heat capacity of any substance must go to zero as the temperature goes to absolute zero. A more accurate theory, incorporating quantum effects, was developed by Albert Einstein (1907) and Peter Debye (1911).
Many other physical systems can be modeled as sets of coupled oscillators. The motions of such oscillators can be decomposed into normal modes, like the vibrational modes of a piano string or the resonances of an organ pipe. On the other hand, equipartition often breaks down for such systems, because there is no exchange of energy between the normal modes. In an extreme situation, the modes are independent and so their energies are independently conserved. This shows that some sort of mixing of energies, formally called ergodicity, is important for the law of equipartition to hold.
Sedimentation of particles
Potential energies are not always quadratic in the position. However, the equipartition theorem also shows that if a degree of freedom x contributes only a term of the form a x^s (for a fixed real number s ≠ 0) to the energy, then in thermal equilibrium the average energy of that part is k_B T / s.
There is a simple application of this extension to the sedimentation of particles under gravity. For example, the haze sometimes seen in beer can be caused by clumps of proteins that scatter light. Over time, these clumps settle downwards under the influence of gravity, causing more haze near the bottom of a bottle than near its top. However, in a process working in the opposite direction, the particles also diffuse back up towards the top of the bottle. Once equilibrium has been reached, the equipartition theorem may be used to determine the average position of a particular clump of buoyant mass m_b. For an infinitely tall bottle of beer, the gravitational potential energy is given by
H_grav = m_b g z,
where z is the height of the protein clump in the bottle and g is the acceleration due to gravity. Since s = 1 here, the average potential energy of a protein clump equals k_B T. Hence, a protein clump with a buoyant mass of 10 MDa (roughly the size of a virus) would produce a haze with an average height of about 2 cm at equilibrium. The process of such sedimentation to equilibrium is described by the Mason–Weaver equation.
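A rough numerical check of the quoted figure, treating the full 10 MDa mass as buoyant (a simplification, so the result lands near, not exactly on, the ~2 cm above):

```python
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0              # temperature, K
g = 9.81               # gravitational acceleration, m/s^2
DALTON = 1.66054e-27   # kg per dalton

m_b = 10e6 * DALTON    # buoyant mass of a ~10 MDa clump (simplified)

# H = m_b * g * z is linear in z (s = 1), so equipartition gives <H> = kB*T,
# i.e. an average height of kB*T / (m_b * g).
print(kB * T / (m_b * g))    # ~0.025 m, i.e. a few centimetres
```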
History
The equipartition of kinetic energy was proposed initially in 1843, and more correctly in 1845, by John James Waterston. In 1859, James Clerk Maxwell argued that the kinetic heat energy of a gas is equally divided between linear and rotational energy. In 1876, Ludwig Boltzmann expanded on this principle by showing that the average energy was divided equally among all the independent components of motion in a system. Boltzmann applied the equipartition theorem to provide a theoretical explanation of the Dulong–Petit law for the specific heat capacities of solids.
The history of the equipartition theorem is intertwined with that of specific heat capacity, both of which were studied in the 19th century. In 1819, the French physicists Pierre Louis Dulong and Alexis Thérèse Petit discovered that the specific heat capacities of solid elements at room temperature were inversely proportional to the atomic weight of the element. Their law was used for many years as a technique for measuring atomic weights. However, subsequent studies by James Dewar and Heinrich Friedrich Weber showed that this Dulong–Petit law holds only at high temperatures; at lower temperatures, or for exceptionally hard solids such as diamond, the specific heat capacity was lower.
Experimental observations of the specific heat capacities of gases also raised concerns about the validity of the equipartition theorem. The theorem predicts that the molar heat capacity of simple monatomic gases should be roughly 3 cal/(mol·K), whereas that of diatomic gases should be roughly 7 cal/(mol·K). Experiments confirmed the former prediction, but found that molar heat capacities of diatomic gases were typically about 5 cal/(mol·K), and fell to about 3 cal/(mol·K) at very low temperatures. Maxwell noted in 1875 that the disagreement between experiment and the equipartition theorem was much worse than even these numbers suggest; since atoms have internal parts, heat energy should go into the motion of these internal parts, making the predicted specific heats of monatomic and diatomic gases much higher than 3 cal/(mol·K) and 7 cal/(mol·K), respectively.
A third discrepancy concerned the specific heat of metals. According to the classical Drude model, metallic electrons act as a nearly ideal gas, and so they should contribute an additional $\tfrac{3}{2} N_\text{e} k_\text{B}$ to the heat capacity by the equipartition theorem, where $N_\text{e}$ is the number of electrons. Experimentally, however, electrons contribute little to the heat capacity: the molar heat capacities of many conductors and insulators are nearly the same.
Several explanations of equipartition's failure to account for molar heat capacities were proposed. Boltzmann defended the derivation of his equipartition theorem as correct, but suggested that gases might not be in thermal equilibrium because of their interactions with the aether. Lord Kelvin suggested that the derivation of the equipartition theorem must be incorrect, since it disagreed with experiment, but was unable to show how. In 1900 Lord Rayleigh instead put forward a more radical view that the equipartition theorem and the experimental assumption of thermal equilibrium were both correct; to reconcile them, he noted the need for a new principle that would provide an "escape from the destructive simplicity" of the equipartition theorem. Albert Einstein provided that escape, by showing in 1906 that these anomalies in the specific heat were due to quantum effects, specifically the quantization of energy in the elastic modes of the solid. Einstein used the failure of equipartition to argue for the need of a new quantum theory of matter. Nernst's 1910 measurements of specific heats at low temperatures supported Einstein's theory, and led to the widespread acceptance of quantum theory among physicists.
General formulation of the equipartition theorem
The most general form of the equipartition theorem states that under suitable assumptions (discussed below), for a physical system with Hamiltonian energy function $H$ and degrees of freedom $x_n$, the following equipartition formula holds in thermal equilibrium for all indices $m$ and $n$:
$$\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \delta_{mn}\, k_\text{B} T.$$
Here $\delta_{mn}$ is the Kronecker delta, which is equal to one if $m = n$ and is zero otherwise. The averaging brackets $\langle \cdots \rangle$ are assumed to be an ensemble average over phase space or, under an assumption of ergodicity, a time average of a single system.
The general equipartition theorem holds in both the microcanonical ensemble, when the total energy of the system is constant, and also in the canonical ensemble, when the system is coupled to a heat bath with which it can exchange energy. Derivations of the general formula are given later in the article.
The general formula is equivalent to the following two:
$$\left\langle x_n \frac{\partial H}{\partial x_n} \right\rangle = k_\text{B} T \quad \text{for all } n; \qquad \left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = 0 \quad \text{for all } m \neq n.$$
If a degree of freedom $x_n$ appears only as a quadratic term $a_n x_n^2$ in the Hamiltonian $H$, then the first of these formulae implies that
$$k_\text{B} T = \left\langle x_n \frac{\partial H}{\partial x_n} \right\rangle = 2 \left\langle a_n x_n^2 \right\rangle,$$
which is twice the contribution that this degree of freedom makes to the average energy $\langle H \rangle$. Thus the equipartition theorem for systems with quadratic energies follows easily from the general formula. A similar argument, with 2 replaced by $s$, applies to energies of the form $a_n x_n^s$.
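To make the quadratic case concrete, the following Monte Carlo sketch (all parameter values are arbitrary assumptions) samples a single degree of freedom with energy $H = a x^2$ from its Boltzmann distribution and checks that $\langle x\, \partial H/\partial x \rangle \approx k_\text{B}T$ while $\langle a x^2 \rangle \approx \tfrac{1}{2} k_\text{B}T$:

```python
import numpy as np

rng = np.random.default_rng(0)
kB_T = 1.0   # work in units where kB*T = 1 (assumption)
a = 2.5      # arbitrary stiffness of the quadratic term

# The Boltzmann weight exp(-a*x^2 / kB_T) is a Gaussian with variance kB_T/(2a)
x = rng.normal(0.0, np.sqrt(kB_T / (2 * a)), size=1_000_000)

print(np.mean(x * 2 * a * x))  # <x dH/dx>, with dH/dx = 2*a*x  ->  ~1.0 = kB*T
print(np.mean(a * x**2))       # <a x^2>                        ->  ~0.5 = kB*T/2
```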
The degrees of freedom $x_n$ are coordinates on the phase space of the system and are therefore commonly subdivided into generalized position coordinates $q_k$ and generalized momentum coordinates $p_k$, where $p_k$ is the conjugate momentum to $q_k$. In this situation, formula 1 means that for all $k$,
$$\left\langle p_k \frac{\partial H}{\partial p_k} \right\rangle = \left\langle q_k \frac{\partial H}{\partial q_k} \right\rangle = k_\text{B} T.$$
Using the equations of Hamiltonian mechanics, these formulae may also be written
$$\left\langle p_k \frac{dq_k}{dt} \right\rangle = -\left\langle q_k \frac{dp_k}{dt} \right\rangle = k_\text{B} T.$$
Similarly, one can show using formula 2 that
$$\left\langle q_j \frac{\partial H}{\partial p_k} \right\rangle = \left\langle p_j \frac{\partial H}{\partial q_k} \right\rangle = 0 \quad \text{for all } j, k$$
and
$$\left\langle q_j \frac{\partial H}{\partial q_k} \right\rangle = \left\langle p_j \frac{\partial H}{\partial p_k} \right\rangle = 0 \quad \text{for } j \neq k.$$
Relation to the virial theorem
The general equipartition theorem is an extension of the virial theorem (proposed in 1870), which states that
$$\left\langle \sum_k q_k \frac{\partial H}{\partial q_k} \right\rangle = \left\langle \sum_k p_k \frac{\partial H}{\partial p_k} \right\rangle = \left\langle \sum_k p_k \frac{dq_k}{dt} \right\rangle = -\left\langle \sum_k q_k \frac{dp_k}{dt} \right\rangle,$$
where $t$ denotes time. Two key differences are that the virial theorem relates summed rather than individual averages to each other, and it does not connect them to the temperature $T$. Another difference is that traditional derivations of the virial theorem use averages over time, whereas those of the equipartition theorem use averages over phase space.
Applications
Ideal gas law
Ideal gases provide an important application of the equipartition theorem. As well as providing the formula
$$\left\langle H^{\text{kin}} \right\rangle = \frac{1}{2m} \left\langle p_x^2 + p_y^2 + p_z^2 \right\rangle = \frac{3}{2} k_\text{B} T$$
for the average kinetic energy per particle, the equipartition theorem can be used to derive the ideal gas law from classical mechanics. If $\mathbf{q} = (q_x, q_y, q_z)$ and $\mathbf{p} = (p_x, p_y, p_z)$ denote the position vector and momentum of a particle in the gas, and $\mathbf{F}$ is the net force on that particle, then
$$\left\langle \mathbf{q} \cdot \mathbf{F} \right\rangle = \sum_{i \in \{x,y,z\}} \left\langle q_i \frac{dp_i}{dt} \right\rangle = -\sum_{i \in \{x,y,z\}} \left\langle q_i \frac{\partial H}{\partial q_i} \right\rangle = -3 k_\text{B} T,$$
where the first equality is Newton's second law, and the second equality uses Hamilton's equations and the equipartition formula. Summing over a system of $N$ particles yields
$$3 N k_\text{B} T = -\left\langle \sum_{k=1}^{N} \mathbf{q}_k \cdot \mathbf{F}_k \right\rangle.$$
By Newton's third law and the ideal gas assumption, the net force on the system is the force applied by the walls of their container, and this force is given by the pressure $P$ of the gas. Hence
$$-\left\langle \sum_{k=1}^{N} \mathbf{q}_k \cdot \mathbf{F}_k \right\rangle = P \oint_{\text{surface}} \mathbf{q} \cdot d\mathbf{S},$$
where $d\mathbf{S}$ is the infinitesimal area element along the walls of the container. Since the divergence of the position vector $\mathbf{q}$ is
$$\boldsymbol{\nabla} \cdot \mathbf{q} = \frac{\partial q_x}{\partial q_x} + \frac{\partial q_y}{\partial q_y} + \frac{\partial q_z}{\partial q_z} = 3,$$
the divergence theorem implies that
$$P \oint_{\text{surface}} \mathbf{q} \cdot d\mathbf{S} = P \int_{\text{volume}} \left( \boldsymbol{\nabla} \cdot \mathbf{q} \right) dV = 3 P V,$$
where $dV$ is an infinitesimal volume within the container and $V$ is the total volume of the container.
Putting these equalities together yields
$$3 N k_\text{B} T = -\left\langle \sum_{k=1}^{N} \mathbf{q}_k \cdot \mathbf{F}_k \right\rangle = 3 P V,$$
which immediately implies the ideal gas law for $N$ particles:
$$P V = N k_\text{B} T = n R T,$$
where $n = N/N_\text{A}$ is the number of moles of gas and $R = N_\text{A} k_\text{B}$ is the gas constant. Although equipartition provides a simple derivation of the ideal-gas law and the internal energy, the same results can be obtained by an alternative method using the partition function.
Diatomic gases
A diatomic gas can be modelled as two masses, $m_1$ and $m_2$, joined by a spring of stiffness $a$, which is called the rigid rotor-harmonic oscillator approximation. The classical energy of this system is
$$H = \frac{\left| \mathbf{p}_1 \right|^2}{2 m_1} + \frac{\left| \mathbf{p}_2 \right|^2}{2 m_2} + \frac{1}{2} a q^2,$$
where $\mathbf{p}_1$ and $\mathbf{p}_2$ are the momenta of the two atoms, and $q$ is the deviation of the inter-atomic separation from its equilibrium value. Every degree of freedom in the energy is quadratic and, thus, should contribute $\tfrac{1}{2} k_\text{B} T$ to the total average energy, and $\tfrac{1}{2} k_\text{B}$ to the heat capacity. Therefore, the heat capacity of a gas of $N$ diatomic molecules is predicted to be $\tfrac{7}{2} N k_\text{B}$: the momenta $\mathbf{p}_1$ and $\mathbf{p}_2$ contribute three degrees of freedom each, and the extension $q$ contributes the seventh. It follows that the heat capacity of a mole of diatomic molecules with no other degrees of freedom should be $\tfrac{7}{2} N_\text{A} k_\text{B} = \tfrac{7}{2} R$ and, thus, the predicted molar heat capacity should be roughly 7 cal/(mol·K). However, the experimental values for molar heat capacities of diatomic gases are typically about 5 cal/(mol·K) and fall to 3 cal/(mol·K) at very low temperatures. This disagreement between the equipartition prediction and the experimental value of the molar heat capacity cannot be explained by using a more complex model of the molecule, since adding more degrees of freedom can only increase the predicted specific heat, not decrease it. This discrepancy was a key piece of evidence showing the need for a quantum theory of matter.
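The molar prediction quoted above is simple arithmetic; a sketch converting $\tfrac{7}{2}R$ and $\tfrac{3}{2}R$ into the calorie-based units used in this article:

```python
R = 8.314462    # gas constant, J/(mol*K)
cal = 4.184     # J per thermochemical calorie

print(f"{3.5 * R / cal:.2f} cal/(mol K)")  # (7/2) R ~ 6.95, the 'roughly 7'
print(f"{1.5 * R / cal:.2f} cal/(mol K)")  # (3/2) R ~ 2.98, the 'roughly 3'
```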
Extreme relativistic ideal gases
Equipartition was used above to derive the classical ideal gas law from Newtonian mechanics. However, relativistic effects become dominant in some systems, such as white dwarfs and neutron stars, and the ideal gas equations must be modified. The equipartition theorem provides a convenient way to derive the corresponding laws for an extreme relativistic ideal gas. In such cases, the kinetic energy of a single particle is given by the formula
$$H^{\text{kin}} \approx c p = c \sqrt{p_x^2 + p_y^2 + p_z^2}.$$
Taking the derivative of $H^{\text{kin}}$ with respect to the momentum component $p_x$ gives the formula
$$p_x \frac{\partial H^{\text{kin}}}{\partial p_x} = c \frac{p_x^2}{\sqrt{p_x^2 + p_y^2 + p_z^2}},$$
and similarly for the $p_y$ and $p_z$ components. Adding the three components together gives
$$\left\langle H^{\text{kin}} \right\rangle = \left\langle p_x \frac{\partial H^{\text{kin}}}{\partial p_x} \right\rangle + \left\langle p_y \frac{\partial H^{\text{kin}}}{\partial p_y} \right\rangle + \left\langle p_z \frac{\partial H^{\text{kin}}}{\partial p_z} \right\rangle = 3 k_\text{B} T,$$
where the last equality follows from the equipartition formula. Thus, the average total energy of an extreme relativistic gas is twice that of the non-relativistic case: for $N$ particles, it is $3 N k_\text{B} T$.
Non-ideal gases
In an ideal gas the particles are assumed to interact only through collisions. The equipartition theorem may also be used to derive the energy and pressure of "non-ideal gases" in which the particles also interact with one another through conservative forces whose potential $U(r)$ depends only on the distance $r$ between the particles. This situation can be described by first restricting attention to a single gas particle, and approximating the rest of the gas by a spherically symmetric distribution. It is then customary to introduce a radial distribution function $g(r)$ such that the probability density of finding another particle at a distance $r$ from the given particle is equal to $4\pi r^2 \rho\, g(r)$, where $\rho = N/V$ is the mean density of the gas. It follows that the mean potential energy associated to the interaction of the given particle with the rest of the gas is
$$\left\langle h^{\text{pot}} \right\rangle = \int_0^\infty 4\pi r^2 \rho\, U(r)\, g(r)\, dr.$$
The total mean potential energy of the gas is therefore $\left\langle H^{\text{pot}} \right\rangle = \tfrac{1}{2} N \left\langle h^{\text{pot}} \right\rangle$, where $N$ is the number of particles in the gas, and the factor $\tfrac{1}{2}$ is needed because summation over all the particles counts each interaction twice.
Adding kinetic and potential energies, then applying equipartition, yields the energy equation
$$H = \left\langle H^{\text{kin}} \right\rangle + \left\langle H^{\text{pot}} \right\rangle = \frac{3}{2} N k_\text{B} T + 2\pi N \rho \int_0^\infty r^2\, U(r)\, g(r)\, dr.$$
A similar argument can be used to derive the pressure equation
$$3 N k_\text{B} T = 3 P V + 2\pi N \rho \int_0^\infty r^3\, U'(r)\, g(r)\, dr.$$
Anharmonic oscillators
An anharmonic oscillator (in contrast to a simple harmonic oscillator) is one in which the potential energy is not quadratic in the extension $q$ (the generalized position which measures the deviation of the system from equilibrium). Such oscillators provide a complementary point of view on the equipartition theorem. Simple examples are provided by potential energy functions of the form
$$H^{\text{pot}} = C q^s,$$
where $C$ and $s$ are arbitrary real constants. In these cases, the law of equipartition predicts that
$$k_\text{B} T = \left\langle q \frac{\partial H^{\text{pot}}}{\partial q} \right\rangle = \left\langle s C q^s \right\rangle = s \left\langle H^{\text{pot}} \right\rangle.$$
Thus, the average potential energy equals $k_\text{B}T/s$, not $k_\text{B}T/2$ as for the quadratic harmonic oscillator (where $s = 2$).
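The prediction $\langle H^{\text{pot}} \rangle = k_\text{B}T/s$ can be verified by direct numerical integration over the Boltzmann distribution; a sketch for a quartic potential, with arbitrarily chosen constants:

```python
import numpy as np

kB_T = 1.0                    # units where kB*T = 1 (assumption)
C, s = 0.7, 4                 # arbitrary quartic potential U(q) = C * q**4

q = np.linspace(-10.0, 10.0, 200_001)
w = np.exp(-C * q**s / kB_T)  # Boltzmann weight (unnormalized)
U_avg = np.sum(C * q**s * w) / np.sum(w)
print(U_avg)                  # ~0.25 = kB_T / s
```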
More generally, a typical energy function of a one-dimensional system has a Taylor expansion in the extension $q$:
$$H^{\text{pot}} = \sum_{n=2}^{\infty} C_n q^n$$
for non-negative integers $n$. There is no $n = 1$ term, because at the equilibrium point, there is no net force and so the first derivative of the energy is zero. The $n = 0$ term need not be included, since the energy at the equilibrium position may be set to zero by convention. In this case, the law of equipartition predicts that
$$k_\text{B} T = \left\langle q \frac{\partial H^{\text{pot}}}{\partial q} \right\rangle = \sum_{n=2}^{\infty} n C_n \left\langle q^n \right\rangle.$$
In contrast to the other examples cited here, the equipartition formula
$$\left\langle H^{\text{pot}} \right\rangle = \frac{1}{2} k_\text{B} T - \sum_{n=3}^{\infty} \frac{n - 2}{2} C_n \left\langle q^n \right\rangle$$
does not allow the average potential energy to be written in terms of known constants.
Brownian motion
The equipartition theorem can be used to derive the Brownian motion of a particle from the Langevin equation. According to that equation, the motion of a particle of mass $m$ with velocity $\mathbf{v}$ is governed by Newton's second law
$$\frac{d\mathbf{v}}{dt} = \frac{1}{m} \mathbf{F} = -\frac{\mathbf{v}}{\tau} + \frac{1}{m} \mathbf{F}_{\text{rnd}},$$
where $\mathbf{F}_{\text{rnd}}$ is a random force representing the random collisions of the particle and the surrounding molecules, and where the time constant $\tau$ reflects the drag force that opposes the particle's motion through the solution. The drag force is often written $\mathbf{F}_{\text{drag}} = -\gamma \mathbf{v}$; therefore, the time constant $\tau$ equals $m/\gamma$.
The dot product of this equation with the position vector $\mathbf{r}$, after averaging, yields the equation
$$\left\langle \mathbf{r} \cdot \frac{d\mathbf{v}}{dt} \right\rangle + \frac{1}{\tau} \left\langle \mathbf{r} \cdot \mathbf{v} \right\rangle = 0$$
for Brownian motion (since the random force $\mathbf{F}_{\text{rnd}}$ is uncorrelated with the position $\mathbf{r}$). Using the mathematical identities
$$\frac{d}{dt} \left( \mathbf{r} \cdot \mathbf{r} \right) = \frac{d}{dt} \left( r^2 \right) = 2 \left( \mathbf{r} \cdot \mathbf{v} \right)$$
and
$$\frac{d}{dt} \left( \mathbf{r} \cdot \mathbf{v} \right) = v^2 + \mathbf{r} \cdot \frac{d\mathbf{v}}{dt},$$
the basic equation for Brownian motion can be transformed into
$$\frac{d^2}{dt^2} \left\langle r^2 \right\rangle + \frac{1}{\tau} \frac{d}{dt} \left\langle r^2 \right\rangle = 2 \left\langle v^2 \right\rangle = \frac{6 k_\text{B} T}{m},$$
where the last equality follows from the equipartition theorem for translational kinetic energy:
$$\left\langle \frac{1}{2} m v^2 \right\rangle = \frac{3}{2} k_\text{B} T.$$
The above differential equation for $\langle r^2 \rangle$ (with suitable initial conditions) may be solved exactly:
$$\left\langle r^2 \right\rangle = \frac{6 k_\text{B} T \tau^2}{m} \left( e^{-t/\tau} + \frac{t}{\tau} - 1 \right).$$
On small time scales, with $t \ll \tau$, the particle acts as a freely moving particle: by the Taylor series of the exponential function, the squared distance grows approximately quadratically:
$$\left\langle r^2 \right\rangle \approx \frac{3 k_\text{B} T}{m} t^2 = \left\langle v^2 \right\rangle t^2.$$
However, on long time scales, with $t \gg \tau$, the exponential and constant terms are negligible, and the squared distance grows only linearly:
$$\left\langle r^2 \right\rangle \approx \frac{6 k_\text{B} T \tau}{m} t.$$
This describes the diffusion of the particle over time. An analogous equation for the rotational diffusion of a rigid molecule can be derived in a similar way.
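A short sketch evaluating the exact solution in its two regimes; the particle mass, temperature, and relaxation time below are arbitrary assumptions:

```python
import numpy as np

kB = 1.380649e-23   # J/K
T = 300.0           # assumed temperature, K
m = 1e-15           # assumed particle mass, kg
tau = 1e-7          # assumed velocity relaxation time, s

def msd(t):
    """Exact mean-square displacement from the averaged Langevin equation."""
    return (6 * kB * T * tau**2 / m) * (np.exp(-t / tau) + t / tau - 1)

for t in (1e-9, 1e-3):                      # t << tau, then t >> tau
    ballistic = (3 * kB * T / m) * t**2     # short-time limit <v^2> t^2
    diffusive = (6 * kB * T * tau / m) * t  # long-time limit 6 D t
    print(f"{t:.0e} s: exact {msd(t):.3e}, "
          f"ballistic {ballistic:.3e}, diffusive {diffusive:.3e}")
```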
Stellar physics
The equipartition theorem and the related virial theorem have long been used as a tool in astrophysics. As examples, the virial theorem may be used to estimate stellar temperatures or the Chandrasekhar limit on the mass of white dwarf stars.
The average temperature of a star can be estimated from the equipartition theorem. Since most stars are spherically symmetric, the total gravitational potential energy can be estimated by integration
$$H^{\text{grav}} = -\int_0^R \frac{4 \pi r^2 G}{r} M(r)\, \rho(r)\, dr,$$
where $M(r)$ is the mass within a radius $r$ and $\rho(r)$ is the stellar density at radius $r$; $G$ represents the gravitational constant and $R$ the total radius of the star. Assuming a constant density throughout the star, this integration yields the formula
$$H^{\text{grav}} = -\frac{3 G M^2}{5 R},$$
where $M$ is the star's total mass. Hence, the average potential energy of a single particle is
$$\left\langle H^{\text{grav}} \right\rangle = \frac{H^{\text{grav}}}{N} = -\frac{3 G M^2}{5 R N},$$
where $N$ is the number of particles in the star. Since most stars are composed mainly of ionized hydrogen, $N$ equals roughly $M/m_\text{p}$, where $m_\text{p}$ is the mass of one proton. Application of the equipartition theorem (extended to an energy that varies as the inverse of the distance, the $s = -1$ case) gives an estimate of the star's temperature
$$\left\langle H^{\text{grav}} \right\rangle = -k_\text{B} T, \qquad \text{i.e.} \qquad k_\text{B} T = \frac{3 G M m_\text{p}}{5 R}.$$
Substitution of the mass and radius of the Sun yields an estimated solar temperature of T = 14 million kelvins, very close to its core temperature of 15 million kelvins. However, the Sun is much more complex than assumed by this model—both its temperature and density vary strongly with radius—and such excellent agreement (≈7% relative error) is partly fortuitous.
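Repeating that substitution numerically, under the same uniform-density assumption (a minimal sketch with standard values for the constants):

```python
G = 6.674e-11      # gravitational constant, m^3/(kg s^2)
kB = 1.380649e-23  # Boltzmann constant, J/K
m_p = 1.6726e-27   # proton mass, kg
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# From <H_grav> = -kB*T per particle: kB*T = 3*G*M*m_p / (5*R)
T = 3 * G * M_sun * m_p / (5 * kB * R_sun)
print(f"T ~ {T:.2e} K")  # ~1.4e7 K, the 14 million kelvins quoted above
```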
Star formation
The same formulae may be applied to determining the conditions for star formation in giant molecular clouds. A local fluctuation in the density of such a cloud can lead to a runaway condition in which the cloud collapses inwards under its own gravity. Such a collapse occurs when the equipartition theorem—or, equivalently, the virial theorem—is no longer valid, i.e., when the gravitational potential energy exceeds twice the kinetic energy
$$\frac{3 G M^2}{5 R} > 3 N k_\text{B} T.$$
Assuming a constant density $\rho$ for the cloud
$$M = \frac{4}{3} \pi R^3 \rho$$
yields a minimum mass for stellar contraction, the Jeans mass $M_\text{J}$
$$M_\text{J}^2 = \left( \frac{5 k_\text{B} T}{G m_\text{p}} \right)^3 \left( \frac{3}{4 \pi \rho} \right).$$
Substituting temperature and density values typically observed in such clouds gives an estimated minimum mass of 17 solar masses, which is consistent with observed star formation. This effect is also known as the Jeans instability, after the British physicist James Hopwood Jeans who published it in 1902.
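A sketch evaluating the Jeans mass formula; the cloud temperature and density below are illustrative assumptions of the order usually quoted for giant molecular clouds, not necessarily the exact values behind the original estimate:

```python
import math

G = 6.674e-11      # m^3/(kg s^2)
kB = 1.380649e-23  # J/K
m_p = 1.6726e-27   # kg

T = 150.0          # assumed cloud temperature, K
rho = 2e-13        # assumed density, kg/m^3

M_J = (5 * kB * T / (G * m_p))**1.5 * math.sqrt(3 / (4 * math.pi * rho))
print(M_J / 1.989e30)  # ~16 solar masses, of the order of the 17 quoted
```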
Derivations
Kinetic energies and the Maxwell–Boltzmann distribution
The original formulation of the equipartition theorem states that, in any physical system in thermal equilibrium, every particle has exactly the same average translational kinetic energy, $\tfrac{3}{2} k_\text{B} T$. However, this is true only for an ideal gas, and the same result can be derived from the Maxwell–Boltzmann distribution. First, consider only the Maxwell–Boltzmann distribution for the $z$-component of the velocity
$$f(v_z) = \sqrt{\frac{m}{2 \pi k_\text{B} T}}\, e^{-m v_z^2 / (2 k_\text{B} T)}.$$
With this equation, we can calculate the mean square velocity of the $z$-component
$$\left\langle v_z^2 \right\rangle = \int_{-\infty}^{\infty} v_z^2\, f(v_z)\, dv_z = \frac{k_\text{B} T}{m}.$$
Since different components of velocity are independent of each other, the average translational kinetic energy is given by
$$\left\langle E_\text{k} \right\rangle = \frac{3}{2} m \left\langle v_z^2 \right\rangle = \frac{3}{2} k_\text{B} T,$$
as stated by the equipartition theorem. Note that the Maxwell–Boltzmann distribution should not be confused with the Boltzmann distribution; the former can be derived from the latter by assuming that the energy of a particle is equal to its translational kinetic energy. The same result can also be obtained by averaging the particle energy using the probability of finding the particle in certain quantum energy states.
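The same result can be checked by direct sampling, since each Cartesian velocity component is Gaussian with variance $k_\text{B}T/m$; a minimal sketch with an assumed particle mass and temperature:

```python
import numpy as np

rng = np.random.default_rng(1)
kB, T, m = 1.380649e-23, 300.0, 6.6e-26  # assumed mass ~ that of an argon atom

# Draw each velocity component from the Maxwell-Boltzmann (Gaussian) marginal
v = rng.normal(0.0, np.sqrt(kB * T / m), size=(1_000_000, 3))
E_kin = 0.5 * m * np.sum(v**2, axis=1)

print(E_kin.mean() / (kB * T))  # ~1.5, i.e. <E_kin> = (3/2) kB T
```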
Quadratic energies and the partition function
More generally, the equipartition theorem states that any degree of freedom $x$ which appears in the total energy only as a simple quadratic term $A x^2$, where $A$ is a constant, has an average energy of $\tfrac{1}{2} k_\text{B} T$ in thermal equilibrium. In this case the equipartition theorem may be derived from the partition function $Z(\beta)$, where $\beta = 1/(k_\text{B} T)$ is the canonical inverse temperature. Integration over the variable $x$ yields a factor
$$Z_x = \int_{-\infty}^{\infty} e^{-\beta A x^2}\, dx = \sqrt{\frac{\pi}{\beta A}}$$
in the formula for $Z$. The mean energy associated with this factor is given by
$$\left\langle H_x \right\rangle = -\frac{\partial \ln Z_x}{\partial \beta} = \frac{1}{2 \beta} = \frac{k_\text{B} T}{2},$$
as stated by the equipartition theorem.
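That one-line differentiation can also be checked symbolically; a sketch using SymPy, with generic symbols as in the text:

```python
import sympy as sp

beta, A = sp.symbols("beta A", positive=True)
Zx = sp.sqrt(sp.pi / (beta * A))    # the Gaussian integral over x
H_avg = -sp.diff(sp.log(Zx), beta)  # mean energy -d(ln Z)/d(beta)
print(sp.simplify(H_avg))           # 1/(2*beta), i.e. kB*T/2
```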
General proofs
General derivations of the equipartition theorem can be found in many statistical mechanics textbooks, both for the microcanonical ensemble and for the canonical ensemble.
They involve taking averages over the phase space of the system, which is a symplectic manifold.
To explain these derivations, the following notation is introduced. First, the phase space is described in terms of generalized position coordinates $q_1, \ldots, q_M$ together with their conjugate momenta $p_1, \ldots, p_M$. The quantities $q_k$ completely describe the configuration of the system, while the quantities $(q_k, p_k)$ together completely describe its state.
Secondly, the infinitesimal volume
$$d\Gamma = \prod_{i=1}^{M} dq_i\, dp_i$$
of the phase space is introduced and used to define the volume $\Sigma(E, \Delta E)$ of the portion of phase space where the energy $H$ of the system lies between two limits, $E$ and $E + \Delta E$:
$$\Sigma(E, \Delta E) = \int_{E < H < E + \Delta E} d\Gamma.$$
In this expression, $\Delta E$ is assumed to be very small, $\Delta E \ll E$. Similarly, $\Omega(E)$ is defined to be the total volume of phase space where the energy is less than $E$:
$$\Omega(E) = \int_{H < E} d\Gamma.$$
Since $\Delta E$ is very small, the following integrations are equivalent
$$\int_{E < H < E + \Delta E} \ldots\, d\Gamma = \Delta E\, \frac{\partial}{\partial E} \int_{H < E} \ldots\, d\Gamma,$$
where the ellipses represent the integrand. From this, it follows that $\Sigma$ is proportional to $\Delta E$
$$\Sigma(E, \Delta E) = \Delta E\, \rho(E) = \Delta E\, \frac{\partial \Omega}{\partial E},$$
where $\rho(E)$ is the density of states. By the usual definitions of statistical mechanics, the entropy $S$ equals $k_\text{B} \ln \Omega(E)$, and the temperature $T$ is defined by
$$\frac{1}{T} = \frac{\partial S}{\partial E} = k_\text{B}\, \frac{\partial \ln \Omega}{\partial E} = k_\text{B}\, \frac{1}{\Omega} \frac{\partial \Omega}{\partial E}.$$
The canonical ensemble
In the canonical ensemble, the system is in thermal equilibrium with an infinite heat bath at temperature $T$ (in kelvins). The probability of each state in phase space is given by its Boltzmann factor times a normalization factor $\mathcal{N}$, which is chosen so that the probabilities sum to one
$$\mathcal{N} \int e^{-\beta H(p, q)}\, d\Gamma = 1,$$
where $\beta = 1/(k_\text{B} T)$. Using integration by parts for a phase-space variable $x_k$, the above can be written as
$$\mathcal{N} \int d\Gamma_k \left( \left[ x_k\, e^{-\beta H} \right]_{x_k = a}^{x_k = b} + \beta \int x_k \frac{\partial H}{\partial x_k}\, e^{-\beta H}\, dx_k \right) = 1,$$
where $d\Gamma_k = d\Gamma / dx_k$, i.e., the first integration is not carried out over $x_k$. Performing the first integral between two limits $a$ and $b$ and simplifying the second integral yields the equation
$$\mathcal{N} \int \left[ x_k\, e^{-\beta H} \right]_{x_k = a}^{x_k = b}\, d\Gamma_k + \beta\, \mathcal{N} \int x_k \frac{\partial H}{\partial x_k}\, e^{-\beta H}\, d\Gamma = 1.$$
The first term is usually zero, either because $x_k$ is zero at the limits, or because the energy goes to infinity at those limits. In that case, the equipartition theorem for the canonical ensemble follows immediately
$$\mathcal{N} \int x_k \frac{\partial H}{\partial x_k}\, e^{-\beta H}\, d\Gamma = \left\langle x_k \frac{\partial H}{\partial x_k} \right\rangle = \frac{1}{\beta} = k_\text{B} T.$$
Here, the averaging symbolized by $\langle \cdots \rangle$ is the ensemble average taken over the canonical ensemble.
The microcanonical ensemble
In the microcanonical ensemble, the system is isolated from the rest of the world, or at least very weakly coupled to it. Hence, its total energy is effectively constant; to be definite, we say that the total energy is confined between $E$ and $E + \Delta E$. For a given energy $E$ and spread $\Delta E$, there is a region of phase space $\Sigma$ in which the system has that energy, and the probability of each state in that region of phase space is equal, by the definition of the microcanonical ensemble. Given these definitions, the equipartition average of phase-space variables $x_m$ (which could be either $q_k$ or $p_k$) and $x_n$ is given by
$$\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \frac{1}{\Sigma} \int_{E < H < E + \Delta E} x_m \frac{\partial H}{\partial x_n}\, d\Gamma = \frac{\Delta E}{\Sigma} \frac{\partial}{\partial E} \int_{H < E} x_m \frac{\partial (H - E)}{\partial x_n}\, d\Gamma,$$
where the last equality follows because $E$ is a constant that does not depend on $x_n$. Integrating by parts yields the relation
$$\int_{H < E} x_m \frac{\partial (H - E)}{\partial x_n}\, d\Gamma = \int_{H < E} \left[ \frac{\partial}{\partial x_n} \left( x_m (H - E) \right) - \delta_{mn} (H - E) \right] d\Gamma = \delta_{mn} \int_{H < E} (E - H)\, d\Gamma,$$
since the first term on the right hand side of the first line is zero (it can be rewritten as an integral of $H - E$ on the hypersurface where $H = E$).
Substitution of this result into the previous equation yields
$$\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \delta_{mn} \frac{\Delta E}{\Sigma} \frac{\partial}{\partial E} \int_{H < E} (E - H)\, d\Gamma = \delta_{mn} \frac{\Delta E}{\Sigma} \int_{H < E} d\Gamma = \delta_{mn} \frac{\Delta E\, \Omega}{\Sigma}.$$
Since $\Sigma = \Delta E\, \rho = \Delta E\, (\partial \Omega / \partial E)$, the equipartition theorem follows:
$$\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \delta_{mn} \left( \frac{\partial \ln \Omega}{\partial E} \right)^{-1} = \delta_{mn} k_\text{B} T.$$
Thus, we have derived the general formulation of the equipartition theorem
$$\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \delta_{mn} k_\text{B} T,$$
which was so useful in the applications described above.
Limitations
Requirement of ergodicity
The law of equipartition holds only for ergodic systems in thermal equilibrium, which implies that all states with the same energy must be equally likely to be populated. Consequently, it must be possible to exchange energy among all its various forms within the system, or with an external heat bath in the canonical ensemble. The number of physical systems that have been rigorously proven to be ergodic is small; a famous example is the hard-sphere system of Yakov Sinai. The requirements for isolated systems to ensure ergodicity—and thus equipartition—have been studied, and provided motivation for the modern chaos theory of dynamical systems. A chaotic Hamiltonian system need not be ergodic, although that is usually a good assumption.
A commonly cited counter-example where energy is not shared among its various forms and where equipartition does not hold in the microcanonical ensemble is a system of coupled harmonic oscillators. If the system is isolated from the rest of the world, the energy in each normal mode is constant; energy is not transferred from one mode to another. Hence, equipartition does not hold for such a system; the amount of energy in each normal mode is fixed at its initial value. If sufficiently strong nonlinear terms are present in the energy function, energy may be transferred between the normal modes, leading to ergodicity and rendering the law of equipartition valid. However, the Kolmogorov–Arnold–Moser theorem states that energy will not be exchanged unless the nonlinear perturbations are strong enough; if they are too small, the energy will remain trapped in at least some of the modes.
Another simple example is an ideal gas of a finite number of colliding particles in a round vessel. Due to the vessel's symmetry, the angular momentum of such a gas is conserved. Therefore, not all states with the same energy are populated. This results in the mean particle energy being dependent on the mass of this particle, and also on the masses of all the other particles.
Another way ergodicity can be broken is by the existence of nonlinear soliton symmetries. In 1953, Fermi, Pasta, Ulam and Tsingou conducted computer simulations of a vibrating string that included a non-linear term (quadratic in one test, cubic in another, and a piecewise linear approximation to a cubic in a third). They found that the behavior of the system was quite different from what intuition based on equipartition would have led them to expect. Instead of the energies in the modes becoming equally shared, the system exhibited a very complicated quasi-periodic behavior. This puzzling result was eventually explained by Kruskal and Zabusky in 1965 in a paper which, by connecting the simulated system to the Korteweg–de Vries equation, led to the development of soliton mathematics.
Failure due to quantum effects
The law of equipartition breaks down when the thermal energy is significantly smaller than the spacing between energy levels. Equipartition no longer holds because it is a poor approximation to assume that the energy levels form a smooth continuum, which is required in the derivations of the equipartition theorem above. Historically, the failures of the classical equipartition theorem to explain specific heats and black-body radiation were critical in showing the need for a new theory of matter and radiation, namely, quantum mechanics and quantum field theory.
To illustrate the breakdown of equipartition, consider the average energy in a single (quantum) harmonic oscillator, which was discussed above for the classical case. Neglecting the irrelevant zero-point energy term, since it can be factored out of the exponential functions involved in the probability distribution, the quantum harmonic oscillator energy levels are given by $E_n = n h \nu$, where $h$ is the Planck constant, $\nu$ is the fundamental frequency of the oscillator, and $n$ is an integer. The probability of a given energy level being populated in the canonical ensemble is given by its Boltzmann factor
$$P(E_n) = \frac{e^{-n \beta h \nu}}{Z},$$
where $\beta = 1/(k_\text{B} T)$ and the denominator $Z$ is the partition function, here a geometric series
$$Z = \sum_{n=0}^{\infty} e^{-n \beta h \nu} = \frac{1}{1 - e^{-\beta h \nu}}.$$
Its average energy is given by
$$\left\langle H \right\rangle = \sum_{n=0}^{\infty} E_n P(E_n) = \frac{1}{Z} \sum_{n=0}^{\infty} n h \nu\, e^{-n \beta h \nu} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\partial \ln Z}{\partial \beta}.$$
Substituting the formula for $Z$ gives the final result
$$\left\langle H \right\rangle = \frac{h \nu}{e^{\beta h \nu} - 1}.$$
At high temperatures, when the thermal energy $k_\text{B} T$ is much greater than the spacing $h\nu$ between energy levels, the exponential argument $\beta h \nu$ is much less than one and the average energy becomes $k_\text{B} T$, in agreement with the equipartition theorem (Figure 10). However, at low temperatures, when $h\nu \gg k_\text{B} T$, the average energy goes to zero—the higher-frequency energy levels are "frozen out" (Figure 10). As another example, the internal excited electronic states of a hydrogen atom do not contribute to its specific heat as a gas at room temperature, since the thermal energy (roughly 0.025 eV) is much smaller than the spacing between the lowest and next higher electronic energy levels (roughly 10 eV).
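The crossover between the two regimes is easy to see numerically; a sketch working in the dimensionless variable $x = h\nu/(k_\text{B}T)$:

```python
import numpy as np

def avg_energy_over_kT(x):
    """Quantum oscillator <H>/(kB*T), with x = h*nu/(kB*T), zero point dropped."""
    return x / np.expm1(x)  # expm1(x) = e**x - 1, accurate for small x

for x in (0.01, 1.0, 10.0):  # high temperature, intermediate, low temperature
    print(x, avg_energy_over_kT(x))
# x -> 0 gives ~1 (classical equipartition); x = 10 gives ~5e-4 (frozen out)
```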
Similar considerations apply whenever the energy level spacing is much larger than the thermal energy. This reasoning was used by Max Planck and Albert Einstein, among others, to resolve the ultraviolet catastrophe of black-body radiation. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy $k_\text{B} T$, there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as $\nu$ goes to infinity; moreover, Planck's law of black-body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning.
Other, more subtle quantum effects can lead to corrections to equipartition, such as identical particles and continuous symmetries. The effects of identical particles can be dominant at very high densities and low temperatures. For example, the valence electrons in a metal can have a mean kinetic energy of a few electronvolts, which would normally correspond to a temperature of tens of thousands of kelvins. Such a state, in which the density is high enough that the Pauli exclusion principle invalidates the classical approach, is called a degenerate fermion gas. Such gases are important for the structure of white dwarf and neutron stars. At low temperatures, a fermionic analogue of the Bose–Einstein condensate (in which a large number of identical particles occupy the lowest-energy state) can form; such superfluid electrons are responsible for superconductivity.
| Physical sciences | Thermodynamics | Physics |
516150 | https://en.wikipedia.org/wiki/Transverse%20mode | Transverse mode | A transverse mode of electromagnetic radiation is a particular electromagnetic field pattern of the radiation in the plane perpendicular (i.e., transverse) to the radiation's propagation direction. Transverse modes occur in radio waves and microwaves confined to a waveguide, and also in light waves in an optical fiber and in a laser's optical resonator.
Transverse modes occur because of boundary conditions imposed on the wave by the waveguide. For example, a radio wave in a hollow metal waveguide must have zero tangential electric field amplitude at the walls of the waveguide, so the transverse pattern of the electric field of waves is restricted to those that fit between the walls. For this reason, the modes supported by a waveguide are quantized. The allowed modes can be found by solving Maxwell's equations for the boundary conditions of a given waveguide.
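For the standard rectangular hollow metal waveguide, solving Maxwell's equations with these boundary conditions gives the well-known cutoff frequencies $f_c = \tfrac{c}{2}\sqrt{(m/a)^2 + (n/b)^2}$ for the TE$_{mn}$ modes; below cutoff a mode cannot propagate. A minimal Python sketch, assuming the dimensions of the common WR-90 X-band guide as an example:

```python
import math

c = 299_792_458.0          # speed of light, m/s
a, b = 22.86e-3, 10.16e-3  # assumed guide cross-section, m (WR-90)

def cutoff_hz(m, n):
    """Cutoff frequency of the TE_mn mode of a rectangular waveguide."""
    return (c / 2) * math.hypot(m / a, n / b)

for mode in ((1, 0), (2, 0), (0, 1), (1, 1)):
    print(mode, f"{cutoff_hz(*mode) / 1e9:.2f} GHz")
# TE10 ~ 6.56 GHz is the dominant (lowest-cutoff) mode for this guide
```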
Types of modes
Unguided electromagnetic waves in free space, or in a bulk isotropic dielectric, can be described as a superposition of plane waves; these can be described as TEM modes as defined below.
However in any sort of waveguide where boundary conditions are imposed by a physical structure, a wave of a particular frequency can be described in terms of a transverse mode (or superposition of such modes). These modes generally follow different propagation constants. When two or more modes have an identical propagation constant along the waveguide, then there is more than one modal decomposition possible in order to describe a wave with that propagation constant (for instance, a non-central Gaussian laser mode can be equivalently described as a superposition of Hermite-Gaussian modes or Laguerre-Gaussian modes which are described below).
Waveguides
Modes in waveguides can be classified as follows:
Transverse electromagnetic (TEM) modes Neither electric nor magnetic field in the direction of propagation.
Transverse electric (TE) modes No electric field in the direction of propagation. These are sometimes called H modes because there is only a magnetic field along the direction of propagation (H is the conventional symbol for magnetic field).
Transverse magnetic (TM) modes No magnetic field in the direction of propagation. These are sometimes called E modes because there is only an electric field along the direction of propagation.
Hybrid modes Non-zero electric and magnetic fields in the direction of propagation. | Physical sciences | Electromagnetic radiation | Physics |
516352 | https://en.wikipedia.org/wiki/Nazca%20plate | Nazca plate | The Nazca plate or Nasca plate, named after the Nazca region of southern Peru, is an oceanic tectonic plate in the eastern Pacific Ocean basin off the west coast of South America. The ongoing subduction, along the Peru–Chile Trench, of the Nazca plate under the South American plate is largely responsible for the Andean orogeny. The Nazca plate is bounded on the west by the Pacific plate and to the south by the Antarctic plate through the East Pacific Rise and the Chile Rise, respectively. The movement of the Nazca plate over several hotspots has created some volcanic islands as well as east–west running seamount chains that subduct under South America. Nazca is a relatively young plate in terms of the age of its rocks and its existence as an independent plate, having been formed from the breakup of the Farallon plate about 23 million years ago. The oldest rocks of the plate are about 50 million years old.
Boundaries
East Pacific and Chile Rise
A triple junction, the Chile triple junction, occurs on the seafloor of the Pacific Ocean off Taitao and Tres Montes Peninsula at the southern coast of Chile. Here, three tectonic plates meet: the Nazca plate, the South American plate, and the Antarctic plate.
Peru–Chile Trench
The eastern margin is a convergent boundary subduction zone under the South American plate and the Andes Mountains, forming the Peru–Chile Trench. The southern side is a divergent boundary with the Antarctic plate, the Chile Rise, where seafloor spreading permits magma to rise. The western side is a divergent boundary with the Pacific plate, forming the East Pacific Rise. The northern side is a divergent boundary with the Cocos plate, the Cocos–Nazca spreading centre.
The subduction of the Nazca plate under southern Chile has a history of producing massive earthquakes, including the largest ever recorded on earth, the moment magnitude 9.5 1960 Valdivia earthquake.
Intraplate features
Hotspots
A second triple junction occurs at the northwest corner of the plate where the Nazca, Cocos, and Pacific plates all join off the coast of Colombia. Yet another triple junction occurs at the southwest corner at the intersection of the Nazca, Pacific, and Antarctic plates off the coast of southern Chile. At each of these triple junctions an anomalous microplate exists, the Galapagos microplate at the northern junction and the Juan Fernandez microplate at the southern junction. The Easter Island microplate is a third microplate that is located just north of the Juan Fernandez Microplate and lies just west of Easter Island.
Aseismic ridges
The Carnegie Ridge is an aseismic ridge on the ocean floor of the northern Nazca plate that includes the Galápagos archipelago at its western end. It is being subducted under South America with the rest of the Nazca plate.
Plate motion
The absolute motion of the Nazca plate has been calibrated as eastward motion (88°), one of the fastest absolute motions of any tectonic plate. The subducting Nazca plate, which exhibits unusual flat slab subduction, is tearing as well as deforming as it is subducted (Barzangi and Isacks). The subduction has formed and continues to form the volcanic Andes Mountain Range. Deformation of the Nazca plate even affects the geography of Bolivia, far to the east (Tinker et al.). The 1994 Bolivia earthquake occurred on the Nazca plate; with a magnitude of 8.2, it was at that time the strongest instrumentally recorded deep-focus earthquake.
Aside from the Juan Fernández Islands, this area has very few other islands that are affected by the earthquakes resulting from complicated movements at these junctions.
Geologic history
The precursor of the Nazca plate, Juan de Fuca plate, and the Cocos plate was the Farallon plate, which split in the late Oligocene, about 22.8 Mya, a date arrived at by interpreting magnetic anomalies. Subduction under the South American continent began about 140 Mya, although the formation of the high parts of the Central Andes and the Bolivian orocline did not occur until 45 Mya. It has been suggested that the mountains were forced up by the subduction of the older and heavier parts of the plate, which sank more quickly into the mantle.
| Physical sciences | Tectonic plates | Earth science |
516680 | https://en.wikipedia.org/wiki/Excited%20state | Excited state | In quantum mechanics, an excited state of a system (such as an atom, molecule or nucleus) is any quantum state of the system that has a higher energy than the ground state (that is, more energy than the absolute minimum). Excitation refers to an increase in energy level above a chosen starting point, usually the ground state, but sometimes an already excited state. The temperature of a group of particles is indicative of the level of excitation (with the notable exception of systems that exhibit negative temperature).
The lifetime of a system in an excited state is usually short: spontaneous or induced emission of a quantum of energy (such as a photon or a phonon) usually occurs shortly after the system is promoted to the excited state, returning the system to a state with lower energy (a less excited state or the ground state). This return to a lower energy level is known as de-excitation and is the inverse of excitation.
Long-lived excited states are often called metastable. Long-lived nuclear isomers and singlet oxygen are two examples of this.
Atomic excitation
Atoms can be excited by heat, electricity, or light. The hydrogen atom provides a simple example of this concept.
The ground state of the hydrogen atom has the atom's single electron in the lowest possible orbital (that is, the spherically symmetric "1s" wave function, which has been demonstrated to have the lowest possible quantum numbers). By giving the atom additional energy (for example, by absorption of a photon of an appropriate energy), the electron moves into an excited state (one with one or more quantum numbers greater than the minimum possible). While the electron shifts between two states—a transition that happens very quickly—it is in a superposition of both states. If the photon has too much energy, the electron will cease to be bound to the atom, and the atom will become ionized.
After excitation the atom may return to the ground state or a lower excited state, by emitting a photon with a characteristic energy. Emission of photons from atoms in various excited states leads to an electromagnetic spectrum showing a series of characteristic emission lines (including, in the case of the hydrogen atom, the Lyman, Balmer, Paschen and Brackett series).
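The wavelengths of those series follow from the Rydberg formula; a minimal Python sketch, using the standard tabulated Rydberg constant for hydrogen:

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, 1/m

def wavelength_nm(n_upper, n_lower):
    """Emission wavelength for a hydrogen transition n_upper -> n_lower."""
    inv_lambda = R_H * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

print(wavelength_nm(2, 1))  # Lyman-alpha, ~121.6 nm (ultraviolet)
print(wavelength_nm(3, 2))  # Balmer H-alpha, ~656.5 nm (visible red)
print(wavelength_nm(4, 3))  # Paschen-alpha, ~1875 nm (infrared)
```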
An atom in a high excited state is termed a Rydberg atom. A system of highly excited atoms can form a long-lived condensed excited state, Rydberg matter.
Perturbed gas excitation
A collection of molecules forming a gas can be considered in an excited state if one or more molecules are elevated to kinetic energy levels such that the resulting velocity distribution departs from the equilibrium Boltzmann distribution. This phenomenon has been studied in the case of a two-dimensional gas in some detail, analyzing the time taken to relax to equilibrium.
Calculation of excited states
Excited states are often calculated using coupled cluster, Møller–Plesset perturbation theory, multi-configurational self-consistent field, configuration interaction, and time-dependent density functional theory.
Excited-state absorption
The excitation of a system (an atom or molecule) from one excited state to a higher-energy excited state with the absorption of a photon is called excited-state absorption (ESA). Excited-state absorption is possible only when an electron has been already excited from the ground state to a lower excited state. The excited-state absorption is usually an undesired effect, but it can be useful in upconversion pumping. Excited-state absorption measurements are done using pump–probe techniques such as flash photolysis. However, it is not easy to measure them compared to ground-state absorption, and in some cases complete bleaching of the ground state is required to measure excited-state absorption.
Reaction
A further consequence of excited-state formation may be reaction of the atom or molecule in its excited state, as in photochemistry.
| Physical sciences | Quantum mechanics | Physics |
516931 | https://en.wikipedia.org/wiki/Map%20%28mathematics%29 | Map (mathematics) | In mathematics, a map or mapping is a function in its general sense. These terms may have originated as from the process of making a geographical map: mapping the Earth surface to a sheet of paper.
The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial. In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory.
Maps as functions
In many branches of mathematics, the term map is used to mean a function, sometimes with a specific property of particular importance to that branch. For instance, a "map" is a "continuous function" in topology, a "linear transformation" in linear algebra, etc.
Some authors, such as Serge Lang, use "function" only to refer to maps in which the codomain is a set of numbers (i.e. a subset of R or C), and reserve the term mapping for more general functions.
Maps of certain kinds have been given specific names. These include homomorphisms in algebra, isometries in geometry, operators in analysis and representations in group theory.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems.
A partial map is a partial function. Related terminology such as domain, codomain, injective, and continuous can be applied equally to maps and functions, with the same meaning. All these usages can be applied to "maps" as general functions or as functions with special properties.
As morphisms
In category theory, "map" is often used as a synonym for "morphism" or "arrow", which is a structure-respecting function and thus may imply more structure than "function" does. For example, a morphism in a concrete category (i.e. a morphism that can be viewed as a function) carries with it the information of its domain (the source of the morphism) and its codomain (the target ). In the widely used definition of a function , is a subset of consisting of all the pairs for . In this sense, the function does not capture the set that is used as the codomain; only the range is determined by the function.
| Mathematics | Functions: General | null |
517018 | https://en.wikipedia.org/wiki/Cercozoa | Cercozoa | Cercozoa (now synonymised with Filosa) is a phylum of diverse single-celled eukaryotes. They lack shared morphological characteristics at the microscopic level, and are instead united by molecular phylogenies of rRNA and actin or polyubiquitin. They were the first major eukaryotic group to be recognized mainly through molecular phylogenies. They are the natural predators of many species of bacteria. They are closely related to the phylum Retaria, comprising amoeboids that usually have complex shells, and together form a supergroup called Rhizaria.
Characteristics
The group includes most amoeboids and flagellates that feed by means of filose pseudopods. These may be restricted to part of the cell surface, but there is never a true cytostome or mouth as found in many other protozoa. They show a variety of forms and have proven difficult to define in terms of structural characteristics, although their unity is strongly supported by phylogenetic studies.
Diversity
Some cercozoans are grouped by whether they are "filose" or "reticulose" in the behavior of their cytoskeleton when moving:
Filose, meaning their pseudopods develop as filopodia. For example:
Euglyphids, filose amoebae with shells of siliceous scales or plates, which are commonly found in soils, nutrient-rich waters, and on aquatic plants.
Gromia, a shelled amoeba.
Tectofilosids, filose amoebae that produce organic shells.
Cercomonads, common soil-dwelling amoeboflagellates.
Reticulose, meaning they form a reticulating net of pseudopods. For example:
Chlorarachniophytes, set apart by the presence of chloroplasts bound by four membranes and still possess a vestigial nucleus, called a nucleomorph. As such, they have been of great interest to researchers studying the endosymbiotic origins of organelles.
Other important ecological groups are:
Granofilosea, comprising several groups traditionally considered heliozoa such as Heliomonadida, Desmothoracida and Gymnosphaerida.
Phaeodaria, marine protozoa previously considered radiolarians.
Ecology
As well as being highly diverse in morphology and physiology, Cercozoa also shows high ecological diversity. The phylum Cercozoa includes many of the most abundant and ecologically significant protozoa in soil, marine and freshwater ecosystems.
Soil-dwelling cercozoans are one of the dominant groups of free-living eukaryotic microorganisms found in temperate soils, accounting for around 30% of identifiable protozoan DNA in arid or semi-arid soils and 15% in more humid soils. In transcriptomic analyses they account for 40-60% of all identifiable protozoan RNA found in forest and grassland soils. They also comprise 9-24% of all operational taxonomic units found in the ocean floor.
Some cercozoa are coprophilic or coprozoic, meaning they use feces as a source of nutrients or as transport through animal hosts. The faecal habitat is an understudied reservoir of microbial eukaryotic diversity, dominated by amoeboflagellates from the phylum Cercozoa. Strongly coprophilic examples of cercozoa are the flagellates Cercomonas, Proleptomonas and Helkesimastix, and the sorocarpic amoeba Guttulinopsis. Many new cercozoan lineages, especially among sarcomonads, have been discovered through phylogenetic sampling of feces because they appear preferentially in this medium.
Cercozoan bacterivores (i.e. predators of bacteria) are highly diverse and important in the plant phyllosphere, the leaf surfaces of plants. Particularly sarcomonads, with their ability to encyst, feed and multiply within hours, are perfectly adapted to the fluctuating environmental factors in the phyllosphere. Their predation causes shifts in the bacterial communities: they reduce populations of alphaproteobacteria and betaproteobacteria, which are less resistant to their grazing, in favour of other bacterial populations such as gammaproteobacteria.
Evolution
External evolution
Originally, Cercozoa contained both Filosa and Endomyxa, according to phylogenetic analyses using ribosomal RNA and tubulin. These analyses also confirmed Cercozoa as the sister group of Retaria within the supergroup Rhizaria.
However, the monophyly of the group was still uncertain. Posterior multigene phylogenetic analyses consistently found Cercozoa to be paraphyletic, because Endomyxa clustered next to Retaria instead of Filosa. Because of this, Endomyxa was excluded from Cercozoa, which became a synonym of Filosa.
More recent phylogenomic analyses with better sampling recovered a sister relationship between Filosa (=Cercozoa) and Endomyxa once again, although the modern classification of eukaryotes retains Endomyxa, Cercozoa and Retaria as separate phyla within Rhizaria.
Internal evolution
The phylum Cercozoa previously contained both Filosa and Endomyxa, but in the latest classifications Endomyxa has been excluded, and Cercozoa is now synonymous with Filosa. It is composed of two subphyla: Monadofilosa and Reticulofilosa. According to multigene phylogenetic analyses, Monadofilosa is a robust clade, in which the deepest branching group is Metromonadea, followed by Helkesea as the second group (together forming the paraphyletic Eoglissa) before the divergence of the clade Ventrifilosa (Imbricatea, Sarcomonadea and Thecofilosea). On the other hand, Reticulofilosa is probably paraphyletic, with Granofilosea diverging earlier than Chlorarachnea, which makes Chlorarachnea the sister group of Monadofilosa.
A more recent phylogenomic analysis recovered both Monadofilosa and Reticulofilosa as monophyletic within the clade Filosa.
In addition to the known Granofilosea, Chlorarachnea and Monadofilosa, a variety of clades inside Cercozoa have been discovered in other analyses and have slowly been described and named, such as Tremulida (previously known as Novel Clade 11) and Aquavolonida (Novel Clade 10), although their specific positions among the two main cercozoan subphyla have yet to be refined. These two orders have been classified as the class Skiomonadea, within Reticulofilosa.
Classification
The classification of Cercozoa was revised in 2018:
Subphylum Reticulofilosa
Class Chlorarachnea
Class Granofilosea
Class Skiomonadea
Subphylum Monadofilosa
Superclass Eoglissa
Class Metromonadea
Class Helkesea
Superclass Ventrifilosa
Class Sarcomonadea
Class Imbricatea
Class Thecofilosea
Gallery
| Biology and health sciences | SAR supergroup | Plants |
517253 | https://en.wikipedia.org/wiki/European%20long-distance%20paths | European long-distance paths | The European long-distance paths (E-paths) are a network of long-distance footpaths that traverse Europe. While most long-distance footpaths in Europe are located in just one country or region, each of these numbered European long-distance paths passes through many countries.
The first long-distance hiking trail in Europe was the National Blue Trail of Hungary, established in 1938. The formation of the European Union made transnational hiking trails possible. Today, the network consists of 12 paths crisscrossing Europe. In general, the routes connect and make use of existing national and local trails such as the GR footpaths.
The paths are officially designated by the European Ramblers' Association.
List
| Technology | Ground transportation networks | null |
517512 | https://en.wikipedia.org/wiki/Draft%20horse | Draft horse | A draft horse (US) or draught horse (UK), also known as dray horse, carthorse, work horse or heavy horse, is a large horse bred to be a working animal hauling freight and doing heavy agricultural tasks such as plowing. There are a number of breeds, with varying characteristics, but all share common traits of strength, patience, and a docile temperament.
While indispensable to generations of pre-industrial farmers, draft horses are used today for a multitude of purposes, including farming, draft horse showing, logging, recreation, and other uses. Draft breeds have been crossbred with light riding breeds such as the Thoroughbred to create sport horses or warmbloods. While most draft horses are used for driving, they can be ridden and some of the lighter draft breeds are capable performers under saddle.
Characteristics
Draft horses are recognizable by their extremely muscular build. They tend to have broad, short backs with powerful hindquarters. In general, they are taller and tend to have heavier bone and a more upright shoulder than riding horses, producing conformation that is well suited for pulling. Many draft breeds have heavier hair, called feathering, on their lower legs. Height and weight vary considerably from breed to breed.
Background
Humans domesticated horses and used them to perform a variety of duties. One type of horse-powered work was the hauling of heavy loads, plowing fields, and other tasks that required pulling ability. A heavy, calm, patient, and well-muscled animal was desired for this work. Conversely, a light, more energetic horse was needed for riding and rapid transport. Thus, to the extent possible, a certain amount of selective breeding was used to develop different types of horse for different types of work.
It is a common misunderstanding that the Destrier that carried the armoured knight of the Middle Ages had the size and conformation of a modern draft horse, and some of these Medieval war horses may have provided some bloodlines for some of the modern draft breeds. The reality was that the high-spirited, quick-moving Destrier was closer to the size, build, and temperament of a modern Andalusian or Friesian. There also were horses of more phlegmatic temperaments used for pulling military wagons or performing ordinary farm work which provided bloodlines of the modern draft horse. Records indicate that even medieval drafts were not as large as those today. Of the modern draft breeds, the Percheron probably has the closest ties to the medieval war horse.
By the 19th century horses weighing more than that also moved at a quick pace were in demand. Tall stature, muscular backs, and powerful hindquarters made the draft horse a source of horsepower for farming, hauling freight and moving passengers. The advent of railroads increased demand for working horses, as a growing economy still needed transport over the 'last mile' between the goods yard or station and the final customer. Even in the 20th century, until motor vehicles became an affordable and reliable substitute, draft horses were used for practical work.
Over half a million draft horses were used during World War I. The British were importing American draft horses to supplement their dwindling stock even before America joined the war, preferring Percheron crosses which they said had "great endurance, fine physique, soundness, activity, willingness to work, and almost unfailing good temper". British buyers were buying 10,000 to 25,000 American horses and mules a month, eventually making up about two-thirds of British Army war horses.
In the late 19th century and early 20th century, thousands of draft horses were imported from Western Europe into the United States. Percherons came from France, Belgians from Brabant, Shires from England, Clydesdales from Scotland. Many American draft breed registries were founded in the late 19th century. The Percheron, with 40,000 broodmares registered as of 1915, was America's most numerous draft breed at the turn of the 20th century. A breed developed exclusively in the U.S. was the American Cream Draft, which had a stud book established by the 1930s.
Beginning in the late 19th century, and with increasing mechanization in the 20th century, especially following World War I in the US and after World War II in Europe, the popularity of the internal combustion engine, and particularly the tractor, reduced the need for the draft horse. Many were sold to slaughter for horse meat and a number of breeds went into significant decline.
Modern uses
Today, draft horses can be seen in horse shows, pulling competitions, heavy horse trials, parades pulling large wagons, and pulling tourist carriages. However, they are still seen on some smaller farms in the US and Europe. They are particularly popular with agrarian groups such as the Amish and Mennonites. Draft horses are still used for logging, a forestry management practice to remove logs from dense woodland where there is insufficient space for mechanized vehicles or for other conservation considerations.
Draft horse breeds have played a significant role in the development of many warmblood breeds, popular today in advanced level equine sports.
Small areas still exist where draft horses are widely used as transportation due to legislation preventing automotive traffic, such as on Mackinac Island in the United States.
Care
Management of a large draft horse can be costly, including feed, shoeing, and veterinary care. Although many draft horses can work without a need for shoes, if they are required, farriers may charge twice the price to shoe a draft horse as a light riding horse because of the extra labor and specialized equipment required. Historically, draft horses were shod with horseshoes that were significantly wider and heavier than those for other types of horses, custom-made, often with caulkins.
The draft horse's metabolism is a bit slower than lighter horse breeds, more akin to that of ponies, requiring less feed per pound of body weight. This is possibly due to their calmer nature. Nonetheless, because of their sheer size, most require a significant amount of feed per day. Generally a supplement to balance nutrients is preferred over a large quantity of grain. They consume hay or other forage from 1.5% to 3% of their body weight per day, depending on work level. They can also drink a large volume of water each day. Overfeeding can lead to obesity, and risk of laminitis can be a concern.
World records
The largest horse in recorded history was probably a Shire born in 1846 named Sampson (renamed Mammoth); his height and estimated peak weight remain the largest reliably recorded.
A Shire gelding named Goliath (1977–2001) was the Guinness Book of World Records record holder for the world's tallest living horse until his death. Big Jake (2001–2021), an American Belgian, held the record for tallest living horse from 2010 until his death in 2021. As of 2024, there is no living record holder.
Draft breeds
The following breeds of horse are considered draft breeds:
American Belgian Draft
American Cream Draft
Ardennais
Auxois
Belgian Draught/Brabant
Boulonnais
Breton
Clydesdale
Comtois
Dølehest
Dutch Draft
Estonian Draft
Finnhorse
Fjord
Freiberger (Franches-Montagnes)
Friesian
Haflinger
Irish Draught
Italian Heavy Draft
Jutland
Latvian
Lithuanian Heavy Draught
Malopolski
Međimurje (Murakoz)
Noriker
North Swedish Horse
Novoolexandrian Draught
Percheron
Pfalz-Ardenner
Rhenish German Coldblood (Rhineland Heavy Draught)
Russian Heavy Draft
Schleswig Coldblood
Shire
Sokolski
South German Coldblood
Soviet Heavy Draft
Suffolk Punch
Swedish Ardennes
Trait du Maine (extinct)
Trait du Nord
Vladimir Heavy Draft
| Technology | Agriculture, labor and economy | null |
517682 | https://en.wikipedia.org/wiki/Zero-dimensional%20space | Zero-dimensional space | In mathematics, a zero-dimensional topological space (or nildimensional space) is a topological space that has dimension zero with respect to one of several inequivalent notions of assigning a dimension to a given topological space. A graphical illustration of a zero-dimensional space is a point.
Definition
Specifically:
A topological space is zero-dimensional with respect to the Lebesgue covering dimension if every open cover of the space has a refinement that is a cover by disjoint open sets.
A topological space is zero-dimensional with respect to the finite-to-finite covering dimension if every finite open cover of the space has a refinement that is a finite open cover such that any point in the space is contained in exactly one open set of this refinement.
A topological space is zero-dimensional with respect to the small inductive dimension if it has a base consisting of clopen sets.
The three notions above agree for separable, metrisable spaces.
Properties of spaces with small inductive dimension zero
A zero-dimensional Hausdorff space is necessarily totally disconnected, but the converse fails. However, a locally compact Hausdorff space is zero-dimensional if and only if it is totally disconnected.
Zero-dimensional Polish spaces are a particularly convenient setting for descriptive set theory. Examples of such spaces include the Cantor space and Baire space.
Hausdorff zero-dimensional spaces are precisely the subspaces of topological powers $2^I$, where $2 = \{0, 1\}$ is given the discrete topology. Such a space is sometimes called a Cantor cube. If $I$ is countably infinite, $2^I$ is the Cantor space.
Manifolds
All points of a zero-dimensional manifold are isolated.
| Mathematics | Geometry: General | null |
518211 | https://en.wikipedia.org/wiki/Cloaca | Cloaca | A cloaca ( ), : cloacae ( or ), or vent, is the rear orifice that serves as the only opening for the digestive (rectum), reproductive, and urinary tracts (if present) of many vertebrate animals. All amphibians, reptiles, birds, and a few mammals (monotremes, afrosoricids, and marsupial moles, etc.) have this orifice, from which they excrete both urine and feces; this is in contrast to most placental mammals, which have separate orifices for evacuation and reproduction. Excretory openings with analogous purpose in some invertebrates are also sometimes called cloacae. Mating through the cloaca is called cloacal copulation and cloacal kissing.
The cloacal region is also often associated with a secretory organ, the cloacal gland, which has been implicated in the scent-marking behavior of some reptiles, marsupials, amphibians, and monotremes.
Etymology
The word is from the Latin verb cluo, "(I) cleanse", thus the noun cloaca, "sewer, drain".
Birds
Birds reproduce using their cloaca; this occurs during a cloacal kiss in most birds. Birds that mate using this method touch their cloacae together, in some species for only a few seconds, sufficient time for sperm to be transferred from the male to the female. For palaeognaths and waterfowl, the males do not use the cloaca for reproduction, but have a phallus.
One study has looked into birds that use their cloaca for cooling.
Among falconers, the word vent is also a verb meaning "to defecate".
Fish
Among fish, a true cloaca is present only in elasmobranchs (sharks and rays) and lobe-finned fishes. In lampreys and in some ray-finned fishes, part of the cloaca remains in the adult to receive the urinary and reproductive ducts, although the anus always opens separately. In chimaeras and most teleosts, however, all three openings are entirely separated.
Mammals
With a few exceptions noted below, mammals have no cloaca. Even in the marsupials that have one, the cloaca is partially subdivided into separate regions for the anus and urethra.
Monotremes
The monotremes (egg-laying mammals) possess a true cloaca.
Marsupials
In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. This is one of the features of marsupials (and monotremes) that suggest their basal nature, as the amniotes from which mammals evolved had a cloaca, and probably so did the earliest mammals.
Unlike other marsupials, marsupial moles have a true cloaca. This fact has been used to argue that they are not marsupials.
Placentals
Most adult placentals have no cloaca. In the embryo, the embryonic cloaca divides into a posterior region that becomes part of the anus, and an anterior region that develops depending on sex: in males, it forms the penile urethra, while in females, it develops into the vestibule or urogenital sinus that receives the urethra and vagina. However, some placentals retain a cloaca as adults: those are members of the order Afrosoricida (small mammals native to Africa) as well as pikas, beavers, and some shrews.
Being placental animals, humans have an embryonic cloaca which divides into separate tracts during the development of the urinary and reproductive organs. However, a few human congenital disorders result in persons being born with a cloaca, including persistent cloaca and sirenomelia (mermaid syndrome).
Reptiles
In reptiles, the cloaca consists of the urodeum, proctodeum, and coprodeum. Some species have modified cloacae for increased gas exchange (see reptile respiration and reptile reproduction). This is where reproductive activity occurs.
Cloacal respiration in animals
Some turtles, especially those specialized in diving, are highly reliant on cloacal respiration during dives. They accomplish this by having a pair of accessory air bladders connected to the cloaca, which can absorb oxygen from the water.
Sea cucumbers use cloacal respiration. The constant flow of water through it has allowed various fish, polychaete worms and even crabs to specialize to take advantage of it while living protected inside the cucumber. At night, many of these species emerge through the anus of the sea cucumber in search of food.
| Biology and health sciences | Gastrointestinal tract | Biology |
518397 | https://en.wikipedia.org/wiki/Angle%20of%20repose | Angle of repose | The angle of repose, or critical angle of repose, of a granular material is the steepest angle of descent or dip relative to the horizontal plane on which the material can be piled without slumping. At this angle, the material on the slope face is on the verge of sliding. The angle of repose can range from 0° to 90°. The morphology of the material affects the angle of repose; smooth, rounded sand grains cannot be piled as steeply as can rough, interlocking sands. The angle of repose can also be affected by additions of solvents. If a small amount of water is able to bridge the gaps between particles, electrostatic attraction of the water to mineral surfaces increases the angle of repose, and related quantities such as the soil strength.
When bulk granular materials are poured onto a horizontal surface, a conical pile forms. The internal angle between the surface of the pile and the horizontal surface is known as the angle of repose and is related to the density, surface area and shapes of the particles, and the coefficient of friction of the material. Material with a low angle of repose forms flatter piles than material with a high angle of repose.
The term has a related usage in mechanics, where it refers to the maximum angle at which an object can rest on an inclined plane without sliding down. This angle is equal to the arctangent of the coefficient of static friction μs between the surfaces.
Applications of theory
The angle of repose is sometimes used in the design of equipment for the processing of particulate solids. For example, it may be used to design an appropriate hopper or silo to store the material, or to size a conveyor belt for transporting the material. It can also be used in determining whether or not a slope (of a stockpile, or uncompacted gravel bank, for example) would likely collapse; the talus slope is derived from angle of repose and represents the steepest slope a pile of granular material can take. This angle of repose is also crucial in correctly calculating stability in vessels.
It is also commonly used by mountaineers as a factor in analysing avalanche danger in mountainous areas.
Formulation
If the coefficient of static friction μs of a material is known, then a good approximation of the angle of repose θ can be made with the following function, which is reasonably accurate for piles where the individual objects are minuscule and piled in random order:
tan(θ) = μs
where θ is the angle of repose.
A simple free-body diagram can be used to understand the relationship between the angle of repose and the stability of the material on the slope. For the heaped material to resist collapse, the frictional force f must balance the component of the gravitational force acting along the slope, mg sin(θ), where m is the mass of the material, g is the gravitational acceleration, and θ is the slope angle:
f = mg sin(θ)
The frictional force is the product of the coefficient of static friction μs and the normal force N = mg cos(θ):
f = μs mg cos(θ)
Equating the two expressions and cancelling mg gives tan(θ) = μs, where θ is the angle of repose, or the angle at which the slope fails under regular conditions, and μs is the coefficient of static friction of the material on the slope.
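As a numerical sketch of this relationship (the friction coefficient below is an assumed illustrative value, not a measured one):

```python
import math

def angle_of_repose_deg(mu_s: float) -> float:
    """Approximate angle of repose, in degrees, from the coefficient
    of static friction via tan(theta) = mu_s."""
    return math.degrees(math.atan(mu_s))

def slope_is_stable(slope_deg: float, mu_s: float) -> bool:
    """A slope resists bulk sliding while the driving component of
    gravity, m*g*sin(theta), does not exceed the maximum static
    friction, mu_s * m*g*cos(theta); mass and g cancel out."""
    theta = math.radians(slope_deg)
    return math.sin(theta) <= mu_s * math.cos(theta)

mu = 0.6  # hypothetical granular material
print(f"angle of repose ≈ {angle_of_repose_deg(mu):.1f}°")  # ≈ 31.0°
print(slope_is_stable(25.0, mu))  # True: 25° is below the critical angle
print(slope_is_stable(40.0, mu))  # False: 40° exceeds it
```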
Measurement
There are numerous methods for measuring angle of repose and each produces slightly different results. Results are also sensitive to the exact methodology of the experimenter. As a result, data from different labs are not always comparable. One method is the triaxial shear test, another is the direct shear test.
The measured angle of repose may vary with the method used, as described below.
Tilting box method
This method is appropriate for fine-grained, non-cohesive materials with individual particle size less than 10 mm. The material is placed within a box with a transparent side to observe the granular test material. It should initially be level and parallel to the base of the box. The box is slowly tilted until the material begins to slide in bulk, and the angle of the tilt is measured.
Fixed funnel method
The material is poured through a funnel to form a cone. The tip of the funnel should be held close to the growing cone and slowly raised as the pile grows, to minimize the impact of falling particles. Stop pouring the material when the pile reaches a predetermined height or the base a predetermined width. Rather than attempt to measure the angle of the resulting cone directly, divide the height by half the width of the base of the cone. The inverse tangent of this ratio is the angle of repose.
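The height-to-half-width arithmetic described above is direct; a minimal sketch, assuming made-up pile dimensions:

```python
import math

def repose_angle_from_cone_deg(height_m: float, base_width_m: float) -> float:
    """Fixed funnel method: angle of repose from the poured cone's
    height and full base width, via arctan(height / (width / 2))."""
    return math.degrees(math.atan(height_m / (base_width_m / 2)))

# Example: a pile 12 cm tall with a 40 cm wide base.
print(f"{repose_angle_from_cone_deg(0.12, 0.40):.1f}°")  # ≈ 31.0°
```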
Revolving cylinder method
The material is placed within a cylinder with at least one transparent end. The cylinder is rotated at a fixed speed, and the observer watches the material move within it. The effect is similar to watching clothes tumble over one another in a slowly rotating clothes dryer. The granular material assumes a certain angle as it flows within the rotating cylinder. This method is recommended for obtaining the dynamic angle of repose, which may vary from the static angle of repose measured by other methods.
Of various materials
Various materials have characteristic angles of repose, which have been measured and tabulated; all such measurements are approximate.
With different supports
Different supports modify the shape of the pile, as can be seen with sand piles on different supports, although the angle of repose remains the same.
Exploitation by antlion and wormlion (Vermileonidae) larvae
The larvae of the antlions and the unrelated wormlions Vermileonidae trap small insects such as ants by digging conical pits in loose sand, such that the slope of the walls is effectively at the critical angle of repose for the sand. They achieve this by flinging the loose sand out of the pit and permitting the sand to settle at its critical angle of repose as it falls back. Thus, when a small insect, commonly an ant, blunders into the pit, its weight causes the sand to collapse below it, drawing the victim toward the center where the predator that dug the pit lies in wait under a thin layer of loose sand. The larva assists this process by vigorously flicking sand out from the center of the pit when it detects a disturbance. This undermines the pit walls and causes them to collapse toward the center. The sand that the larva flings also pelts the prey with loose rolling material that prevents it from getting any foothold on the easier slopes that the initial collapse of the slope has presented. The combined effect is to bring the prey down to within grasp of the larva, which then can inject venom and digestive fluids.
In geotechnics
| Physical sciences | Soil mechanics | Physics |
518441 | https://en.wikipedia.org/wiki/Mongrel | Mongrel | A mongrel, mutt, or mixed-breed dog is a dog that does not belong to one officially recognized breed, including those that result from intentional breeding. Although the term mixed-breed dog is sometimes preferred, many mongrels have no known purebred ancestors.
Crossbreed dogs, and "designer dogs", while also a mix of breeds, differ from mongrels in being intentionally bred. At other times, the word mongrel has been applied to informally purpose-bred dogs such as curs, which were created at least in part from mongrels, especially if the breed is not officially recognized.
Although mongrels are viewed as of less commercial value than intentionally bred dogs, they are thought to be less susceptible to genetic health problems associated with inbreeding (based on the theory of heterosis), and have enthusiasts and defenders who prefer them to intentionally bred dogs.
Estimates place the prevalence of mongrels at 150 million animals worldwide.
Terminology
Mixed-breed and crossbreed
In the United States, the term mixed-breed is a favored synonym over mongrel among people who wish to avoid negative connotations associated with the latter term. The implication that such dogs must be a mix of defined breeds may stem from an inverted understanding of the origins of dog breeds. Purebred dogs have been, for the most part, artificially created from random-bred populations by human selective breeding with the purpose of enhancing desired physical, behavioral, or temperamental characteristics. Dogs that are not purebred are not necessarily a mix of such defined breeds. Therefore, among some experts and fans of such dogs, mongrel is still the preferred term.
Dog crossbreeds, sometimes called designer dogs, also are not members of a single recognized breed. Unlike mixed-breeds, crossbreed dogs are often the product of artificial selection – intentionally created by humans, whereas the term mongrel specifically refers to dogs that develop by natural selection, without the planned intervention of humans.
Regional and slang terms
The words cur, tyke, mutt, and mongrel are used, sometimes in a derogatory manner. There are also regional terms for mixed-breed dogs. In the United Kingdom, mongrel is the usual technical term for a mixed-breed dog. North Americans generally prefer the term mix or mixed-breed. Mutt is also commonly used in the United States and Canada. Some American registries and dog clubs that accept mixed-breed dogs use the term All-American.
There are also names for mixed-breeds based on geography, behavior, or food. In Hawaii, mixes are referred to as poi dogs, although they are not related to the extinct Hawaiian Poi Dog. In the Bahamas and the Turks and Caicos Islands, the common term is potcake dogs (referring to the table scraps they are fed). In South Africa, the tongue-in-cheek expression pavement special is sometimes used as a description for a mixed-breed dog. In Trinidad and Tobago, these mixed dogs are referred to as pot hounds (pothong). In Serbia, a similar expression is prekoplotski avlijaner (over-the-fence yard-dweller). In Russia, a colloquial term дворняга (yard-dweller) is used most commonly. In the Philippines, mixed-breed street dogs are often called askal, a Tagalog-derived contraction of asong kalye ("street dog"), while in Singapore, they are known as Singapore Specials. In Puerto Rico, they are known as satos; in Venezuela they are called yusos or cacris, the latter being a contraction of the words callejero criollo (literally, street creole, as street dogs are usually mongrels); and in Chile and Bolivia, they are called quiltros. In Costa Rica, it is common to hear the word zaguate, a term originating from a Nahuatl term, zahuatl, that refers to the disease called scabies. In the rural southern United States, a small hunting dog is known as a feist. In the Yucatán Peninsula, Mexico, they are called "malix" (ma.liʃ), meaning "no breed" in the Mayan language.
Slang terms are also common. Heinz 57, Heinz, or Heinz Hound is often used for dogs of uncertain ancestry, in a playful reference to the "57 Varieties" slogan of the H. J. Heinz Company. In some countries, such as Australia, bitsa (or bitzer) is sometimes used, meaning "bits o' this, bits o' that". In Brazil and the Dominican Republic, the name for mixed-breed dogs is vira-lata (trash-can tipper) because of homeless dogs who knock over trash cans to reach discarded food. In Newfoundland, a smaller mixed-breed dog is known as a cracky, hence the colloquial expression "saucy as a cracky" for someone with a sharp tongue.
Determining ancestry
Guessing a mixed-breed's ancestry can be difficult even for knowledgeable dog observers, because mixed-breeds have much more genetic variation than purebreds. For example, two black mixed-breed dogs might each have recessive genes that produce a blond coat and, therefore, produce offspring looking unlike their parents.
Since 2007, genetic analysis has been available to the public. Companies offering these tests claim that their DNA-based diagnostics can genetically determine the breed composition of mixed-breed dogs. These tests are still limited in scope because only a small number of the hundreds of dog breeds have been validated against the tests, and because the same breed in different geographical areas may have different genetic profiles. The tests do not test for breed purity, but for genetic sequences that are common to certain breeds. With a mixed-breed dog, the test is not proof of purebred ancestry, but rather an indication that those dogs share common ancestry with certain purebreds. The American Kennel Club does not recognize the use of DNA tests to determine breed.
Many newer dog breeds can be traced back to a common foundational breed, making them difficult to separate genetically. For example, Labrador Retrievers, Flat-coated Retrievers, Chesapeake Bay Retrievers, and Newfoundland dogs share a common ancestry with the St. John's water dog – a now-extinct naturally occurring dog landrace from the island of Newfoundland.
Health
The theory of hybrid vigor suggests that as a group, dogs of varied ancestry will be generally healthier than their purebred counterparts. In purebred dogs, intentionally breeding dogs of very similar appearance over several generations produces animals that carry many of the same alleles, some of which are detrimental. If the founding population for the breed was small, then the genetic diversity of that particular breed may be small for quite some time.
When humans select certain dogs for new breeds, they artificially isolate that group of genes and cause more copies of that gene to be made than might have otherwise occurred in nature. The population is initially more fragile because of the lack of genetic diversity. If the dog breed is popular, and the line continues, over hundreds of years diversity increases due to mutations and occasional out-breeding. This is why some of the very old breeds are more stable. One issue is when certain traits found in the breed standard are associated with genetic disorders. The artificial selective force favors the duplication of the genetic disorder because it comes with a desired physical trait. The genetic health of hybrids tends to be higher. Healthy traits have been lost in many purebred dog lines because many breeders of showdogs are more interested in conformation – the physical attributes of the dogs in relation to the breed standard – than in the health and working temperament for which the dog was originally bred.
Populations are vulnerable when the dogs bred are closely related. Inbreeding among purebreds has exposed various genetic health problems not always readily apparent in less uniform populations. Mixed-breed dogs are more genetically diverse due to the more haphazard nature of their parents' mating. The offspring of such matings might be less likely to express certain genetic disorders because there might be a decreased chance that both parents carry the same detrimental recessive alleles, but some deleterious recessives occur across many seemingly unrelated breeds, and therefore merely mixing breeds is no guarantee of genetic health. When two poor specimens are bred, the offspring could inherit the worst traits of both parents. This is commonly seen in dogs that came from puppy mills.
Several studies have shown that mixed-breed dogs have a health advantage over purebred dogs. A German study found that "mongrels require less veterinary treatment". Studies in Sweden have found that "Mongrel dogs are less prone to many diseases than the average purebred dog" and, when referring to death rates, that "mongrels were consistently in the low risk category". Data from Denmark also suggest that mixed breeds have greater longevity on average compared to purebreds.
A British study showed similar results, but a few breeds (notably Jack Russell Terriers, Miniature Poodles and Whippets) lived longer than mixed breeds.
In one study, the effect of breed on longevity in the pet dog was analyzed using mortality data from 23,535 pet dogs. The data were obtained from North American veterinary teaching hospitals. The median age at death was determined for purebred and mixed-breed dogs of different body weights. Within each body weight category, the median age at death was lower for purebred dogs compared with mixed-breed dogs. The median age at death was "8.5 years for all mixed breed dogs, and 6.7 years for all pure breed dogs" in the study.
In 2013, a study found that mixed breeds live on average 1.2 years longer than purebreds, and that increasing body weight was negatively correlated with longevity (i.e. the heavier the dog, the shorter its lifespan). Another study published in 2019 confirmed this 1.2 year difference in lifespan for mixed-breed dogs, and further demonstrated negative impacts of recent inbreeding and benefits of occasional outcrossing for lifespan in individual dogs.
Studies that have been done in the area of health show that mixed-breeds on average are both healthier and longer-lived than their purebred relations. This is because current accepted breeding practices within the pedigreed dog community result in a reduction in genetic diversity, and can result in physical characteristics that lead to health issues.
Studies have shown that crossbreed dogs have a number of desirable reproductive traits. Scott and Fuller found that crossbreed dogs were superior mothers compared to purebred mothers, producing more milk and giving better care. These advantages led to a decreased mortality in the offspring of crossbreed dogs.
Gallery
| Biology and health sciences | Dogs | Animals |
518489 | https://en.wikipedia.org/wiki/Amoebozoa | Amoebozoa | Amoebozoa is a major taxonomic group containing about 2,400 described species of amoeboid protists, often possessing blunt, fingerlike, lobose pseudopods and tubular mitochondrial cristae. In traditional classification schemes, Amoebozoa is usually ranked as a phylum within either the kingdom Protista or the kingdom Protozoa. In the classification favored by the International Society of Protistologists, it is retained as an unranked "supergroup" within Eukaryota. Molecular genetic analysis supports Amoebozoa as a monophyletic clade. Modern studies of eukaryotic phylogenetic trees identify it as the sister group to Opisthokonta, another major clade which contains both fungi and animals as well as several other clades comprising some 300 species of unicellular eukaryotes. Amoebozoa and Opisthokonta are sometimes grouped together in a high-level taxon, named Amorphea.
Amoebozoa includes many of the best-known amoeboid organisms, such as Chaos, Entamoeba, Pelomyxa and the genus Amoeba itself. Species of Amoebozoa may be either shelled (testate) or naked, and cells may possess flagella. Free-living species are common in both salt and freshwater as well as soil, moss and leaf litter. Some live as parasites or symbionts of other organisms, and some are known to cause disease in humans and other organisms.
While the majority of amoebozoan species are unicellular, the group also includes several clades of slime molds, which have a macroscopic, multicellular stage of life during which individual amoeboid cells remain together after multiple cell divisions to form a macroscopic plasmodium or, in cellular slime molds, aggregate to form one.
Amoebozoa vary greatly in size. Some are only 10–20 μm in diameter, while others are among the largest protozoa. The well-known species Amoeba proteus, which may reach 800 μm in length, is often studied in schools and laboratories as a representative cell or model organism, partly because of its convenient size. Multinucleate amoebae like Chaos and Pelomyxa may be several millimetres in length, and some multicellular amoebozoa, such as the "dog vomit" slime mold Fuligo septica, can cover an area of several square meters.
Morphology
Amoebozoa is a large and diverse group, but certain features are common to many of its members. The amoebozoan cell is typically divided into a granular central mass, called endoplasm, and a clear outer layer, called ectoplasm. During locomotion, the endoplasm flows forwards and the ectoplasm runs backwards along the outside of the cell. In motion, many amoebozoans have a clearly defined anterior and posterior and may assume a "monopodial" form, with the entire cell functioning as a single pseudopod. Large pseudopods may produce numerous clear projections called subpseudopodia (or determinate pseudopodia), which are extended to a certain length and then retracted, either for the purpose of locomotion or food intake. A cell may also form multiple indeterminate pseudopodia, through which the entire contents of the cell flow in the direction of locomotion. These are more or less tubular and are mostly filled with granular endoplasm. The cell mass flows into a leading pseudopod, and the others ultimately retract, unless the organism changes direction.
While most amoebozoans are "naked," like the familiar Amoeba and Chaos, or covered with a loose coat of minute scales, like Cochliopodium and Korotnevella, members of the order Arcellinida form rigid shells, or tests, equipped with a single aperture through which the pseudopods emerge. Arcellinid tests may be secreted from organic materials, as in Arcella, or built up from collected particles cemented together, as in Difflugia.
In all amoebozoa, the primary mode of nutrition is phagocytosis, in which the cell surrounds potential food particles with its pseudopods, sealing them into vacuoles within which they may be digested and absorbed. Some amoebozoans have a posterior bulb called a uroid, which may serve to accumulate waste, periodically detaching from the rest of the cell. When food is scarce, most species can form cysts, which may be carried aerially and introduce the organism to new environments. In slime moulds, these structures are called spores, and form on stalked structures called fruiting bodies or sporangia. Mixotrophic species living in a symbiotic relationship with microalgae of the genus Chlorella, which lives inside the cytoplasm of their host, have been found in Arcellinida and Mayorella.
The majority of Amoebozoa lack flagella and more generally do not form microtubule-supported structures except during mitosis. However, flagella do occur among the Archamoebae, and many slime moulds produce biflagellate gametes. The flagellum is generally anchored by a cone of microtubules, suggesting a close relationship to the opisthokonts. The mitochondria in amoebozoan cells characteristically have branching tubular cristae. However, among the Archamoebae, which are adapted to anoxic or microaerophilic habitats, mitochondria have been lost.
Classification
Place of Amoebozoa in the eukaryote tree
It appears (based on molecular genetics) that the members of Amoebozoa form a sister group to animals and fungi, diverging from this lineage after it had split from the other groups.
Strong similarities between Amoebozoa and Opisthokonta led to the hypothesis that they form a distinct clade. Thomas Cavalier-Smith proposed the name "unikonts" (formally, Unikonta) for this branch, whose members were believed to have descended from a common ancestor possessing a single emergent flagellum rooted in one basal body.[1][2] However, while the close relationship between Amoebozoa and Opisthokonta is robustly supported, recent work has shown that the hypothesis of a uniciliate ancestor is probably false. In their Revised Classification of Eukaryotes (2012), Adl et al. proposed Amorphea as a more suitable name for a clade of approximately the same composition, a sister group to the Diaphoretickes. More recent work places the members of Amorphea together with the malawimonids and collodictyonids in a proposed clade called Opimoda, which comprises one of two major lineages diverging at the root of the eukaryote tree of life, the other being Diphoda.
Subphyla within Amoebozoa: Lobosa and Conosa
Traditionally all amoebozoa with lobose pseudopods were grouped together in the class Lobosea, placed with other amoeboids in the phylum Sarcodina or Rhizopoda, but these were considered to be unnatural groups. Structural and genetic studies identified the percolozoans and several archamoebae as independent groups. In phylogenies based on rRNA their representatives were separate from other amoebae, and appeared to diverge near the base of eukaryotic evolution, as did most slime molds.
However, revised trees by Cavalier-Smith and Chao in 1996 suggested that the remaining lobosans do form a monophyletic group, to which the Archamoebae and Mycetozoa were closely related, although the percolozoans were not. Subsequently, they emended the phylum Amoebozoa to include both the subphylum Lobosa and a new subphylum Conosa, comprising the Archamoebae and the Mycetozoa.
Recent molecular genetic data appear to support this primary division of the Amoebozoa into Lobosa and Conosa. The former, as defined by Cavalier-Smith and his collaborators, consists largely of the classic Lobosea: non-flagellated amoebae with blunt, lobose pseudopods (Amoeba, Acanthamoeba, Arcella, Difflugia etc.). The latter is made up of both amoeboid and flagellated cells, characteristically with more pointed or slightly branching subpseudopodia (Archamoebae and the Mycetozoan slime molds).
Phylogeny and taxonomy within Amoebozoa
The classification below follows older studies by Cavalier-Smith, Chao & Lewis (2016) and Silar (2016). More recent phylogenies indicate that the Lobosa are paraphyletic: Conosa is the sister group of the Cutosea.
Phylum Amoebozoa Lühe 1913 emend. Cavalier-Smith 1998 [Amoebobiota; Eumycetozoa Zopf 1884 emend Olive 1975]
Clade Discosea Cavalier-Smith 2004 stat. nov. Adl et al. 2018
Order ?Stereomyxida Grell 1971
Order ?Stygamoebida Smirnov & Cavalier-Smith 2011
Class Centramoebia Cavalier-Smith et al. 2016
Order Centramoebida Rogerson & Patterson 2002 em. Cavalier-Smith 2004
Order Himatismenida Page 1987 [Cochliopodiida]
Order Pellitida Page 1987 [Cochliopodiida]
Class Flabellinia Smirnov & Cavalier-Smith 2011 em. Kudryavtsev et al. 2014
Order Thecamoebida Schaeffer 1926 em. Smirnov & Cavalier-Smith 2011
Order Dermamoebida Cavalier-Smith 2004 em. Smirnov & Cavalier-Smith 2011
Order Vannellida Smirnov et al. 2005
Order Dactylopodida Smirnov et al. 2005
Clade Tevosa Kang et al. 2017
Clade Tubulinea Smirnov et al. 2005 stat. nov. Adl et al. 2018
Class Corycidia Kang et al. 2017 stat. nov. Adl et al. 2018
Order Trichosida Moebius 1889
Family Microcoryciidae de Saedeleer 1934
Class Echinamoebia Cavalier-Smith 2016 stat. nov. Adl et al. 2018
Order Echinamoebida Cavalier-Smith 2004 em. 2011
Class Elardia Kang et al. 2017 stat. nov. Adl et al. 2018
Subclass Leptomyxia Cavalier-Smith 2016
Order Leptomyxida Pussard & Pons 1976 em. Page 1987
Subclass Eulobosia Cavalier-Smith 2016
Order Euamoebida Lepşi 1960 em. Cavalier-Smith 2016
Order Arcellinida Kent 1880
Clade Evosea Kang et al. 2017 stat. nov. Adl et al. 2018
Clade Cutosa Cavalier-Smith 2016 stat. nov.
Class Cutosea Cavalier-Smith 2016
Order Squamocutida Cavalier-Smith 2016
Subphylum Conosa Cavalier-Smith 1998 stat. nov.
Infraphylum Archamoebae Cavalier-Smith 1993 stat. n. 1998
Class Archamoebea Cavalier-Smith 1983 stat. n. 2004
Family Tricholimacidae Cavalier-Smith 2013
Family Endamoebidae Calkins 1926
Order Entamoebida Cavalier-Smith 1993
Order Pelobiontida Page 1976 emend. Cavalier Smith 1987
Infraphylum Semiconosia Cavalier-Smith 2013
Class Variosea Cavalier-Smith et al. 2004
Order ?Flamellidae Cavalier-Smith 2016
Order ?Holomastigida Lauterborn 1895 [Artodiscida Cavalier-Smith 2013]
Order Phalansteriida Hibberd 1983
Order Ramamoebida Cavalier-Smith 2016
Order Profiliida Kang et al. 2017 [Protosteliida Olive & Stoianovitch 1966 em. Shadwick & Spiegel 2012]
Order Fractovitellida Lahr et al. 2011 em. Kang et al. 2017
Superclass Mycetozoa de Bary, 1859 ex Rostafinski, 1873
Class Dictyostelea Hawksworth et al. 1983
Order Acytosteliales Baldauf, Sheikh & Thulin 2017
Order Dictyosteliales Lister 1909 em. Olive 1970
Class Protostelea Shadwick & Spiegel et al. 2012
Order Protosteliida Shadwick & Spiegel et al. 2012
Class Ceratiomyxomycetes Hawksworth, Sutton & Ainsworth 1983
Order Protosporangiida Shadwick & Spiegel 2012
Order Ceratiomyxida Martin 1961 ex Farr & Alexopoulos
Class Myxomycetes Link 1833 em. Haeckel 1866
Subclass Lucisporomycetidae Leontyev et al. 2019
Superorder Cribrarianae Leontyev 2015
Order Cribrariales Macbr. 1922
Superorder Trichianae Leontyev 2015
Order Reticulariales Leontyev 2015
Order Liceales Jahn 1928
Order Trichiales Macbride 1922
Subclass Columellomycetidae Leontyev et al. 2019
Order ?Echinosteliopsidales Shchepin et al.
Superorder Echinostelianae Leontyev 2015
Order Echinosteliales Martin 1961
Superorder Stemonitanae Leontyev 2015 [Fuscisporida Cavalier-Smith 2012]
Order Clastodermatales Leontyev 2015
Order Meridermatales Leontyev 2015
Order Stemonitales Macbride 1922
Order Physarales Macbride 1922
Fossil record
Vase-shaped microfossils (VSMs) discovered around the world show that amoebozoans have existed since the Neoproterozoic Era. The fossil species Melanocyrillium hexodiadema, Palaeoarcella athanata, and Hemisphaeriella ornata come from rocks 750 million years old. All three VSMs share a hemispherical shape, an invaginated aperture, and regular indentations that strongly resemble those of modern arcellinids, which are shell-bearing amoebozoans belonging to the class Tubulinea. P. athanata in particular looks the same as the extant genus Arcella.
List of amoebozoan protozoa pathogenic to humans
Entamoeba histolytica
Acanthamoeba
Balamuthia mandrillaris
Endolimax
Meiosis
The recently available Acanthamoeba genome sequence revealed several orthologs of genes employed in meiosis of sexual eukaryotes. These genes included Spo11, Mre11, Rad50, Rad51, Rad52, Mnd1, Dmc1, Msh and Mlh. This finding suggests that Acanthamoeba is capable of some form of meiosis and may be able to undergo sexual reproduction.
In sexually reproducing eukaryotes, homologous recombination (HR) ordinarily occurs during meiosis. The meiosis-specific recombinase, Dmc1, is required for efficient meiotic HR, and Dmc1 is expressed in Entamoeba histolytica. The purified Dmc1 from E. histolytica forms presynaptic filaments and catalyzes ATP-dependent homologous DNA pairing and DNA strand exchange over at least several thousand base pairs. The DNA pairing and strand exchange reactions are enhanced by the eukaryotic meiosis-specific recombination accessory factor (heterodimer) Hop2-Mnd1. These processes are central to meiotic recombination, suggesting that E. histolytica undergoes meiosis.
Studies of Entamoeba invadens found that, during the conversion from the tetraploid uninucleate trophozoite to the tetranucleate cyst, homologous recombination is enhanced. Expression of genes with functions related to the major steps of meiotic recombination also increased during encystation. These findings in E. invadens, combined with evidence from studies of E. histolytica, indicate the presence of meiosis in Entamoeba. A comparative genetic analysis indicated that meiotic processes are present in all major amoebozoan lineages.
Since Amoebozoa diverged early from the eukaryotic family tree, these results also suggest that meiosis was present early in eukaryotic evolution.
Human health
Amoebiasis, also known as amebiasis or entamoebiasis, is an infection caused by any of the amoebozoans of the Entamoeba group. Symptoms are most common upon infection by Entamoeba histolytica. Amoebiasis can present with no, mild, or severe symptoms. Symptoms may include abdominal pain, mild diarrhea, bloody diarrhea or severe colitis with tissue death and perforation. This last complication may cause peritonitis. People affected may develop anemia due to loss of blood.
Invasion of the intestinal lining causes amoebic bloody diarrhea or amoebic colitis. If the parasite reaches the bloodstream it can spread through the body, most frequently ending up in the liver where it causes amoebic liver abscesses. Liver abscesses can occur without previous diarrhea. Cysts of Entamoeba can survive for up to a month in soil or for up to 45 minutes under fingernails. It is important to differentiate between amoebiasis and bacterial colitis. The preferred diagnostic method is faecal examination under a microscope, but this requires a skilled microscopist and may be unreliable for excluding infection; it also may be unable to distinguish between specific species. An increased white blood cell count is present in severe cases, but not in mild ones. The most accurate test is for antibodies in the blood, but it may remain positive following treatment.
Prevention of amoebiasis is by separating food and water from faeces and by proper sanitation measures. There is no vaccine. There are two treatment options depending on the location of the infection. Amoebiasis in tissues is treated with either metronidazole, tinidazole, nitazoxanide, dehydroemetine or chloroquine, while luminal infection is treated with diloxanide furoate or iodoquinoline. For treatment to be effective against all stages of the amoeba may require a combination of medications. Infections without symptoms do not require treatment but infected individuals can spread the parasite to others and treatment can be considered. Treatment of other Entamoeba infections apart from E. histolytica is not needed.
Amoebiasis is present all over the world. About 480 million people are infected with what appears to be E. histolytica, resulting in between 40,000 and 110,000 deaths every year. Most infections are now ascribed to E. dispar. E. dispar is more common in certain areas and symptomatic cases may be fewer than previously reported. The first case of amoebiasis was documented in 1875, and in 1891 the disease was described in detail, resulting in the terms amoebic dysentery and amoebic liver abscess. Further evidence from the Philippines in 1913 found that volunteers who ingested cysts of E. histolytica developed the disease. It has been known since 1897 that at least one non-disease-causing species of Entamoeba existed (Entamoeba coli), but the WHO first formally recognized in 1997 that E. histolytica was two species, despite this having first been proposed in 1925. In addition to the now-recognized E. dispar, evidence shows there are at least two other species of Entamoeba that look the same in humans: E. moshkovskii and Entamoeba bangladeshi. These species were not differentiated until recently because identification relied on appearance alone.
Gallery
| Biology and health sciences | Other organisms | null |
518513 | https://en.wikipedia.org/wiki/Thermogenesis | Thermogenesis | Thermogenesis is the process of heat production in organisms. It occurs in all warm-blooded animals, and also in a few species of thermogenic plants such as the Eastern skunk cabbage, the Voodoo lily (Sauromatum venosum), and the giant water lilies of the genus Victoria. The lodgepole pine dwarf mistletoe, Arceuthobium americanum, disperses its seeds explosively through thermogenesis.
Types
Depending on whether or not they are initiated through locomotion and intentional movement of the muscles, thermogenic processes can be classified as one of the following:
Exercise activity thermogenesis (EAT)
Non-exercise activity thermogenesis (NEAT), energy expended for everything that is not sleeping, eating or sports-like exercise.
Diet-induced thermogenesis (DIT)
Shivering
One method to raise temperature is through shivering, in which the chemical energy of ATP is converted into kinetic energy with such low efficiency that almost all of the energy appears as heat. Shivering is the process by which the body temperature of hibernating mammals (such as some bats and ground squirrels) is raised as these animals emerge from hibernation.
Non-shivering
Non-shivering thermogenesis occurs in brown adipose tissue (brown fat) that is present in almost all eutherians (swine being the only exception currently known). Brown adipose tissue has a unique uncoupling protein (thermogenin, also known as uncoupling protein 1) that allows the flow of protons (H+) down their mitochondrial gradient to be uncoupled from ATP synthesis, thus enabling mitochondria to burn fatty acids and oxygen to generate heat. The atomic structure of human uncoupling protein 1 (UCP1) has been solved by cryogenic electron microscopy. The structure has the typical fold of a member of the SLC25 family. UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak.
In this process, substances such as free fatty acids (derived from triacylglycerols) remove purine (ADP, GDP and others) inhibition of thermogenin, which causes an influx of H+ into the matrix of the mitochondrion and bypasses the ATP synthase channel. This uncouples oxidative phosphorylation, and the energy from the proton motive force is dissipated as heat rather than producing ATP from ADP, which would store chemical energy for the body's use. Thermogenesis can also be produced by leakage of the sodium-potassium pump and the Ca2+ pump. Thermogenesis is contributed to by futile cycles, such as the simultaneous occurrence of lipogenesis and lipolysis or glycolysis and gluconeogenesis. In a broader context, futile cycles can be influenced by activity/rest cycles such as the Summermatter cycle.
Acetylcholine stimulates muscle to raise metabolic rate.
The low demands of thermogenesis mean that free fatty acids draw, for the most part, on lipolysis as the method of energy production.
A comprehensive list of human and mouse genes regulating cold-induced thermogenesis (CIT) in living animals (in vivo) or tissue samples (ex vivo) has been assembled and is available in CITGeneDB.
Evolutionary history
In avians and eutherians
The biological processes which allow for thermogenesis in animals did not evolve from a singular, common ancestor. Rather, avian (birds) and eutherian (placental mammalian) lineages developed the ability to perform thermogenesis independently through separate evolutionary processes. The fact that the same evolutionary character evolved independently in two different lineages after their last known common ancestor means that thermogenic processes are classified as an example of convergent evolution. However, while both clades are capable of performing thermogenesis, the biological processes involved are different. The reason that avians and eutherians both developed the capacity to perform thermogenesis is a subject of ongoing study by evolutionary biologists, and two competing explanations have been proposed to explain why this character appears in both lineages.
One explanation for the convergence is the "aerobic capacity" model. This theory suggests that natural selection favored individuals with higher resting metabolic rates, and that as the metabolic capacity of birds and eutherians increased, they developed the capacity for endothermic thermogenesis. Researchers have linked high levels of oxygen consumption with high resting metabolic rates, suggesting that the two are directly correlated. Rather than animals developing the capacity to maintain high and stable body temperatures only to be able to thermoregulate without the aid of the environment, this theory suggests that thermogenesis is actually a by-product of natural selection for higher aerobic and metabolic capacities. These higher metabolic capacities may initially have evolved for the simple reason that animals capable of metabolizing more oxygen for longer periods of time would have been better suited to, for example, run from predators or gather food. This model explaining the development of thermogenesis is older and more widely accepted among evolutionary biologists who study thermogenesis.
The second explanation is the "parental care" model. This theory proposes that the convergent evolution of thermogenesis in birds and eutherians is based on shared behavioral traits. Specifically, birds and eutherians both provide high levels of parental care to young offspring. This high level of care is theorized to give newborn or newly hatched animals the opportunity to mature more rapidly because they have to expend less energy to satisfy their food, shelter, and temperature needs. The "parental care" model thus proposes that higher aerobic capacity was selected for in parents as a means of meeting the needs of their offspring. While the "parental care" model does differ from the "aerobic capacity" model, it shares some similarities in that both explanations for the rise of thermogenesis rest on natural selection favoring individuals with higher aerobic capacities for one reason or another. The primary difference between the two theories is that the "parental care" model proposes that a specific biological function (childcare) resulted in selective pressure for higher metabolic rates.
Despite both relying on similar explanations for the process by which organisms gained the capacity to perform non-shivering thermogenesis, neither of these explanations has secured a large enough consensus to be considered completely authoritative on convergent evolution of NST in birds and mammals, and scientists continue to conduct studies which support both positions.
Non-shivering thermogenesis
Brown Adipose Tissue (BAT) thermogenesis is one of the two known forms of non-shivering thermogenesis (NST). This type of heat-generation occurs only in eutherians, not in birds or other thermogenic organisms. BAT NST occurs when Uncoupling Protein 1 (UCP1) performs oxidative phosphorylation in eutherians’ bodies resulting in the generation of heat (Berg et al., 2006, p. 1178). This process generally only begins in eutherians after they have been subjected to low temperatures for an extended period of time, after which the process allows an organism's body to maintain a high and stable temperature without a reliance on environmental thermoregulation mechanisms (such as sunlight/shade). Because eutherians are the only clade which store brown adipose tissue, scientists previously thought that UCP1 evolved in conjunction with brown adipose tissue. However, recent studies have shown that UCP1 can also be found in non-eutherians like fish, birds, and reptiles. This discovery means that UCP1 probably existed in a common ancestor before the radiation of the eutherian lineage. Since this evolutionary split, though, UCP1 has evolved independently in eutherians, through a process which scientists believe was not driven by natural selection, but rather by neutral processes like genetic drift.
Evolution of Skeletal-Muscle Non-Shivering Thermogenesis
The second form of NST occurs in skeletal muscle. While eutherians use both BAT and skeletal muscle NST for thermogenesis, birds only use the latter form. This process has also been shown to occur in rare instances in fish. In skeletal muscle NST, calcium ions cycle futilely across the sarcoplasmic reticulum membrane of muscle cells, and the energy consumed in re-pumping them is dissipated as heat. Even though BAT NST was originally thought to be the only process by which animals could maintain endothermy, scientists now suspect that skeletal muscle NST was the original form of the process and that BAT NST developed later. Though scientists once also believed that only birds maintained their body temperatures using skeletal muscle NST, research in the late 2010s showed that mammals also use this process when they do not have adequate stores of brown adipose tissue in their bodies.
Skeletal muscle NST might also be used to maintain body temperature in heterothermic mammals during states of torpor or hibernation. Given that early eutherians and the reptiles which later evolved into avian lineages were either heterothermic or ectothermic, both forms of NST are thought not to have developed fully until after the K–Pg extinction roughly 66 million years ago. However, some estimates place the evolution of these characters earlier, at roughly 100 mya. Most likely, the evolution of the capacity for thermogenesis as it currently exists began prior to the K–Pg extinction and ended well after it. The fact that skeletal muscle NST is common among eutherians during periods of torpor and hibernation further supports the theory that this form of thermogenesis is older than BAT NST. This is because early eutherians would not have had the capacity for non-shivering thermogenesis as it currently exists, so they more frequently used torpor and hibernation as means of thermal regulation, relying on systems which, in theory, predate BAT NST. However, there remains no consensus among evolutionary biologists on the order in which the two processes evolved, nor an exact timeframe for their evolution.
Regulation
Non-shivering thermogenesis is regulated mainly by thyroid hormone and the sympathetic nervous system. Some hormones, such as norepinephrine and leptin, may stimulate thermogenesis by activating the sympathetic nervous system. Rising insulin levels after eating may be responsible for diet-induced thermogenesis (thermic effect of food).
Progesterone also increases body temperature.
Thermogenesis from white adipose tissue
A recently proposed method, named the thermogenin-like system (TLS), aims to produce thermogenesis from white adipose tissue or from other substantial tissues (such as endothelial or muscle cells). Ultimately, this could lead to new therapeutic methods for treating morbid obesity or severe diabetes. The proposed model is purely theoretical and relies on the use of light-activated PoXeR pumps integrated into the inner membrane of mitochondria. These pumps allow the passage of protons in such a way that the proton motive force is reduced. This would enable greater consumption of blood glucose by white adipose, endothelial, or muscle cells, thereby potentially lowering blood glucose levels. The explanation is that glycolysis is accelerated when glucose enters the cells, and its products are then oxidized through the Krebs cycle in the mitochondria. Since muscle cells have many mitochondria, expressing PoXeR pumps in this tissue is also of interest.
However, the method is invasive, relies on gene therapy, and would require several clinical trials as well as hospitalization to integrate the system into white adipose or muscle tissue in the abdominal region. It is also a light-responsive system. Since light does not penetrate the skin from the outside, the system must include an under-skin component that alternates activation of green light for a certain duration with deactivation for another period. This cycle repeats over several weeks, particularly to recharge the light system. To ensure that ATP levels do not drop too low (otherwise the cell dies), the system self-regulates: for light to be activated in the system, a mechanism must continuously provide light without significantly lowering ATP levels. As luciferase can emit light in exchange for ATP, if ATP levels decrease too drastically, the light stops, ATP levels rise again, and the light is reactivated to induce thermogenesis.
| Biology and health sciences | Basics | Biology |
518692 | https://en.wikipedia.org/wiki/Faraday%20effect | Faraday effect | The Faraday effect or Faraday rotation, sometimes referred to as the magneto-optic Faraday effect (MOFE), is a physical magneto-optical phenomenon. The Faraday effect causes a polarization rotation which is proportional to the projection of the magnetic field along the direction of the light propagation. Formally, it is a special case of gyroelectromagnetism obtained when the dielectric permittivity tensor is diagonal. This effect occurs in most optically transparent dielectric materials (including liquids) under the influence of magnetic fields.
Discovered by Michael Faraday in 1845, the Faraday effect was the first experimental evidence that light and electromagnetism are related. The theoretical basis of electromagnetic radiation (which includes visible light) was completed by James Clerk Maxwell in the 1860s. Maxwell's equations were rewritten in their current form in the 1870s by Oliver Heaviside.
The Faraday effect is caused by left and right circularly polarized waves propagating at slightly different speeds, a property known as circular birefringence. Since a linear polarization can be decomposed into the superposition of two equal-amplitude circularly polarized components of opposite handedness and different phase, the effect of a relative phase shift, induced by the Faraday effect, is to rotate the orientation of a wave's linear polarization.
The Faraday effect has applications in measuring instruments. For instance, the Faraday effect has been used to measure optical rotatory power and for remote sensing of magnetic fields (such as fiber optic current sensors). The Faraday effect is used in spintronics research to study the polarization of electron spins in semiconductors. Faraday rotators can be used for amplitude modulation of light, and are the basis of optical isolators and optical circulators; such components are required in optical telecommunications and other laser applications.
History
By 1845, it was known through the work of Augustin-Jean Fresnel, Étienne-Louis Malus, and others that different materials are able to modify the direction of polarization of light when appropriately oriented, making polarized light a very powerful tool to investigate the properties of transparent materials. Faraday firmly believed that light was an electromagnetic phenomenon, and as such should be affected by electromagnetic forces. He spent considerable effort looking for evidence of electric forces affecting the polarization of light through what are now known as electro-optic effects, starting with decomposing electrolytes. However, his experimental methods were not sensitive enough, and the effect was only measured thirty years later by John Kerr.
Faraday then attempted to look for the effects of magnetic forces on light passing through various substances. After several unsuccessful trials, he happened to test a piece of "heavy" glass, containing equal proportions of silica, boracic acid and lead oxide, that he had made during his earlier work on glass manufacturing. Faraday observed that when a beam of polarized light passed through the glass in the direction of an applied magnetic force, the polarization of light rotated by an angle that was proportional to the strength of the force. He used a Nicol prism to measure the polarization. He was later able to reproduce the effect in several other solids, liquids, and gases by procuring stronger electromagnets.
The discovery is well documented in Faraday's daily notebook. On 13 Sept. 1845, in paragraph #7504, under the rubric Heavy Glass, he wrote:
He summarized the results of his experiments on 30 Sept. 1845, in paragraph #7718, famously writing:
Physical interpretation
The linear polarized light that is seen to rotate in the Faraday effect can be seen as consisting of the superposition of a right- and a left- circularly polarized beam (this superposition principle is fundamental in many branches of physics). We can look at the effects of each component (right- or left-polarized) separately, and see what effect this has on the result.
In circularly polarized light the direction of the electric field rotates at the frequency of the light, either clockwise or counter-clockwise. In a material, this electric field causes a force on the charged particles that compose the material (because of their large charge to mass ratio, the electrons are most heavily affected). The motion thus effected will be circular, and circularly moving charges will create their own (magnetic) field in addition to the external magnetic field. There will thus be two different cases: the created field will be parallel to the external field for one (circular) polarization, and in the opposing direction for the other polarization direction – thus the net B field is enhanced in one direction and diminished in the opposite direction. This changes the dynamics of the interaction for each beam and one of the beams will be slowed more than the other, causing a phase difference between the left- and right-polarized beam. When the two beams are added after this phase shift, the result is again a linearly polarized beam, but with a rotation of the polarization vector.
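Up to sign and phase conventions, this superposition argument can be made quantitative with a short standard derivation (not specific to any one material):

```latex
% Linear polarization along x decomposed into circular unit vectors:
\[
\hat{x} \;=\; \tfrac{1}{\sqrt{2}}\bigl(\hat{e}_+ + \hat{e}_-\bigr)
\]
% Over a path of length d, each circular component accumulates its
% own phase, phi_pm = n_pm * omega * d / c, set by its index n_pm:
\[
\tfrac{1}{\sqrt{2}}\bigl(e^{i\phi_+}\hat{e}_+ + e^{i\phi_-}\hat{e}_-\bigr)
\;=\; e^{i(\phi_+ + \phi_-)/2}\,
\tfrac{1}{\sqrt{2}}\bigl(e^{i\Delta\phi/2}\hat{e}_+ + e^{-i\Delta\phi/2}\hat{e}_-\bigr)
\]
% with Delta phi = phi_+ - phi_-. The bracketed sum is again a linear
% polarization, rotated relative to x by
\[
\beta \;=\; \frac{\Delta\phi}{2} \;=\; \frac{(n_+ - n_-)\,\omega d}{2c}.
\]
```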
The direction of polarization rotation depends on the properties of the material through which the light is shone. A full treatment would have to take into account the effect of the external and radiation-induced fields on the wave function of the electrons, and then calculate the effect of this change on the refractive index of the material for each polarization, to see whether the right- or left-circular polarization is slowed more.
Mathematical formulation
Formally, the magnetic permeability is treated as a non-diagonal tensor, as expressed by the equation:
\[
\mathbf{B}(\omega) =
\begin{pmatrix}
\mu_1 & -i \mu_2 & 0 \\
i \mu_2 & \mu_1 & 0 \\
0 & 0 & \mu_z
\end{pmatrix}
\mathbf{H}(\omega)
\]
The relation between the angle of rotation of the polarization and the magnetic field in a transparent material is:
β = VBd
where
β is the angle of rotation (in radians)
B is the magnetic flux density in the direction of propagation (in teslas)
d is the length of the path (in meters) where the light and magnetic field interact
V is the Verdet constant for the material. This empirical proportionality constant (in units of radians per tesla per meter) varies with wavelength and temperature and is tabulated for various materials.
A positive Verdet constant corresponds to L-rotation (anticlockwise) when the direction of propagation is parallel to the magnetic field and to R-rotation (clockwise) when the direction of propagation is anti-parallel. Thus, if a ray of light is passed through a material and reflected back through it, the rotation doubles.
Some materials, such as terbium gallium garnet (TGG), have extremely high Verdet constants (≈ −134 rad/(T·m) for 632 nm light). By placing a rod of this material in a strong magnetic field, Faraday rotation angles of over 0.78 rad (45°) can be achieved. This allows the construction of Faraday rotators, which are the principal component of Faraday isolators, devices which transmit light in only one direction. The Faraday effect can, however, also be observed and measured in terbium-doped glass, with a Verdet constant as low as ≈ −20 rad/(T·m) for 632 nm light. Similar isolators are constructed for microwave systems by using ferrite rods in a waveguide with a surrounding magnetic field.
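A minimal numerical sketch of β = VBd in SI units, assuming illustrative values close to those quoted above for TGG:

```python
import math

def faraday_rotation_deg(verdet_rad_per_T_m: float,
                         b_tesla: float,
                         length_m: float) -> float:
    """Polarization rotation beta = V * B * d, returned in degrees."""
    return math.degrees(verdet_rad_per_T_m * b_tesla * length_m)

# Assumed values: TGG-like Verdet constant at 632 nm, a 1 T field,
# and a 6 mm rod.
beta = faraday_rotation_deg(-134.0, 1.0, 6e-3)
print(f"rotation ≈ {beta:.1f}°")  # ≈ -46°, i.e. more than 45° in magnitude
```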
Examples
Interstellar medium
The effect is imposed on light over the course of its propagation from its origin to the Earth, through the interstellar medium. Here, the effect is caused by free electrons and can be characterized as a difference in the refractive index seen by the two circularly polarized propagation modes. Hence, in contrast to the Faraday effect in solids or liquids, interstellar Faraday rotation (β) has a simple dependence on the wavelength of light (λ), namely:

$\beta = \mathrm{RM}\,\lambda^2$
where the overall strength of the effect is characterized by RM, the rotation measure. This in turn depends on the axial component of the interstellar magnetic field B||, and the number density of electrons ne, both of which vary along the propagation path. In Gaussian cgs units the rotation measure is given by:

$\mathrm{RM} = \dfrac{e^3}{2\pi m^2 c^4} \int_0^d n_e(s)\, B_\parallel(s)\, \mathrm{d}s$
or in SI units:

$\mathrm{RM} = \dfrac{e^3}{8\pi^2 \varepsilon_0 m^2 c^3} \int_0^d n_e(s)\, B_\parallel(s)\, \mathrm{d}s$
where
ne(s) is the density of electrons at each point s along the path
B‖(s) is the component of the interstellar magnetic field in the direction of propagation at each point s along the path
e is the charge of an electron;
c is the speed of light in vacuum;
m is the mass of an electron;
ε0 is the vacuum permittivity;
The integral is taken over the entire path from the source to the observer.
Faraday rotation is an important tool in astronomy for the measurement of magnetic fields, which can be estimated from rotation measures given a knowledge of the electron number density. In the case of radio pulsars, the dispersion caused by these electrons results in a time delay between pulses received at different wavelengths, which can be measured in terms of the electron column density, or dispersion measure. A measurement of both the dispersion measure and the rotation measure therefore yields the weighted mean of the magnetic field along the line of sight. The same information can be obtained from objects other than pulsars, if the dispersion measure can be estimated based on reasonable guesses about the propagation path length and typical electron densities. In particular, Faraday rotation measurements of polarized radio signals from extragalactic radio sources occulted by the solar corona can be used to estimate both the electron density distribution and the direction and strength of the magnetic field in the coronal plasma.
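In pulsar work, the mean-field estimate described above amounts to dividing the rotation measure by the dispersion measure and applying a unit conversion; the sketch below uses the conventional factor of 1.232 (for RM in rad m⁻², DM in pc cm⁻³, and the field in microgauss), with made-up input values:

```python
def mean_parallel_field_uG(rm, dm):
    """Electron-density-weighted mean B_parallel, in microgauss.

    rm: rotation measure in rad m^-2
    dm: dispersion measure in pc cm^-3
    """
    return 1.232 * rm / dm

# Made-up pulsar values, for illustration only:
print(f"{mean_parallel_field_uG(rm=50.0, dm=30.0):.2f} microgauss")  # ~2.05
```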
Ionosphere
Radio waves passing through the Earth's ionosphere are likewise subject to the Faraday effect. The ionosphere consists of a plasma containing free electrons which contribute to Faraday rotation according to the above equation, whereas the positive ions are relatively massive and have little influence. In conjunction with the Earth's magnetic field, rotation of the polarization of radio waves thus occurs. Since the density of electrons in the ionosphere varies greatly on a daily basis, as well as over the sunspot cycle, the magnitude of the effect varies. However, the effect is always proportional to the square of the wavelength, so even at the UHF television frequency of 500 MHz (λ = 60 cm), there can be more than a complete rotation of the axis of polarization. A consequence is that although most radio transmitting antennas are either vertically or horizontally polarized, the polarization of a medium or short wave signal after reflection by the ionosphere is rather unpredictable. However, the Faraday effect due to free electrons diminishes rapidly at higher frequencies (shorter wavelengths), so that at microwave frequencies, used by satellite communications, the transmitted polarization is maintained between the satellite and the ground.
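The λ² scaling makes the contrast between UHF and microwave frequencies easy to quantify; in the sketch below the ionospheric rotation measure is an assumed value, picked so that the 500 MHz rotation exceeds one full turn, as described above:

```python
C = 3.0e8  # speed of light, m/s

def rotation_rad(freq_hz, rm):
    """beta = RM * lambda^2 for a given rotation measure (rad m^-2)."""
    wavelength = C / freq_hz
    return rm * wavelength ** 2

RM = 20.0  # assumed ionospheric rotation measure, rad m^-2
for f in (500e6, 4e9, 12e9):  # UHF television, C band, Ku band
    print(f"{f / 1e9:5.1f} GHz: {rotation_rad(f, RM):8.4f} rad")
# 500 MHz exceeds a full turn (2*pi rad), while the microwave bands barely move.
```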
Semiconductors
Due to spin-orbit coupling, undoped GaAs single crystal exhibits much larger Faraday rotation than glass (SiO2). Since the atomic arrangement differs along the (100) and (110) planes, one might expect the Faraday rotation to be polarization dependent. However, experimental work revealed no measurable anisotropy in the wavelength range from 880–1,600 nm. Based on the large Faraday rotation, GaAs might be used to calibrate the B field of terahertz electromagnetic waves, which requires a very fast response time. Around the band gap, the Faraday effect shows resonance behavior.
More generally, (ferromagnetic) semiconductors exhibit both electro-gyration and a Faraday response in the high frequency domain. The combination of the two is described by gyroelectromagnetic media, for which gyroelectricity and gyromagnetism (the Faraday effect) may occur at the same time.
Organic materials
In organic materials, Faraday rotation is typically small, with a Verdet constant in the visible wavelength region on the order of a few hundred degrees per tesla per meter, decreasing with increasing wavelength in this region. While the Verdet constant of organic materials does increase around electronic transitions in the molecule, the associated light absorption makes most organic materials poor candidates for applications. There are, however, also isolated reports of large Faraday rotation in organic liquid crystals without associated absorption.
Plasmonic and magnetic materials
In 2009 γ-Fe2O3-Au core-shell nanostructures were synthesized to integrate magnetic (γ-Fe2O3) and plasmonic (Au) properties into one composite. Faraday rotation with and without the plasmonic materials was tested and rotation enhancement under 530 nm light irradiation was observed. Researchers claim that the magnitude of the magneto-optical enhancement is governed primarily by the spectral overlap of the magneto-optical transition and the plasmon resonance.
The reported composite magnetic/plasmonic nanostructure can be visualized as a magnetic particle embedded in a resonant optical cavity. Because of the large density of photon states in the cavity, the interaction between the electromagnetic field of the light and the electronic transitions of the magnetic material is enhanced, resulting in a larger difference between the velocities of right- and left-hand circularly polarized light, and therefore enhancing Faraday rotation.
| Physical sciences | Electromagnetic radiation | Physics |
518791 | https://en.wikipedia.org/wiki/Racemization | Racemization | In chemistry, racemization is the conversion, by heat or by chemical reaction, of an optically active compound into a racemic (optically inactive) form, containing a 1:1 molar ratio of the two enantiomers. The plus and minus forms are the dextrorotatory and levorotatory enantiomers; when the two forms are present in equal quantities, the resulting sample is described as a racemic mixture or a racemate. Racemization can proceed through a number of different mechanisms, and it has particular significance in pharmacology inasmuch as different enantiomers may have different pharmaceutical effects.
Stereochemistry
Chiral molecules have two forms (at each point of asymmetry), which differ in their optical characteristics: the levorotatory form (the (−)-form) rotates the plane of polarization of a beam of light counter-clockwise, whereas the dextrorotatory form (the (+)-form) rotates it clockwise. The two forms, which are non-superposable when rotated in 3-dimensional space, are said to be enantiomers. This notation is not to be confused with the D and L naming of molecules, which refers to the similarity in structure to D-glyceraldehyde and L-glyceraldehyde. Also, (R)- and (S)- refer to the chemical structure of the molecule based on Cahn–Ingold–Prelog priority rules of naming, rather than rotation of light. R/S notation is now the primary descriptor of configuration, since D and L notation is used mainly for sugars and amino acids.
Racemization occurs when one pure form of an enantiomer is converted into equal proportions of both enantiomers, forming a racemate. When equal numbers of dextrorotatory and levorotatory molecules are present, the net optical rotation of a racemate is zero. Enantiomers should also be distinguished from diastereomers, which are stereoisomers that have different molecular structures around a stereocenter and are not mirror images of each other.
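The zero net rotation of a racemate follows from the enantiomeric excess: the observed specific rotation of a mixture is the excess multiplied by the specific rotation of the pure enantiomer. A minimal sketch, using a hypothetical compound whose pure (+)-form rotates light by +66°:

```python
def observed_rotation(frac_dextro, alpha_pure):
    """Net specific rotation of an enantiomer mixture.

    frac_dextro: mole fraction of the (+)-enantiomer (0 to 1)
    alpha_pure:  specific rotation of the pure (+)-enantiomer (degrees)
    """
    ee = 2 * frac_dextro - 1  # enantiomeric excess, -1 to 1
    return ee * alpha_pure

print(observed_rotation(1.00, 66.0))  # pure (+): +66.0 degrees
print(observed_rotation(0.50, 66.0))  # racemate: 0.0 degrees
print(observed_rotation(0.75, 66.0))  # 50% ee:   +33.0 degrees
```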
Partial to complete racemization in solution results from an SN1 mechanism, whereas complete inversion of configuration in a substitution reaction indicates an SN2 mechanism.
Physical properties
In the solid state, racemic mixtures may have different physical properties from either of the pure enantiomers because of the differential intermolecular interactions (see Biological Significance section). The change from a pure enantiomer to a racemate can change its density, melting point, solubility, heat of fusion, refractive index, and its various spectra. Crystallization of a racemate can result in separate (+) and (−) forms, or a single racemic compound. However, in liquid and gaseous states, racemic mixtures will behave with physical properties that are identical, or near identical, to their pure enantiomers.
Biological significance
In general, most biochemical reactions are stereoselective, so only one stereoisomer will produce the intended product while the other simply does not participate or can cause side-effects. Of note, the L form of amino acids and the D form of sugars (primarily glucose) are usually the biologically reactive forms. This is due to the fact that many biological molecules are chiral and thus the reactions between specific enantiomers produce pure stereoisomers. Also notable is the fact that amino acid residues in proteins almost always exist in the L form. However, bacteria produce D-amino acid residues that polymerize into short polypeptides which can be found in bacterial cell walls. These polypeptides are less digestible by peptidases and are synthesized by bacterial enzymes instead of by mRNA translation, which would normally produce L-amino acids.
The stereoselective nature of most biochemical reactions means that different enantiomers of a chemical may have different properties and effects on a person. Many psychotropic drugs show differing activity or efficacy between isomers: e.g. amphetamine is often dispensed as racemic salts, while the more active dextroamphetamine is reserved for refractory cases or more severe indications; another example is methadone, of which one isomer has activity as an opioid agonist and the other as an NMDA antagonist.
Racemization of pharmaceutical drugs can occur in vivo. Thalidomide as the (R) enantiomer is effective against morning sickness, while the (S) enantiomer is teratogenic, causing birth defects when taken in the first trimester of pregnancy. If only one enantiomer is administered to a human subject, both forms may be found later in the blood serum. The drug is therefore not considered safe for use by women of child-bearing age, and while it has other uses, its use is tightly controlled. Thalidomide can be used to treat multiple myeloma.
Another commonly used drug, ibuprofen, is anti-inflammatory as only one enantiomer, while the other is biologically inert. Likewise, in citalopram (Celexa), an antidepressant which inhibits serotonin reuptake, the (S) stereoisomer is much more active than the (R) enantiomer. The configurational stability of a drug is therefore an area of interest in pharmaceutical research. The production and analysis of enantiomers in the pharmaceutical industry is studied in the field of chiral organic synthesis.
Formation of racemic mixtures
Racemization can be achieved by simply mixing equal quantities of two pure enantiomers. Racemization can also occur in a chemical interconversion. For example, when (R)-3-phenyl-2-butanone is dissolved in aqueous ethanol that contains NaOH or HCl, a racemate is formed. The racemization occurs by way of an intermediate enol form in which the former stereocenter becomes planar and hence achiral. An incoming group can approach from either side of the plane, so there is an equal probability that protonation back to the chiral ketone will produce either an R or an S form, resulting in a racemate.
Racemization can occur through some of the following processes:
Substitution reactions that proceed through a free carbocation intermediate, such as unimolecular substitution reactions, lead to non-stereospecific addition of substituents which results in racemization.
Although unimolecular elimination reactions also proceed through a carbocation, they do not result in a chiral center. They result instead in a set of geometric isomers in which trans/cis (E/Z) forms are produced, rather than racemates.
In a unimolecular aliphatic electrophilic substitution reaction, if the carbanion is planar or if it cannot maintain a pyramidal structure, then racemization is expected, though it is not always observed.
In a free radical substitution reaction, if the formation of the free radical takes place at a chiral carbon, then racemization is almost always observed.
The rate of racemization (from L-forms to a mixture of L-forms and D-forms) has been used as a way of dating biological samples in tissues with slow rates of turnover, forensic samples, and fossils in geological deposits. This technique is known as amino acid dating.
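As a sketch of how such dating works under the simplest model (reversible first-order kinetics with equal forward and reverse rate constants and a pure-L starting composition), an age can be recovered from a measured D/L ratio; the rate constant below is hypothetical, since real values must be calibrated per amino acid and burial temperature:

```python
import math

def racemization_age(d_over_l, k):
    """Age from a D/L ratio, assuming ln((1 + r)/(1 - r)) = 2*k*t
    with a pure-L starting composition (r = D/L)."""
    r = d_over_l
    return math.log((1 + r) / (1 - r)) / (2 * k)

# Hypothetical rate constant, for illustration only:
k = 1.0e-5  # per year
print(f"{racemization_age(0.3, k):,.0f} years")  # ~30,952 years
```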
Discovery of optical activity
In 1848, Louis Pasteur discovered optical activity in paratartaric, or racemic, acid found in grape wine. He was able to separate crystals of the two enantiomers, which rotated polarized light in opposite directions.
| Physical sciences | Stereochemistry | Chemistry |
518913 | https://en.wikipedia.org/wiki/Longhorn%20beetle | Longhorn beetle | The longhorn beetles (Cerambycidae), also known as long-horned or longicorns (whose larvae are often referred to as roundheaded borers), are a large family of beetles, with over 35,000 species described.
Most species are characterized by antennae as long as or longer than the beetle's body. A few species have short antennae (e.g., Neandra brunnea), making them difficult to distinguish from related families such as Chrysomelidae. "Cerambycidae" comes from a Greek mythological figure: after an argument with nymphs, the shepherd Cerambus is transformed into a large beetle with horns.
Longhorn beetles are found on all continents except Antarctica.
Description
Other than the typical long antennal length, the most consistently distinctive feature of adults of this family is that the antennal sockets are located on low tubercles on the face; other beetles with long antennae lack these tubercles, and cerambycids with short antennae still possess them. They otherwise vary greatly in size, shape, sculpture, and coloration. A number of species mimic ants, bees, and wasps, though a majority of species are cryptically colored. The titan beetle (Titanus giganteus) from northeastern South America is often considered the largest insect, though not the heaviest, and not the longest including legs.
Larvae are elongate in shape and lightly sclerotised. The prothorax is often enlarged and the sides of the body have lateral swellings (ampullae). The head is usually retracted into the prothorax and bears well-sclerotised mouthparts. Larval legs range from moderately developed to absent. The spiracles are always annular.
Biology
Diet
All known longhorn beetle larvae feed on plant tissue such as stems, trunks, or roots of both herbaceous and woody plants, often in injured or weak trees. A few species are serious pests. The larvae, called roundheaded borers, bore into wood, where they can cause extensive damage to either living trees or untreated lumber (or, occasionally, to wood in buildings; the old-house borer, Hylotrupes bajulus, is a particular problem indoors).
Many longhorns locate and recognize potential hosts by detecting chemical attractants, including monoterpenes (compounds released en masse by woody plants when stressed), ethanol (another compound emitted by damaged plant material), and even bark beetle pheromones. Many scolytine weevils share the cerambycid's niche of weakened or recently deceased trees; thus, by locating scolytinids, a suitable host can likely be located as well. The arrival of cerambycid larvae is often detrimental to a population of scolytinids, as the cerambycid larvae will typically either outcompete them with their greater size and mobility, or act as direct predators of them (this latter practice is less common, but has been observed in several species, notably Monochamus carolinensis). Cerambycids, in turn, have been found to play a role in attracting other wood-borers to a host: Borgemeister et al. (1998) recorded that cerambycid activity in girdled twigs released volatiles attractive to some bostrichids, especially Prostephanus truncatus. A few cerambycids, such as Arhopalus sp., are adapted to take advantage of trees recently killed or injured by forest fires by detecting and pursuing smoke volatiles.
Adults of Lamiinae, most Lepturinae and some Cerambycinae also feed. Adults of Parandrinae, Prioninae and Spondylidinae do not feed. In those taxa with feeding adults, common foods are nectar, pollen, fruit and sap exudates. Some (mainly Lamiinae) feed on bark, plant stems, needles or developing cones. Roots are consumed by larvae and sometimes also adults of soil-dwelling Dorcadion. The genus Leiopus is known to feed on fungi. Lastly, the genus Elytroleptus is unusual in having carnivorous adults, which prey on lycid beetles.
Pollination
In addition to feeding on other plant tissue, some species feed on pollen or nectar and may act as pollinators. Assessing the efficacy of beetle pollinators is difficult. Even if pollination of one species by beetles is shown, that same beetle may also act as a flower predator toward other species. In some cases, beetles may act as both pollinators and predators on the same flowers.
Flowers specializing in pollination by beetles typically display a particular set of traits, but pollination by longhorn beetles is not limited to these cantharophilous flowers. A review of angiosperm pollination by beetles shows that Cerambycidae, along with Curculionidae and Scarabaeidae, contains many taxa that are pollinators for not only specialist but also generalist systems.
Beetles in the New Zealand genus Zorion are known to feed on pollen and have a specialized structure similar to that of pollen baskets found in bees. Species in this genus are thought to be important pollinator species for native plants such as harakeke.
Some orchid species have been found to be largely reliant on longhorn beetles for pollination. The species Alosterna tabacicolor was found to be the main pollinator of a rare orchid species (Dactylorhiza fuchsii) in Poland. Another rare orchid Disa forficaria, found in the Cape Floristic Region in South Africa, relies on the species Chorothyse hessei for pollination. D. forficaria uses sexual deception targeting male C. hessei, possibly indicating a long history of co-evolution with longhorn beetle pollinators.
The proportion of longhorn beetle species that act as pollinators is unknown. However, the fact that two longhorn species from distinct subfamilies (Lepturinae and Cerambycinae), found on different continents, both have significant roles as pollinators could suggest that some capacity for pollination may be common among longhorn beetles.
Predators
Parasitoids
In North America some native cerambycids are the hosts of Ontsira mellipes (a parasitoid wasp in the family Braconidae). O. mellipes may be useful in controlling a forestry pest in this same family, Anoplophora glabripennis, that is invasive in North America.
Classification
As with many large families, different authorities have tended to recognize many different subfamilies, or sometimes split subfamilies off as separate families entirely (e.g., Disteniidae, Oxypeltidae, and Vesperidae); there is thus some instability and controversy regarding the constituency of the Cerambycidae. There are few truly defining features for the group as a whole, at least as adults, as there are occasional species or species groups which may lack any given feature; the family and its closest relatives, therefore, constitute a taxonomically difficult group, and relationships of the various lineages are still poorly understood. The oldest unambiguous fossils of the family are Cretoprionus and Sinopraecipuus from Yixian Formation of Inner Mongolia and Liaoning, China, dating to the Aptian stage of the Early Cretaceous, approximately 122 million years ago. The former genus was assigned to the subfamily Prioninae in its original description, while the latter could not be placed in any extant subfamily. Qitianniu from the mid-Cretaceous Burmese amber of Myanmar, dating to approximately 100 million years ago, also could not be placed in any extant subfamily.
Subfamilies
The subfamilies of Cerambycidae are:
Apatophyseinae Lacordaire, 1869 (included in Bouchard 2011 but not Švácha 2014)
Cerambycinae Latreille, 1802
Dorcasominae Lacordaire, 1869 (Švácha 2014 includes Apatophyseinae in Dorcasominae)
Lamiinae Latreille, 1825
Lepturinae Latreille, 1802
Necydalinae Latreille, 1825
Parandrinae Blanchard, 1845
Prioninae Latreille, 1802
Spondylidinae Audinet-Serville, 1832 (including former Aseminae Thomson, 1860)
Most species (90.5%) are concentrated in the Cerambycinae and Lamiinae subfamilies.
Notable genera and species
Acrocinus longimanus – harlequin beetle, a large species where the male has very long front legs
Anoplophora chinensis – citrus longhorn beetle, a major pest
Anoplophora glabripennis – Asian longhorn beetle, an invasive pest species
Aridaeus thoracicus – tiger longicorn (Australia)
Cacosceles newmannii – Southern African longhorn beetle that is a sugarcane pest
Derobrachus hovorei – palo verde longhorn beetle
Desmocerus californicus dimorphus – valley elderberry longhorn beetle, a threatened subspecies from California
Moneilema – cactus longhorn beetles, which are flightless
Onychocerus albitarsis – the only known beetle with a venomous sting
Petrognatha gigas – giant African longhorn beetle
Prionoplus reticularis – huhu beetle, the heaviest beetle in New Zealand
Rosalia alpina – Rosalia longhorn beetle, a threatened European species
Stictoleptura rubra – red-brown longhorn beetle
Tetraopes tetrophthalmus – red milkweed longhorn beetle, a toxic species with aposematic colors
Tetropium fuscum – brown spruce longhorn beetle, an invasive pest species
Titanus giganteus – titan beetle, one of the largest beetles in the world
Zorion guttigerum – flower longhorn beetle, an important pollinator species
| Biology and health sciences | Beetles (Coleoptera) | Animals |
518971 | https://en.wikipedia.org/wiki/Vigna%20mungo | Vigna mungo | The black gram or urad bean (Vigna mungo) is a bean grown in South Asia. Like its relative, the mung bean, it has been reclassified from the Phaseolus to the Vigna genus. The product sold as black gram is usually the whole urad bean, whereas the split bean (the interior being white) is called white lentil. It should not be confused with the much smaller true black lentil (Lens culinaris).
Black gram originated in South Asia, where it has been in cultivation from ancient times and is one of the most highly prized pulses of India. It is very widely used in Indian cuisine. In India the black gram is one of the important pulses grown in both Kharif and Rabi seasons. This crop is extensively grown in the southern part of India and the northern part of Bangladesh and Nepal. In Bangladesh and Nepal it is known as mash daal. It is a popular daal (legume) side dish in South Asia that goes with curry and rice as a platter. Black gram has also been introduced to other tropical areas such as the Caribbean, Fiji, Mauritius, Myanmar and Africa mainly by Indian immigrants during the Indian indenture system.
Description
It is an erect, suberect or trailing, densely hairy annual bush. The tap root produces a branched root system with smooth, rounded nodules. The pods are narrow, cylindrical and up to six centimetres long. The plant grows 30–100 cm tall, with large hairy leaves and 4–6 cm seed pods. While the urad dal was, along with the mung bean, originally placed in Phaseolus, it has since been transferred to Vigna.
Cooking
Vigna mungo is popular in Northern India, largely used to make dal from the whole or split, dehusked seeds. The bean is boiled and eaten whole or, after splitting, made into dal; prepared like this it has an unusual mucilaginous texture.
Its usage is quite common in the Dogra cuisine of Jammu and the Lower Himachal region. The key ingredient of the Dal Maddhra or Maah Da Maddhra dish served in the Dogri Dhaam of Jammu is the Vigna mungo lentil. Similarly, another dish, Teliya Maah, popular in Jammu and Kangra, uses this lentil. Traditionally, the Vigna mungo lentil is used for preparing Dogra-style khichdi during the Panj Bhikham and Makar Sankranti festivals in Jammu and Lower Himachal. In addition, fermented Vigna mungo paste is used to prepare Lakhnapuri Bhalle or Lakhanpuri Laddu (a popular street food of the Jammu region).
In Uttarakhand Cuisine, Vigna mungo is used for preparing traditional dish called Chainsu or Chaisu.
In North Indian cuisine, it is used as an ingredient of dal makhani, a modern restaurant-style adaptation of the traditional sabut urad dal of Northern India.
In Bengal, it is used in kalai ruti and biulir dal. In Rajasthan, it is one of the ingredients of Panchmel dal, which is usually consumed with bati. In Pakistan, it is called Dhuli Mash ki daal and is used to make laddu Pethi walay and Bhalla.
It is also extensively used in South Indian culinary preparations. Black gram is one of the key ingredients in making idli and dosa batter, in which one part of black gram is mixed with three or four parts of idli rice to make the batter. Vada or udid vada also contain black gram and are made from soaked batter and deep-fried in cooking oil. The dough is also used in making papadum, in which white lentils are usually used.
In the Telugu states, it is eaten as a sweet in the form of laddoos called Sunnundallu or Minapa Sunnundallu.
Nutrition
It contains high levels of protein (25 g/100 g dry weight), potassium (983 mg/100 g), calcium (138 mg/100 g), iron (7.57 mg/100 g), niacin (1.447 mg/100 g), thiamine (0.273 mg/100 g), and riboflavin (0.254 mg/100 g). Black gram complements the essential amino acids provided in most cereals and plays an important role in the diets of the people of Nepal and India. Black gram is also very high in folate (628 μg/100 g raw, 216 μg/100 g cooked).
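Since these figures are quoted per 100 g of dry weight, scaling them to a portion is a one-line calculation; the 30 g serving size in this sketch is an arbitrary assumption:

```python
# Per-100 g figures quoted above, scaled to a serving; the 30 g serving
# size is an arbitrary assumption for the example.
per_100g = {
    "protein (g)":    25.0,
    "potassium (mg)": 983.0,
    "calcium (mg)":   138.0,
    "iron (mg)":      7.57,
}
serving_g = 30.0

for nutrient, amount in per_100g.items():
    print(f"{nutrient}: {amount * serving_g / 100:.2f}")
```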
Use in medieval crucible construction
In medieval India, this bean was used in a technique to facilitate making crucibles impermeable.
Names
Vigna mungo is known by various names across South and Southeast Asia. Its name in most languages of India derives from Proto-Dravidian *uẓ-untu-, borrowed into Sanskrit as uḍida:
Caribbean Hindustani/Fiji Hindi: उरदी दाल (urdi dāl)
Gujarati: અળદ (aḷad), અડદ (aḍad)
Hindi: उड़द दाल (uṛad dāl), उरद दाल (urad dāl)
Kannada: ಉದ್ದು (uddu), ಉದ್ದಿನ ಬೇಳೆ (uddina bēḷe)
Marathi/Konkani: उडीद (uḍid)
Malayalam: ഉഴുന്ന് (uẓhunnu)
Tamil: உளுந்து (uḷuntu/uḷundu), உளுத்தம் பாருப்பு (uḷutham paruppu)
Telugu: మినుములు (minumulu) and ఉద్ది పాప్పు (uddi pappu) in Rayalaseema dialect
Tulu: ಉರ್ದು ಸಲೈ (urdu salāyi)
Its name in selected Indic languages, however, derives from Sanskrit masa (माष) :
Dogri: 𑠢𑠬𑠪𑠹 𑠛𑠮 𑠛𑠬𑠥 / माह् दी दाल (māh di dāl)
Assamese: মাটিমাহ (mātimāh), মাটিকলাই (mātikolāi)
Bengali: মাসকালাই ডাল (mashkālāi ḍāl)
Nepali: कालो दाल (kālo dāl ), मास (mās)
Punjabi : ਮਾਂਹ / ਮਾਸ਼ ਦੀ ਦਾਲ (mãha/māsh di dāl)
Urdu: ماش کی دال (māsh ki dāl)
Other names include:
Odia: ବିରି ଡାଲି (biri ḍāli)
Meitei: ꯁꯒꯣꯜ ꯍꯋꯥꯏ (sagol hawāi)
Sinhala : උඳු (undu)
Myanmar: မတ်ပဲ (matpe)
Vietnamese: (đậu muồng ăn)
Thai: ถั่วดำ (thua dam)
Varieties
Pant Urd 31 (PU-31)
Lam Black Gram 884 (LBG 884)
Trombay Urd (TU 40)
Pant U-13
JU-2
Type-9
Barkha
Gwalior-2
Mutant varieties: CO-1 and Sarla.
Spring season varieties: Prabha and AKU-4.
First urad bean variety developed: T9 (1948).
| Biology and health sciences | Pulses | Plants |
518989 | https://en.wikipedia.org/wiki/Glia | Glia | Glia, also called glial cells (gliocytes) or neuroglia, are non-neuronal cells in the central nervous system (the brain and the spinal cord) and in the peripheral nervous system that do not produce electrical impulses. The neuroglia make up more than one half the volume of neural tissue in the human body. They maintain homeostasis, form myelin, and provide support and protection for neurons. In the central nervous system, glial cells include oligodendrocytes (that produce myelin), astrocytes, ependymal cells and microglia, and in the peripheral nervous system they include Schwann cells (that produce myelin), and satellite cells.
Function
They have four main functions:
to surround neurons and hold them in place
to supply nutrients and oxygen to neurons
to insulate one neuron from another
to destroy pathogens and remove dead neurons.
They also play a role in neurotransmission and synaptic connections, and in physiological processes such as breathing. While glia were thought to outnumber neurons by a ratio of 10:1, studies using newer methods and a reappraisal of historical quantitative evidence suggest an overall ratio of less than 1:1, with substantial variation between different brain tissues.
Glial cells have far more cellular diversity and functions than neurons, and can respond to and manipulate neurotransmission in many ways. Additionally, they can affect both the preservation and consolidation of memories.
Glia were discovered in 1856, by the pathologist Rudolf Virchow in his search for a "connective tissue" in the brain. The term derives from Greek γλία and γλοία "glue" ( or ), and suggests the original impression that they were the glue of the nervous system.
Types
Macroglia
Derived from ectodermal tissue.
Microglia
Microglia are specialized macrophages capable of phagocytosis that protect neurons of the central nervous system. They are derived from the earliest wave of mononuclear cells that originate in yolk sac blood islands early in development, and colonize the brain shortly after the neural precursors begin to differentiate.
These cells are found in all regions of the brain and spinal cord. Microglial cells are small relative to macroglial cells, with changing shapes and oblong nuclei. They are mobile within the brain and multiply when the brain is damaged. In the healthy central nervous system, microglia processes constantly sample all aspects of their environment (neurons, macroglia and blood vessels). In a healthy brain, microglia direct the immune response to brain damage and play an important role in the inflammation that accompanies the damage. Many diseases and disorders are associated with deficient microglia, such as Alzheimer's disease, Parkinson's disease and ALS.
Other
Pituicytes from the posterior pituitary are glial cells with characteristics in common to astrocytes. Tanycytes in the median eminence of the hypothalamus are a type of ependymal cell that descend from radial glia and line the base of the third ventricle. Drosophila melanogaster, the fruit fly, contains numerous glial types that are functionally similar to mammalian glia but are nonetheless classified differently.
Total number
In general, neuroglial cells are smaller than neurons. There are approximately 85 billion glial cells in the human brain, about the same number as neurons. Glial cells make up about half the total volume of the brain and spinal cord. The glia-to-neuron ratio varies from one part of the brain to another: in the cerebral cortex it is 3.72 (60.84 billion glia (72%); 16.34 billion neurons), while in the cerebellum it is only 0.23 (16.04 billion glia; 69.03 billion neurons). The ratio in the cerebral cortex gray matter is 1.48, with 3.76 for the gray and white matter combined. The ratio for the basal ganglia, diencephalon and brainstem combined is 11.35.
The total number of glial cells in the human brain is distributed into the different types, with oligodendrocytes being the most frequent (45–75%), followed by astrocytes (19–40%) and microglia (about 10% or less).
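The regional ratios quoted above are simply glia counts divided by neuron counts, which a quick check reproduces:

```python
# Reproducing the regional glia-to-neuron ratios from the cell counts
# given above (counts in billions):
regions = {
    "cerebral cortex": (60.84, 16.34),
    "cerebellum":      (16.04, 69.03),
}
for name, (glia, neurons) in regions.items():
    print(f"{name}: {glia / neurons:.2f}")
# cerebral cortex: 3.72, cerebellum: 0.23
```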
Development
Most glia are derived from ectodermal tissue of the developing embryo, in particular the neural tube and crest. The exception is microglia, which are derived from hematopoietic stem cells. In the adult, microglia are largely a self-renewing population and are distinct from macrophages and monocytes, which infiltrate an injured and diseased CNS.
In the central nervous system, glia develop from the ventricular zone of the neural tube. These glia include the oligodendrocytes, ependymal cells, and astrocytes. In the peripheral nervous system, glia derive from the neural crest. These PNS glia include Schwann cells in nerves and satellite glial cells in ganglia.
Capacity to divide
Glia retain the ability to undergo cell division in adulthood, whereas most neurons cannot. This view is based on the general inability of the mature nervous system to replace neurons after an injury, such as a stroke or trauma, where very often there is a substantial proliferation of glia, or gliosis, near or at the site of damage. However, detailed studies have found no evidence that 'mature' glia, such as astrocytes or oligodendrocytes, retain mitotic capacity. Only the resident oligodendrocyte precursor cells seem to keep this ability once the nervous system matures.
Glial cells are known to be capable of mitosis. By contrast, scientific understanding of whether neurons are permanently post-mitotic, or capable of mitosis, is still developing. In the past, glia had been considered to lack certain features of neurons. For example, glial cells were not believed to have chemical synapses or to release transmitters. They were considered to be the passive bystanders of neural transmission. However, recent studies have shown this to not be entirely true.
Functions
Some glial cells function primarily as the physical support for neurons. Others provide nutrients to neurons and regulate the extracellular fluid of the brain, especially surrounding neurons and their synapses. During early embryogenesis, glial cells direct the migration of neurons and produce molecules that modify the growth of axons and dendrites. Some glial cells display regional diversity in the CNS and their functions may vary between the CNS regions.
Neuron repair and development
Glia are crucial in the development of the nervous system and in processes such as synaptic plasticity and synaptogenesis. Glia have a role in the regulation of repair of neurons after injury. In the central nervous system (CNS), glia suppress repair: glial cells known as astrocytes enlarge and proliferate to form a scar and produce inhibitory molecules that inhibit regrowth of a damaged or severed axon. In the peripheral nervous system (PNS), glial cells known as Schwann cells (also called neurilemmocytes) promote repair: after axonal injury, Schwann cells regress to an earlier developmental state to encourage regrowth of the axon. This difference between the CNS and the PNS raises hopes for the regeneration of nervous tissue in the CNS; for example, a spinal cord might one day be repairable following injury or severance.
Myelin sheath creation
Oligodendrocytes are found in the CNS and resemble an octopus: they have bulbous cell bodies with up to fifteen arm-like processes. Each process reaches out to an axon and spirals around it, creating a myelin sheath. The myelin sheath insulates the nerve fiber from the extracellular fluid and speeds up signal conduction along the nerve fiber. In the peripheral nervous system, Schwann cells are responsible for myelin production. These cells envelop nerve fibers of the PNS by winding repeatedly around them. This process creates a myelin sheath, which not only aids in conductivity but also assists in the regeneration of damaged fibers.
Neurotransmission
Astrocytes are crucial participants in the tripartite synapse. They have several crucial functions, including clearance of neurotransmitters from within the synaptic cleft, which aids in distinguishing between separate action potentials and prevents toxic build-up of certain neurotransmitters such as glutamate, which would otherwise lead to excitotoxicity. Furthermore, astrocytes release gliotransmitters such as glutamate, ATP, and D-serine in response to stimulation.
Clinical significance
While glial cells in the PNS frequently assist in regeneration of lost neural functioning, loss of neurons in the CNS does not result in a similar reaction from neuroglia. In the CNS, regrowth will only happen if the trauma was mild, and not severe. When severe trauma presents itself, the survival of the remaining neurons becomes the optimal solution. However, some studies investigating the role of glial cells in Alzheimer's disease are beginning to contradict the usefulness of this feature, and even claim it can "exacerbate" the disease. In addition to affecting the potential repair of neurons in Alzheimer's disease, scarring and inflammation from glial cells have been further implicated in the degeneration of neurons caused by amyotrophic lateral sclerosis.
In addition to neurodegenerative diseases, a wide range of harmful exposure, such as hypoxia, or physical trauma, can lead to the result of physical damage to the CNS. Generally, when damage occurs to the CNS, glial cells cause apoptosis among the surrounding cellular bodies. Then, there is a large amount of microglial activity, which results in inflammation, and, finally, there is a heavy release of growth inhibiting molecules.
History
Although glial cells and neurons were probably first observed at the same time in the early 19th century, unlike neurons whose morphological and physiological properties were directly observable for the first investigators of the nervous system, glial cells had been considered to be merely "glue" that held neurons together until the mid-20th century.
Glia were first described in 1856 by the pathologist Rudolf Virchow in a comment to his 1846 publication on connective tissue. A more detailed description of glial cells was provided in the 1858 book 'Cellular Pathology' by the same author.
When markers for different types of cells were analyzed, Albert Einstein's brain was discovered to contain significantly more glia than normal brains in the left angular gyrus, an area thought to be responsible for mathematical processing and language. However, out of the total of 28 statistical comparisons between Einstein's brain and the control brains, finding one statistically significant result is not surprising, and the claim that Einstein's brain is different is not scientific (cf. the multiple comparisons problem).
Not only does the ratio of glia to neurons increase through evolution, but so does the size of the glia. Astroglial cells in human brains have a volume 27 times greater than in mouse brains.
These important scientific findings may begin to shift the neurocentric perspective into a more holistic view of the brain which encompasses the glial cells as well. For the majority of the twentieth century, scientists had disregarded glial cells as mere physical scaffolds for neurons. Recent publications have proposed that the number of glial cells in the brain is correlated with the intelligence of a species. Moreover, evidence is demonstrating the active role of glia, in particular astroglia, in cognitive processes like learning and memory.
| Biology and health sciences | Nervous system | Biology |
518999 | https://en.wikipedia.org/wiki/European%20badger | European badger | The European badger (Meles meles), also known as the Eurasian badger, is a badger species in the family Mustelidae native to Europe and West Asia and parts of Central Asia. It is classified as least concern on the IUCN Red List, as it has a wide range and a large, stable population size which is thought to be increasing in some regions. Several subspecies are recognized, with the nominate subspecies (M. m. meles) predominating in most of Europe. In Europe, where no other badger species commonly occurs, it is generally just called the "badger".
The European badger is a powerfully built, black, white, brown, and grey animal with a small head, a stocky body, small black eyes, and a short tail. Its weight varies seasonally, being lowest in spring (around 15–29 lb) and building up in autumn before the winter sleep period. It is nocturnal and is a social, burrowing animal that sleeps during the day in one of several setts in its territorial range. These burrows have multiple chambers and entrances, and are extensive systems of underground passages that house several badger families, which use the setts for decades. Badgers are fussy over the cleanliness of their burrow, carrying in fresh bedding and removing soiled material, and they defecate in latrines strategically situated outside their setts or en route to other setts.
Although taxonomically classified as a carnivoran, the European badger is an omnivore, feeding on a wide variety of plant and animal foods, including earthworms, large insects, small mammals, carrion, cereals, and tubers. Litters of up to five cubs are produced in spring. The young are weaned a few months later, but usually remain within the family group. The European badger has been known to share its burrow with other species, such as rabbits, red foxes, and raccoon dogs, but it can be ferocious when provoked, a trait which has been exploited in the now-illegal blood sport of badger-baiting. Like many wild and domesticated species of mammals, badgers can be carriers of bovine tuberculosis, which can spread between species and can be particularly detrimental to cattle. In England, badger populations are culled to try to reduce the incidence of bovine tuberculosis in cattle, although the efficacy of this practice is strongly disputed, and badger culls are widely considered cruel and inhumane.
Nomenclature
The source of the word "badger" is uncertain. The Oxford English Dictionary states it probably derives from "badge" + -ard, a reference to the white mark on its forehead that resembles a badge, and may date to the early 16th century. The French word ('digger') has also been suggested as a source. A male badger is a boar, a female is a sow, and a young badger is a cub. A badger's home is called a sett. Badger colonies are often called clans.
The far older name "brock" is a Celtic loanword (cf. the Gaelic and Welsh cognates, from a Proto-Celtic word) meaning 'grey'. The Proto-Germanic term (cf. the German, Dutch and Norwegian cognates; Early Modern English dasse) probably derives from the PIE root 'to construct', which suggests that the badger was named after its digging of setts (tunnels); in Latin glosses the Germanic term replaced the word for 'marten' or 'badger', and from these words the common Romance terms for the animal evolved in Italian, French, Catalan, Spanish and Portuguese, with Asturian the exception.
Until the mid-18th century, European badgers were variously known in English as brock, pate, grey, and bawson. The name "bawson" is derived from "bawsened", which refers to something striped with white. "Pate" is a local name that was once popular in northern England. The name "badget" was once common, but only used in Norfolk, while "earth dog" was used in southern Ireland. The badger is commonly referred to in Welsh as a ('earth pig').
Taxonomy
Ursus meles was the scientific name used by Carl Linnaeus in 1758, who described the badger in his work Systema Naturae.
Evolution
The species likely evolved from the Chinese Meles thorali of the early Pleistocene. The modern species originated during the early Middle Pleistocene, with fossil sites occurring in Episcopia, Grombasek, Süssenborn, Hundsheim, Erpfingen, Koněprusy, Mosbach 2, and Stránská Skála. A comparison between fossil and living specimens shows a marked progressive adaptation to omnivory, namely in the increase in the molars' surface areas and the modification of the carnassials. Occasionally, badger bones are discovered in earlier strata, due to the burrowing habits of the species.
Subspecies
In the 19th and 20th centuries, several badger type specimens were described and proposed as subspecies. Eight subspecies were recognized as valid taxa, but four (canescens, arcalus, rhodius, severzovi) are now considered to belong to a distinct species, the Caucasian badger (M. canescens).
Description
European badgers are powerfully built animals with small heads, thick, short necks, stocky, wedge-shaped bodies and short tails. Their feet are plantigrade or semidigitigrade and short, with five toes on each foot. The limbs are short and massive, with naked lower surfaces on the feet. The claws are strong, elongated and have an obtuse end, which assists in digging. The claws are not retractable, and the hind claws wear with age. Old badgers sometimes have their hind claws almost completely worn away from constant use. Their snouts, which are used for digging and probing, are muscular and flexible. The eyes are small and the ears short and tipped with white. Whiskers are present on the snout and above the eyes.
Boars typically have broader heads, thicker necks and narrower tails than sows, which are sleeker and have narrower, less domed heads and fluffier tails. The guts of badgers are longer than those of red foxes, reflecting their omnivorous diet. The small intestine lacks a cecum. Both sexes have three pairs of nipples, but these are more developed in females. European badgers cannot flex their backs as martens, polecats and wolverines can, nor can they stand fully erect like honey badgers, though they can move quickly at full gallop.
Males (or boars) slightly exceed females (or sows) in linear measurements, but can weigh considerably more. Their weights vary seasonally, growing from spring to autumn and reaching a peak just before winter, so badgers are markedly heavier in autumn than in summer.
The average weight of adults in the Białowieża Forest is 46% higher in autumn than at the spring low. Spring and autumn average weights have also been recorded at Woodchester Park, England. In Doñana National Park, adult badgers are comparatively light, perhaps in accordance with Bergmann's rule, since size decreases in relatively warmer climates. Exceptionally large boars have been reported in autumn; if unverified specimens are accepted, the species would hold the heaviest weight recorded for any terrestrial mustelid. If average weights are used, the European badger ranks as the second largest terrestrial mustelid, behind only the wolverine. Although their sense of smell is acute, their eyesight is monochromatic, as shown by their lack of reaction to red lanterns; only moving objects attract their attention. Their hearing is no better than that of humans.
European badger skulls are quite massive, heavy and elongated. Their braincases are oval in outline, while the facial part of their skulls is elongated and narrow. Adults have prominent sagittal crests which can reach 15 mm tall in old males, and are more strongly developed than those of honey badgers. Aside from anchoring the jaw muscles, the thickness of the crests protects their skulls from hard blows. Similar to martens, the dentition of European badgers is well-suited for their omnivorous diets. Their incisors are small and chisel-shaped, their canine teeth are prominent and their carnassials are not overly specialized. Their molars are flattened and adapted for grinding. Their jaws are powerful enough to crush most bones; a provoked badger was once reported as biting down on a man's wrist so severely that his hand had to be amputated.
Scent glands are present below the base of the tail and on the anus. The subcaudal gland secretes a musky-smelling, cream-coloured fatty substance, while the anal glands secrete a stronger-smelling, yellowish-brown fluid.
Fur
In winter, the fur on the back and flanks is long and coarse, consisting of bristly guard hairs with a sparse, soft undercoat. The belly fur consists of short, sparse hairs, with skin being visible in the inguinal region. Prior to the winter, the throat, lower neck, chest and legs are black. The belly is of a lighter, brownish tint, while the inguinal region is brownish-grey. The general colour of the back and sides is light silvery-grey, with straw-coloured highlights on the sides. The tail has long and coarse hairs, and is generally the same colour as the back. Two black bands pass along the head, starting from the upper lip and passing upwards to the whole base of the ears; the bands sometimes extend along the neck and merge with the colour of the upper body, and widen in the ear region. A wide, white band extends from the nose tip through the forehead and crown. White markings occur on the lower part of the head, and extend backwards to a great part of the neck's length. The summer fur is much coarser, shorter and sparser, and is deeper in colour, with the black tones becoming brownish, sometimes with yellowish tinges. Partial melanism in badgers is known, and albinos and leucists are not uncommon. Albino badgers can be pure white or yellowish with pink eyes, while leucistic ones have the same coloration but with normal eyes. Erythristic badgers are more common than the former, being characterized by a sandy-red colour on the usually black parts of the body. Yellow badgers are also known.
Distribution and habitat
The European badger is native to most of Europe. Its range includes Albania, Armenia, Austria, Belarus, Belgium, Bosnia and Herzegovina, Bulgaria, Crete, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Great Britain, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Macedonia, Moldova, Montenegro, Netherlands, Norway, Poland, Portugal, Romania, Russia, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland and Ukraine.
The distributional boundary between the ranges of European and Asian badgers is the Volga River, the European species being situated on the western bank. The boundary between the ranges of the European and Caucasian badgers is in the North Caucasus, but a clear boundary has not been defined, and they are sympatric in some regions, potentially forming a hybrid zone. They are common in European Russia, with 30,000 individuals having been recorded there in 1990. They are abundant and increasing throughout their range, partly due to a reduction in rabies in Central Europe. In the UK, badgers experienced a 77% increase in numbers during the 1980s and 1990s. The badger population in Great Britain in 2012 was estimated at 300,000.
The European badger is found in deciduous and mixed woodlands, clearings, spinneys, pastureland and scrub, including Mediterranean maquis shrubland. It has adapted to life in suburban areas and urban parks, although not to the extent of red foxes. In mountainous areas it occurs up to high altitudes.
Behaviour and ecology
Social and territorial behaviour
European badgers are the most social of badgers, forming groups of six adults on average, though larger associations of up to 23 individuals have been recorded. Group size may be related to habitat composition. Badger territories are smallest under optimal conditions and largest in marginal areas; they can be identified by the presence of communal latrines and well-worn paths. It is mainly males that are involved in territorial aggression. A hierarchical social system is thought to exist among badgers, and large powerful boars seem to assert dominance over smaller males. Large boars sometimes intrude into neighbouring territories during the main mating season in early spring.
Sparring and more vicious fights generally result from territorial defense in the breeding season. However, in general, animals within and outside a group show considerable tolerance of each other. Boars tend to mark their territories more actively than sows, with their territorial activity increasing during the mating season in early spring. Badgers groom each other very thoroughly with their claws and teeth. Grooming may have a social function. They are crepuscular and nocturnal in habits. Aggression among badgers is largely associated with territorial defence and mating. When fighting, they bite each other on the neck and rump, while running and chasing each other and injuries incurred in such fights can be severe and sometimes fatal. When attacked by dogs or sexually excited, badgers may raise their tails and fluff up their fur.
European badgers have an extensive vocal repertoire. When threatened, they emit deep growls and, when fighting, make low kekkering noises. They bark when surprised, whicker when playing or in distress, and emit a piercing scream when alarmed or frightened.
Denning behaviour
Like other badger species, European badgers are burrowing animals. However, the dens they construct (called setts) are the most complex, and are passed on from generation to generation. The number of exits in one sett can vary from a few to fifty. These setts can be vast, and can sometimes accommodate multiple families; when this happens, each family occupies its own passages and nesting chambers. Some setts have exits which are only used in times of danger or play. Three sleeping chambers occur in a family unit, some of which are open at both ends. The nesting chamber is located some distance from the opening and is situated deep underground.
Badgers dig and collect bedding throughout the year, particularly in autumn and spring. Sett maintenance is usually carried out by subordinate sows and dominant boars. The chambers are frequently lined with bedding, brought in on dry nights, which consists of grass, bracken, straw, leaves and moss. Up to 30 bundles can be carried to the sett on a single night. European badgers are fastidiously clean animals which regularly clear out and discard old bedding. During the winter, they may take their bedding outside on sunny mornings and retrieve it later in the day. Spring cleaning is connected with the birth of cubs, and may occur several times during the summer to prevent parasite levels building up.
If a badger dies within the sett, its conspecifics will seal off the chamber and dig a new one. Some badgers will drag their dead out of the sett and bury them outside. A sett is almost invariably located near a tree, which is used by badgers for stretching or claw scraping. Badgers defecate in latrines, which are located near the sett and at strategic locations on territorial boundaries or near places with abundant food supplies.
In extreme cases, when there is a lack of suitable burrowing grounds, badgers may move into haystacks in winter. They may share their setts with red foxes or European rabbits. The badgers may provide protection for the rabbits against other predators. The rabbits usually avoid predation by the badgers by inhabiting smaller, hard to reach chambers.
Reproduction and development
Estrus in European badgers lasts four to six days and may occur throughout the year, though there is a peak in spring. Sexual maturity in boars is usually attained at the age of twelve to fifteen months but this can range from nine months to two years. Males are normally fecund during January–May, with spermatogenesis declining in summer. Sows usually begin ovulating in their second year, though some exceptionally begin at nine months. They can mate at any time of the year, though the main peak occurs in February–May, when mature sows are in postpartal estrus and young animals experience their first estrus. Matings occurring outside this period typically occur in sows which either failed to mate earlier in the year or matured slowly. Badgers are usually monogamous; boars typically mate with one female for life, whereas sows have been known to mate with more than one male. Mating lasts for fifteen to sixty minutes, though the pair may briefly copulate for a minute or two when the sow is not in estrus. A delay of two to nine months precedes the fertilized eggs implanting into the wall of the uterus, though matings in December can result in immediate implantation. Ordinarily, implantation happens in December, with a gestation period lasting seven weeks. Cubs are usually born in mid-January to mid-March within underground chambers containing bedding. In areas where the countryside is waterlogged, cubs may be born above ground in buildings. Typically, only dominant sows can breed, as they suppress the reproduction of subordinate females.
The average litter consists of one to five cubs. Although many cubs are sired by resident males, up to 54% can be fathered by boars from different colonies. Dominant sows may kill the cubs of subordinates. Cubs are born pink, with greyish, silvery fur and fused eyelids; neonates are small, with cubs from large litters being smaller still. By three to five days, their claws become pigmented, and individual dark hairs begin to appear. Their eyes open at four to five weeks and their milk teeth erupt at about the same time. They emerge from their setts at eight weeks of age and begin to be weaned at twelve weeks, though they may still suckle until they are four to five months old. Subordinate females assist the mother in guarding, feeding and grooming the cubs. Cubs fully develop their adult coats at six to nine weeks. In areas with medium to high badger populations, dispersal from the natal group is uncommon, though badgers may temporarily visit other colonies. Badgers can live for up to about fifteen years in the wild.
Winter sleep
Badgers begin to prepare for winter sleep during late summer by accumulating fat reserves, which reach a peak in October. During this period, the sett is cleaned and the nesting chamber is filled with bedding. Upon retiring to sleep, badgers block their sett entrances with dry leaves and earth. They typically stop leaving their setts once snow has fallen. In Russia and the Nordic countries, European badgers retire for winter sleep from late October to mid-November and emerge from their setts in March and early April. In areas such as England and Transcaucasia, where winters are less harsh, badgers either forgo winter sleep entirely or spend long periods underground, emerging in mild spells.
Diet
European badgers are among the least carnivorous members of the Carnivora; they are highly adaptable and opportunistic omnivores, whose diet encompasses a wide range of animals and plants. Earthworms are their most important food source, followed by large insects, carrion, cereals, fruit and small mammals, including rabbits, mice, rats, voles, shrews, moles and hedgehogs. Insect prey includes chafers, dung and ground beetles, caterpillars, leatherjackets, and the nests of wasps and bumblebees. They are able to destroy wasp nests, consuming the occupants, combs, and envelope, such as that of Vespula rufa nests, since their thick skin and body hair protect the badgers from stings. Cereal food includes wheat, oats, maize and occasionally barley. Fruits include windfall apples, pears, plums, blackberries, bilberries, raspberries, cherries, strawberries, acorns, beechmast, pignuts and wild arum corms.
Occasionally, they feed on medium to large birds, amphibians, fish, small reptiles including tortoises and lizards, snails, slugs, fungi, tubers and green food such as clover and grass, particularly in winter and during droughts. Badgers characteristically capture large numbers of one food type in each hunt. They generally eat only a modest quantity of food per day, with young specimens yet to attain one year of age eating more than adults; an adult badger eats a quantity of food equal to about 3.4% of its body weight. Badgers typically eat prey on the spot and rarely transport it to their setts. Surplus killing has been observed in chicken coops.
Badgers prey on rabbits throughout the year, especially during times when their young are available. They catch young rabbits by locating their position in the nest by scent, then digging vertically down to them. In mountainous or hilly districts, where vegetable food is scarce, badgers rely on rabbits as a principal food source. Adult rabbits are usually avoided, unless they are wounded or caught in traps. Badgers consume rabbits by turning them inside out and eating the meat, leaving the inverted skin uneaten. Hedgehogs are eaten in a similar manner. In areas where badgers are common, hedgehogs are scarce. Some rogue badgers may kill lambs, though this is very rare; they may be erroneously implicated in lamb killings through the presence of discarded wool and bones near their setts, though foxes, which occasionally live alongside badgers, are often the culprits, as badgers do not transport food to their setts. When badgers do kill lambs, they typically do so by biting them behind the shoulder. Poultry and game birds are taken only rarely; some badgers may build their setts in close proximity to poultry or game farms without ever causing damage. In the rare instances in which badgers do kill reared birds, the killings usually occur in February–March, when food is scarce due to harsh weather and increases in badger populations. Badgers can easily breach bee hives with their jaws, and are mostly indifferent to bee stings, even when set upon by swarms.
Relationships with other non-human predators
European badgers have few natural enemies. While normally docile, badgers can become extremely aggressive and ferocious when cornered, making it dangerous for predators to target them. Grey wolves (Canis lupus), Eurasian lynxes (Lynx lynx) and brown bears (Ursus arctos), Europe's three largest remaining land predators, as well as large domestic dogs (C. familiaris), can pose a threat to adult badgers, though deaths caused by them are quantitatively rare: these predators are often limited in numbers by human persecution and usually prefer easier, larger prey such as ungulates, while a badger that is aware of a predator and cornered without an escape route may fight viciously. Badgers may live alongside red foxes (Vulpes vulpes) in isolated sections of large burrows. The two species possibly tolerate each other out of commensalism; foxes provide badgers with food scraps, while badgers maintain the shared burrow's cleanliness. However, cases are known of badgers driving vixens from their dens and destroying their litters without eating them. In turn, red foxes are known to have killed badger cubs in spring. Golden eagles (Aquila chrysaetos) are known predators of European badgers, and attacks by them on badger cubs are not infrequent, including cases where cubs have been snatched from directly beneath their mothers; even adult badgers may be attacked by this eagle species when emerging weak and hungry from hibernation. Eurasian eagle-owls (Bubo bubo) may also take an occasional cub, and other large raptors such as white-tailed eagles (Haliaeetus albicilla) and greater spotted eagles (Clanga clanga) are considered potential badger cub predators. Raccoon dogs may extensively use badger setts for shelter. There are many known cases of badgers and raccoon dogs wintering in the same hole, possibly because badgers enter hibernation two weeks earlier than raccoon dogs and leave two weeks later. In exceptional cases, badger and raccoon dog cubs may coexist in the same burrow. Badgers may drive out or kill raccoon dogs if they overstay their welcome.
Diseases and parasites
Bovine tuberculosis (bovine TB) caused by Mycobacterium bovis is a major mortality factor in badgers, though infected badgers can live and successfully breed for years before succumbing. The disease was first observed in badgers in 1951 in Switzerland where they were believed to have contracted it from chamois (Rupicapra rupicapra) or roe deer (Capreolus capreolus). It was detected in the United Kingdom in 1971 where it was linked to an outbreak of bovine TB in cows. The evidence appears to indicate that the badger is the primary reservoir of infection for cattle in the southwest of England, Wales and Ireland. Since then there has been considerable controversy as to whether culling badgers will effectively reduce or eliminate bovine TB in cattle.
Badgers are vulnerable to the mustelid herpesvirus-1, as well as rabies and canine distemper, though the latter two are absent in Great Britain. Other diseases found in European badgers include arteriosclerosis, pneumonia, pleurisy, nephritis, enteritis, polyarthritis and lymphosarcoma.
Internal parasites of badgers include trematodes, nematodes and several species of tapeworm. Ectoparasites carried by them include the fleas Paraceras melis (the badger flea), Chaetopsylla trichosa and Pulex irritans (the human flea), the lice Trichodectes melis and the ticks Ixodes ricinus, I. canisuga, I. hexagonus, I. reduvius and I. melicula. They also suffer from mange. Badgers spend much time grooming, individuals concentrating on their own ventral areas and alternating one side with the other, while social grooming occurs with one individual grooming another on its dorsal surface. In one study, fleas on a groomed host avoided the scratching by retreating rapidly downwards and backwards through the fur, in contrast to fleas away from their host, which ran upwards and jumped when disturbed; this suggests that grooming disadvantages fleas rather than merely serving a social function.
Conservation
The International Union for Conservation of Nature rates the European badger as being of least concern. This is because it is a relatively common species with a wide range and populations are generally stable. In Central Europe it has become more abundant in recent decades due to a reduction in the incidence of rabies. In other areas it has also fared well, with increases in numbers in Western Europe and the United Kingdom. However, in some areas of intensive agriculture its numbers have declined due to loss of habitat, and in others it is hunted as a pest.
Cultural significance
Badgers play a part in European folklore and are featured in modern literature. In Irish mythology, badgers are portrayed as shape-shifters and kinsmen to Tadg, the king of Tara and foster father of Cormac mac Airt. In one story, Tadg berates his adopted son for having killed and prepared some badgers for dinner. In German folklore, the badger is portrayed as a cautious, peace-loving Philistine, who loves more than anything his home, family and comfort, though he can become aggressive if surprised. He is a cousin of Reynard the Fox, whom he uselessly tries to convince to return to the path of righteousness.
In Kenneth Grahame's The Wind in the Willows, Mr. Badger is depicted as a gruff, solitary figure who "simply hates society", yet is a good friend to Mole and Ratty. As a friend of Toad's now-deceased father, he is often firm and serious with Toad, but at the same time generally patient and well-meaning towards him. He can be seen as a wise hermit, a good leader and gentleman, embodying common sense. He is also brave and a skilled fighter, and helps rid Toad Hall of invaders from the wild wood.
The "Frances" series of children's books by Russell and Lillian Hoban depicts an anthropomorphic badger family.
In T. H. White's Arthurian series The Once and Future King, the young King Arthur is transformed into a badger by Merlin as part of his education. He meets with an older badger who tells him "I can only teach you two things – to dig, and love your home."
A villainous badger named Tommy Brock appears in Beatrix Potter's 1912 book The Tale of Mr. Tod. He is shown kidnapping the children of Benjamin Bunny and his wife Flopsy, and hiding them in an oven at the home of Mr. Tod the fox, whom he fights at the end of the book. The portrayal of the badger as a filthy animal which appropriates fox dens was criticized from a naturalistic viewpoint, though the inconsistencies are few and employed to create individual characters rather than evoke an archetypical fox and badger. A wise old badger named Trufflehunter appears in C. S. Lewis' Prince Caspian, where he aids Caspian X in his struggle against King Miraz.
A badger takes a prominent role in Colin Dann's The Animals of Farthing Wood series as second in command to Fox. The badger is also the house symbol for Hufflepuff in the Harry Potter book series. The Redwall series also has the Badger Lords, who rule the extinct volcano fortress of Salamandastron and are renowned as fierce warriors. The children's television series Bodger & Badger was popular on CBBC during the 1990s and was set around the mishaps of a mashed potato-loving badger and his human companion.
An unnamed badger is part of the Bosnian Serb writer Petar Kočić's satirical play Badger on Tribunal, in which the local farmer David Štrbac attempts to sue a badger for eating his crops. The play is in fact highly critical of Austro-Hungarian rule in Bosnia and Herzegovina at the beginning of the 20th century. In honor of Kočić and his badger, a satirical theater in Banja Luka is named Jazavac (Badger).
Heraldry
The European badger appears on the coat of arms of the municipality of Luhanka in Central Finland, referring to the former importance of the fur trade in the locality. The badger is also the title animal of the Nurmijärvi municipality in Uusimaa, Finland, where it is a very common mammal.
Hunting
European badgers are of little significance to hunting economies, though they may be actively hunted locally. Methods used for hunting badgers include catching them in jaw traps, ambushing them at their setts with guns, smoking them out of their earths and through the use of specially bred dogs such as Fox Terriers and Dachshunds to dig them out. Badgers are, however, notoriously durable animals; their skins are thick, loose and covered in long hair which acts as protection, and their heavily ossified skulls allow them to shrug off most blunt traumas, as well as shotgun pellets.
Badger-baiting
Badger-baiting was once a popular blood sport, in which badgers were captured alive, placed in boxes, and attacked with dogs. In the UK, this was outlawed by the Cruelty to Animals Act 1835 and again by the Protection of Animals Act 1911. Moreover, cruelty towards and the killing of badgers constitute offences under the Protection of Badgers Act 1992, and further offences under this act are almost inevitably committed in the course of badger-baiting (such as interfering with a sett, or taking or possessing a badger for purposes other than nursing an injured animal back to health). If convicted, badger-baiters may face a sentence of up to six months in jail, a fine of up to £5,000, and other punitive measures, such as community service or a ban on owning dogs.
Culling
Many badgers in Europe were gassed during the 1960s and 1970s to control rabies. Until the 1980s, badger culling in the United Kingdom was undertaken in the form of gassing, to control the spread of bovine tuberculosis (bTB). Limited culling resumed in 1998 as part of a 10-year randomised trial cull, which John Krebs and others considered to show that culling was ineffective. Some groups called for a selective cull, while others favoured a programme of vaccination; vets supported the cull on compassionate grounds, arguing that the illness causes much suffering in badgers. In 2012, the government authorised a limited cull led by the Department for Environment, Food and Rural Affairs (Defra); however, this was later deferred, with a wide range of reasons given. In August 2013, a full culling programme began in which about 5,000 badgers were killed over six weeks in West Somerset and Gloucestershire by marksmen with high-velocity rifles, using a mixture of controlled shooting and free shooting (some badgers were trapped in cages first). The cull prompted many protests, with emotional, economic and scientific reasons cited: the badger is considered an iconic species of the British countryside, though it is not endangered. Shadow ministers claimed that "The government's own figures show it will cost more than it saves...", and Lord Krebs, who led the Randomised Badger Culling Trial in the 1990s, said the two pilots "will not yield any useful information". A scientific study of culling from 2013 to 2017 showed a reduction of 36–55% in the incidence of bovine tuberculosis in cattle.
Tameability
There are several accounts of European badgers being tamed. Tame badgers can be affectionate pets, and can be trained to come to their owners when their names are called. They are easily fed, as they are not fussy eaters, and will instinctively unearth rats, moles and young rabbits without training, though they do have a weakness for pork. Although there is one record of a tame badger befriending a fox, they generally do not tolerate the presence of cats and dogs, and will chase them.
Uses
Badger meat is eaten in some districts of the former Soviet Union, though in most cases it is discarded. Smoked hams made from badgers were once highly esteemed in England, Wales and Ireland.
Some badger products have been used for medical purposes; badger expert Ernest Neal, quoting from an 1810 edition of The Sporting Magazine, wrote:
The flesh, blood and grease of the badger are very useful for oils, ointments, salves and powders, for shortness of breath, the cough of the lungs, for the stone, sprained sinews, collachs etc. The skin being well dressed is very warm and comfortable for ancient people who are troubled with paralytic disorders.
The hair of the European badger has been used for centuries for making sporrans and shaving brushes. Sporrans are traditionally worn as part of male Scottish highland dress: they form a bag or pocket made from a pelt, and a badger's or other animal's mask may be used as a flap. The pelt was also formerly used for pistol furniture.
| Biology and health sciences | Carnivora | null |
519182 | https://en.wikipedia.org/wiki/Projection%20%28linear%20algebra%29 | Projection (linear algebra) | In linear algebra and functional analysis, a projection is a linear transformation $P$ from a vector space to itself (an endomorphism) such that $P \circ P = P$. That is, whenever $P$ is applied twice to any vector, it gives the same result as if it were applied once (i.e. $P$ is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
Definitions
A projection on a vector space $V$ is a linear operator $P : V \to V$ such that $P^2 = P$.
When $V$ has an inner product and is complete, i.e. when $V$ is a Hilbert space, the concept of orthogonality can be used. A projection $P$ on a Hilbert space $V$ is called an orthogonal projection if it satisfies $\langle P\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, P\mathbf{y} \rangle$ for all $\mathbf{x}, \mathbf{y} \in V$. A projection on a Hilbert space that is not orthogonal is called an oblique projection.
Projection matrix
A square matrix $P$ is called a projection matrix if it is equal to its square, i.e. if $P^2 = P$.
A square matrix $P$ is called an orthogonal projection matrix if $P^2 = P = P^{\mathrm{T}}$ for a real matrix, and respectively $P^2 = P = P^{*}$ for a complex matrix, where $P^{\mathrm{T}}$ denotes the transpose of $P$ and $P^{*}$ denotes the adjoint or Hermitian transpose of $P$.
A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix.
The eigenvalues of a projection matrix must be 0 or 1.
Examples
Orthogonal projection
For example, the function which maps the point $(x, y, z)$ in three-dimensional space $\mathbb{R}^3$ to the point $(x, y, 0)$ is an orthogonal projection onto the xy-plane. This function is represented by the matrix
$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The action of this matrix on an arbitrary vector is
$$P \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix}.$$
To see that $P$ is indeed a projection, i.e., $P = P^2$, we compute
$$P^2 \begin{bmatrix} x \\ y \\ z \end{bmatrix} = P \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} = P \begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
Observing that $P^{\mathrm{T}} = P$ shows that the projection is an orthogonal projection.
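As a quick numerical sanity check (an illustration added here, not part of the original article), the claimed properties of this matrix are easy to verify with NumPy:

```python
import numpy as np

# Orthogonal projection onto the xy-plane, as given above.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

assert np.allclose(P @ P, P)    # idempotent: P^2 = P
assert np.allclose(P.T, P)      # symmetric, hence an orthogonal projection
print(np.linalg.eigvalsh(P))    # [0. 1. 1.] -- eigenvalues are 0 or 1

x = np.array([3.0, -2.0, 7.0])
print(P @ x)                    # [ 3. -2.  0.]: the z-component is annihilated
```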
Oblique projection
A simple example of a non-orthogonal (oblique) projection is
$$P = \begin{bmatrix} 0 & 0 \\ \alpha & 1 \end{bmatrix}.$$
Via matrix multiplication, one sees that
$$P^2 = \begin{bmatrix} 0 & 0 \\ \alpha & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ \alpha & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ \alpha & 1 \end{bmatrix} = P,$$
showing that $P$ is indeed a projection.
The projection $P$ is orthogonal if and only if $\alpha = 0$, because only then $P^{\mathrm{T}} = P$.
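The same check for the oblique example (again a sketch added for illustration; the value of $\alpha$ is arbitrary):

```python
import numpy as np

alpha = 2.0
P = np.array([[0.0, 0.0],
              [alpha, 1.0]])

assert np.allclose(P @ P, P)   # idempotent, so P is a projection
print(np.allclose(P.T, P))     # False for alpha != 0: not an orthogonal projection
print(np.linalg.eigvals(P))    # the eigenvalues are still 0 and 1
```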
Properties and classification
Idempotence
By definition, a projection $P$ is idempotent (i.e. $P^2 = P$).
Open map
Every projection is an open map onto its image, meaning that it maps each open set in the domain to an open set in the subspace topology of the image. That is, for any vector $\mathbf{x}$ and any ball $B_\mathbf{x}$ (with positive radius) centered on $\mathbf{x}$, there exists a ball $B_{P\mathbf{x}}$ (with positive radius) centered on $P\mathbf{x}$ that is wholly contained in the image $P(B_\mathbf{x})$.
Complementarity of image and kernel
Let $W$ be a finite-dimensional vector space and $P$ be a projection on $W$. Suppose the subspaces $U$ and $V$ are the image and kernel of $P$ respectively. Then $P$ has the following properties:
$P$ is the identity operator $I$ on $U$: $\forall \mathbf{x} \in U : P\mathbf{x} = \mathbf{x}.$
We have a direct sum $W = U \oplus V$. Every vector $\mathbf{x} \in W$ may be decomposed uniquely as $\mathbf{x} = \mathbf{u} + \mathbf{v}$ with $\mathbf{u} = P\mathbf{x}$ and $\mathbf{v} = \mathbf{x} - P\mathbf{x} = (I - P)\mathbf{x}$, and where $\mathbf{u} \in U$, $\mathbf{v} \in V$.
The image and kernel of a projection are complementary, as are $P$ and $Q = I - P$. The operator $Q$ is also a projection, as the image and kernel of $P$ become the kernel and image of $Q$ and vice versa. We say $P$ is a projection along $V$ onto $U$ (kernel/image) and $Q$ is a projection along $U$ onto $V$.
Spectrum
In infinite-dimensional vector spaces, the spectrum of a projection is contained in $\{0, 1\}$, as
$$(\lambda I - P)^{-1} = \frac{1}{\lambda} I + \frac{1}{\lambda(\lambda - 1)} P.$$
Only 0 or 1 can be an eigenvalue of a projection. This implies that an orthogonal projection $P$ is always a positive semi-definite matrix. In general, the corresponding eigenspaces are (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspace $V$, there may be many projections whose range (or kernel) is $V$.
If a projection is nontrivial it has minimal polynomial $x^2 - x = x(x - 1)$, which factors into distinct linear factors, and thus $P$ is diagonalizable.
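The resolvent identity quoted above can also be checked numerically; the following sketch (my addition) reuses the oblique 2×2 projection from the earlier example and an arbitrary scalar $\lambda \notin \{0, 1\}$:

```python
import numpy as np

P = np.array([[0.0, 0.0],
              [2.0, 1.0]])   # a projection: P @ P == P
I = np.eye(2)
lam = 3.7                    # any scalar other than 0 and 1

lhs = np.linalg.inv(lam * I - P)
rhs = I / lam + P / (lam * (lam - 1.0))
assert np.allclose(lhs, rhs)  # (lam*I - P)^(-1) = I/lam + P/(lam*(lam - 1))
```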
Product of projections
The product of projections is not in general a projection, even if they are orthogonal. If two projections commute then their product is a projection, but the converse is false: the product of two non-commuting projections may be a projection.
If two orthogonal projections commute then their product is an orthogonal projection. If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjoint endomorphisms commute if and only if their product is self-adjoint).
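For a concrete instance of the commuting case (an added illustration, not from the source), take the orthogonal projections onto the xy- and xz-planes in $\mathbb{R}^3$; they commute, and their product is the orthogonal projection onto the shared x-axis:

```python
import numpy as np

P_xy = np.diag([1.0, 1.0, 0.0])   # orthogonal projection onto the xy-plane
P_xz = np.diag([1.0, 0.0, 1.0])   # orthogonal projection onto the xz-plane

assert np.allclose(P_xy @ P_xz, P_xz @ P_xy)   # the two projections commute
prod = P_xy @ P_xz
assert np.allclose(prod @ prod, prod)          # their product is again a projection
print(prod)                                    # diag(1, 0, 0): projection onto the x-axis
```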
Orthogonal projections
When the vector space $W$ has an inner product and is complete (is a Hilbert space) the concept of orthogonality can be used. An orthogonal projection is a projection for which the range $U$ and the kernel $V$ are orthogonal subspaces. Thus, for every $\mathbf{x}$ and $\mathbf{y}$ in $W$, $\langle P\mathbf{x}, (\mathbf{y} - P\mathbf{y}) \rangle = \langle (\mathbf{x} - P\mathbf{x}), P\mathbf{y} \rangle = 0$. Equivalently:
$$\langle \mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, \mathbf{y} \rangle.$$
A projection is orthogonal if and only if it is self-adjoint. Using the self-adjoint and idempotent properties of $P$, for any $\mathbf{x}$ and $\mathbf{y}$ in $W$ we have $P\mathbf{x} \in U$, $\mathbf{y} - P\mathbf{y} \in V$, and
$$\langle P\mathbf{x}, \mathbf{y} - P\mathbf{y} \rangle = \langle \mathbf{x}, (P - P^2)\mathbf{y} \rangle = 0,$$
where $\langle \cdot, \cdot \rangle$ is the inner product associated with $W$. Therefore, $P$ and $I - P$ are orthogonal projections. The other direction, namely that if $P$ is orthogonal then it is self-adjoint, follows from the implication from $\langle (\mathbf{x} - P\mathbf{x}), P\mathbf{y} \rangle = \langle P\mathbf{x}, (\mathbf{y} - P\mathbf{y}) \rangle = 0$ to
$$\langle \mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, \mathbf{y} \rangle$$
for every $\mathbf{x}$ and $\mathbf{y}$ in $W$; thus $P = P^{*}$.
The existence of an orthogonal projection onto a closed subspace follows from the Hilbert projection theorem.
Properties and special cases
An orthogonal projection is a bounded operator. This is because for every $\mathbf{x}$ in the vector space we have, by the Cauchy–Schwarz inequality:
$$\|P\mathbf{x}\|^2 = \langle P\mathbf{x}, P\mathbf{x} \rangle = \langle P\mathbf{x}, \mathbf{x} \rangle \le \|P\mathbf{x}\| \cdot \|\mathbf{x}\|.$$
Thus $\|P\mathbf{x}\| \le \|\mathbf{x}\|$.
For finite-dimensional complex or real vector spaces, the standard inner product can be substituted for $\langle \cdot, \cdot \rangle$.
Formulas
A simple case occurs when the orthogonal projection is onto a line. If $\mathbf{u}$ is a unit vector on the line, then the projection is given by the outer product
$$P_\mathbf{u} = \mathbf{u} \mathbf{u}^{\mathrm{T}}.$$
(If $\mathbf{u}$ is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose.) This operator leaves $\mathbf{u}$ invariant, and it annihilates all vectors orthogonal to $\mathbf{u}$, proving that it is indeed the orthogonal projection onto the line containing $\mathbf{u}$. A simple way to see this is to consider an arbitrary vector $\mathbf{x}$ as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, $\mathbf{x} = \mathbf{x}_\parallel + \mathbf{x}_\perp$. Applying projection, we get
$$P_\mathbf{u} \mathbf{x} = \mathbf{u} \mathbf{u}^{\mathrm{T}} \mathbf{x}_\parallel + \mathbf{u} \mathbf{u}^{\mathrm{T}} \mathbf{x}_\perp = \mathbf{x}_\parallel + \mathbf{0} = \mathbf{x}_\parallel$$
by the properties of the dot product of parallel and perpendicular vectors.
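In code, the outer-product construction looks as follows (a minimal sketch, with an arbitrarily chosen direction vector):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)      # unit vector spanning the line

P = np.outer(u, u)             # P_u = u u^T
assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P @ u, u)   # leaves u invariant

x = np.array([4.0, 0.0, 1.0])
x_par = P @ x                  # component of x along the line
x_perp = x - x_par
print(np.dot(x_perp, u))       # ~0: the residual is orthogonal to the line
```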
This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension. Let $\mathbf{u}_1, \ldots, \mathbf{u}_k$ be an orthonormal basis of the subspace $U$, with the assumption that the integer $k \geq 1$, and let $A$ denote the $n \times k$ matrix whose columns are $\mathbf{u}_1, \ldots, \mathbf{u}_k$, i.e., $A = \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_k \end{bmatrix}$. Then the projection is given by:
$$P_A = A A^{\mathrm{T}},$$
which can be rewritten as
$$P_A = \sum_i \mathbf{u}_i \mathbf{u}_i^{\mathrm{T}}.$$
The matrix $A^{\mathrm{T}}$ is the partial isometry that vanishes on the orthogonal complement of $U$, and $A$ is the isometry that embeds $U$ into the underlying vector space. The range of $P_A$ is therefore the final space of $A$. It is also clear that $A^{\mathrm{T}} A$ is the identity operator on $U$.
The orthonormality condition can also be dropped. If $\mathbf{u}_1, \ldots, \mathbf{u}_k$ is a (not necessarily orthonormal) basis with $k \geq 1$, and $A$ is the matrix with these vectors as columns, then the projection is:
$$P_A = A (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}}.$$
The matrix $A$ still embeds $U$ into the underlying vector space but is no longer an isometry in general. The matrix $(A^{\mathrm{T}} A)^{-1}$ is a "normalizing factor" that recovers the norm. For example, the rank-1 operator $\mathbf{u} \mathbf{u}^{\mathrm{T}}$ is not a projection if $\|\mathbf{u}\| \neq 1$. After dividing by $\mathbf{u}^{\mathrm{T}} \mathbf{u} = \|\mathbf{u}\|^2$, we obtain the projection $\mathbf{u} (\mathbf{u}^{\mathrm{T}} \mathbf{u})^{-1} \mathbf{u}^{\mathrm{T}}$ onto the subspace spanned by $\mathbf{u}$.
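A sketch of the general-basis formula (added for illustration; the basis is random, and in numerical practice one would solve the normal equations rather than form the explicit inverse):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))            # columns: a (non-orthonormal) basis of U

P = A @ np.linalg.inv(A.T @ A) @ A.T   # P_A = A (A^T A)^(-1) A^T
assert np.allclose(P @ P, P)           # a projection
assert np.allclose(P.T, P)             # an orthogonal one
assert np.allclose(P @ A, A)           # acts as the identity on the columns of A
```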
In the general case, we can have an arbitrary positive definite matrix $D$ defining an inner product $\langle x, y \rangle_D = y^{*} D x$, and the projection $P_A$ is given by $P_A x = \operatorname{argmin}_{y \in \operatorname{range}(A)} \|x - y\|_D^2$. Then
$$P_A = A (A^{\mathrm{T}} D A)^{-1} A^{\mathrm{T}} D.$$
When the range space of the projection is generated by a frame (i.e. the number of generators is greater than its dimension), the formula for the projection takes the form $P = A A^{+}$. Here $A^{+}$ stands for the Moore–Penrose pseudoinverse. This is just one of many ways to construct the projection operator.
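The pseudoinverse form handles redundant generators directly; here is a small sketch (my addition) in which three vectors redundantly span a two-dimensional subspace of $\mathbb{R}^4$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 2))        # a basis of a 2-dimensional subspace of R^4
A = B @ rng.normal(size=(2, 3))    # three redundant generators (a frame) of that subspace

P = A @ np.linalg.pinv(A)          # P = A A^+ via the Moore-Penrose pseudoinverse
assert np.allclose(P @ P, P) and np.allclose(P.T, P)
print(np.linalg.matrix_rank(P))    # 2: the dimension of the spanned subspace
```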
If $\begin{bmatrix} A & B \end{bmatrix}$ is a non-singular matrix and $A^{\mathrm{T}} B = 0$ (i.e., $B$ is the null space matrix of $A$), the following holds:
$$I = A (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}} + B (B^{\mathrm{T}} B)^{-1} B^{\mathrm{T}}.$$
If the orthogonal condition is enhanced to $A^{\mathrm{T}} W B = A^{\mathrm{T}} W^{\mathrm{T}} B = 0$ with $W$ non-singular, the following holds:
$$I = A (A^{\mathrm{T}} W A)^{-1} A^{\mathrm{T}} W + B (B^{\mathrm{T}} W B)^{-1} B^{\mathrm{T}} W.$$
All these formulas also hold for complex inner product spaces, provided that the conjugate transpose is used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014). Also see Banerjee (2004) for application of sums of projectors in basic spherical trigonometry.
Oblique projections
The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection.
A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection.
A matrix representation formula for a nonzero projection operator
Let $P : V \to V$ be a linear operator such that $P^2 = P$ and assume that $P$ is not the zero operator. Let the vectors $\mathbf{u}_1, \ldots, \mathbf{u}_k$ form a basis for the range of $P$, and assemble these vectors in the $n \times k$ matrix $A$. Then $1 \leq k \leq n$, otherwise $k = 0$ and $P$ is the zero operator. The range and the kernel are complementary spaces, so the kernel has dimension $n - k$. It follows that the orthogonal complement of the kernel has dimension $k$. Let $\mathbf{v}_1, \ldots, \mathbf{v}_k$ form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors in the matrix $B$. Then the projection $P$ (under the above full-rank conditions) is given by
$$P = A (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}}.$$
This expression generalizes the formula for orthogonal projections given above. A standard proof of this expression is the following. For any vector $\mathbf{x}$ in the vector space $V$, we can decompose $\mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2$, where vector $\mathbf{x}_1 = P\mathbf{x}$ is in the image of $P$, and vector $\mathbf{x}_2 = \mathbf{x} - P\mathbf{x}$. So $P\mathbf{x}_2 = P\mathbf{x} - P^2\mathbf{x} = \mathbf{0}$, and then $\mathbf{x}_2$ is in the kernel of $P$, which is the null space of $B^{\mathrm{T}}$. In other words, the vector $\mathbf{x}_1$ is in the column space of $A$, so $\mathbf{x}_1 = A\mathbf{w}$ for some dimension-$k$ vector $\mathbf{w}$, and the vector $\mathbf{x}_2$ satisfies $B^{\mathrm{T}}\mathbf{x}_2 = \mathbf{0}$ by the construction of $B$. Put these conditions together, and we find a vector $\mathbf{w}$ so that $B^{\mathrm{T}}\mathbf{x} = B^{\mathrm{T}} A \mathbf{w}$. Since matrices $A$ and $B$ are of full rank $k$ by their construction, the $k \times k$ matrix $B^{\mathrm{T}} A$ is invertible. So the equation gives the vector $\mathbf{w} = (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}} \mathbf{x}$. In this way, $P\mathbf{x} = \mathbf{x}_1 = A (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}} \mathbf{x}$ for any vector $\mathbf{x}$, and hence $P = A (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}}$.
In the case that $P$ is an orthogonal projection, we can take $B = A$, and it follows that $P = A (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}}$. By using this formula, one can easily check that $P = P^{\mathrm{T}}$ and $P^2 = P$. In general, if the vector space is over the complex number field, one then uses the Hermitian transpose $A^{*}$ and has the formula $P = A (A^{*} A)^{-1} A^{*}$. Recall that one can express the Moore–Penrose inverse of the matrix $A$ by $A^{+} = (A^{*} A)^{-1} A^{*}$, since $A$ has full column rank, so $P = A A^{+}$.
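The formula $P = A (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}}$ is equally easy to exercise numerically (an added sketch; $A$ and $B$ are random full-rank matrices, so $B^{\mathrm{T}} A$ is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2
A = rng.normal(size=(n, k))   # basis of the desired range
B = rng.normal(size=(n, k))   # basis of the orthogonal complement of the desired kernel

P = A @ np.linalg.inv(B.T @ A) @ B.T   # P = A (B^T A)^(-1) B^T
assert np.allclose(P @ P, P)           # idempotent
assert np.allclose(P @ A, A)           # identity on the range
print(np.allclose(P.T, P))             # generally False: an oblique projection
```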
Singular values
$I - P$ is also an oblique projection. The singular values of $P$ and $I - P$ can be computed from an orthonormal basis of the range of $A$. Let
$Q_A$ be an orthonormal basis of the range of $A$ and let $Q_A^{\perp}$ be an orthonormal basis of its orthogonal complement. Denote the singular values of the matrix
$$T = Q_A^{\mathrm{T}} \, A (B^{\mathrm{T}} A)^{-1} B^{\mathrm{T}} \, Q_A^{\perp}$$
by the positive values $\gamma_1 \geq \gamma_2 \geq \ldots \geq \gamma_k$. With this, the singular values for $P$ are:
$$\sigma_i = \begin{cases} \sqrt{1 + \gamma_i^2} & 1 \leq i \leq k \\ 0 & \text{otherwise} \end{cases}$$
and the singular values for $I - P$ are
$$\sigma_i = \begin{cases} \sqrt{1 + \gamma_i^2} & 1 \leq i \leq k \\ 1 & k + 1 \leq i \leq n - k \\ 0 & \text{otherwise.} \end{cases}$$
This implies that the largest singular values of $P$ and $I - P$ are equal, and thus that the matrix norms of the two oblique projections are the same. However, the condition numbers are not necessarily equal; they satisfy $\kappa(I - P) \geq \kappa(P)$, since the smallest non-zero singular value of $I - P$ can be 1 while that of $P$ is $\sqrt{1 + \gamma_k^2}$.
Finding projection with an inner product
Let $V$ be a vector space (in this case a plane) spanned by orthogonal vectors $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_p$. Let $\mathbf{y}$ be a vector. One can define a projection of $\mathbf{y}$ onto $V$ as
$$\operatorname{proj}_V \mathbf{y} = \frac{\mathbf{y} \cdot \mathbf{u}^i}{\mathbf{u}^i \cdot \mathbf{u}^i} \mathbf{u}^i,$$
where repeated indices are summed over (Einstein sum notation). The vector $\mathbf{y}$ can be written as an orthogonal sum such that $\mathbf{y} = \operatorname{proj}_V \mathbf{y} + \mathbf{z}$, where $\mathbf{z} = \mathbf{y} - \operatorname{proj}_V \mathbf{y}$. $\operatorname{proj}_V \mathbf{y}$ is sometimes denoted as $\hat{\mathbf{y}}$. There is a theorem in linear algebra stating that $\|\mathbf{z}\|$ is the smallest distance (the orthogonal distance) from $\mathbf{y}$ to $V$; this fact is commonly used in areas such as machine learning.
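A direct transcription of this sum (an added sketch with two invented orthogonal spanning vectors):

```python
import numpy as np

# Two orthogonal (not necessarily unit) vectors spanning a plane in R^3.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 2.0])
assert np.isclose(np.dot(u1, u2), 0.0)

y = np.array([2.0, 5.0, -1.0])
y_hat = sum((np.dot(y, u) / np.dot(u, u)) * u for u in (u1, u2))  # proj_V y

z = y - y_hat                        # orthogonal residual: y = y_hat + z
print(np.dot(z, u1), np.dot(z, u2))  # both ~0, confirming orthogonality
```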
Canonical forms
Any projection $P = P^2$ on a vector space of dimension $d$ over a field is a diagonalizable matrix, since its minimal polynomial divides $x^2 - x$, which splits into distinct linear factors. Thus there exists a basis in which $P$ has the form
$$P = I_r \oplus 0_{d - r}$$
where $r$ is the rank of $P$. Here $I_r$ is the identity matrix of size $r$, $0_{d-r}$ is the zero matrix of size $d - r$, and $\oplus$ is the direct sum operator. If the vector space is complex and equipped with an inner product, then there is an orthonormal basis in which the matrix of $P$ is
$$P = I_m \oplus 0_s \oplus \begin{bmatrix} 1 & \sigma_1 \\ 0 & 0 \end{bmatrix} \oplus \cdots \oplus \begin{bmatrix} 1 & \sigma_k \\ 0 & 0 \end{bmatrix}$$
where $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_k > 0$. The integers $k, s, m$ and the real numbers $\sigma_i$ are uniquely determined, and $2k + s + m = d$. The factor $I_m \oplus 0_s$ corresponds to the maximal invariant subspace on which $P$ acts as an orthogonal projection (so that $P$ itself is orthogonal if and only if $k = 0$) and the $\sigma_i$-blocks correspond to the oblique components.
Projections on normed vector spaces
When the underlying vector space $X$ is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now $X$ is a Banach space.
Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of $X$ into complementary subspaces still specifies a projection, and vice versa. If $X$ is the direct sum $X = U \oplus V$, then the operator defined by $P(u + v) = u$ is still a projection with range $U$ and kernel $V$. It is also clear that $P^2 = P$. Conversely, if $P$ is a projection on $X$, i.e. $P^2 = P$, then it is easily verified that $(1 - P)^2 = (1 - P)$. In other words, $1 - P$ is also a projection. The relation $P^2 = P$ implies $1 = P + (1 - P)$, and $X$ is the direct sum $\operatorname{ran}(P) \oplus \operatorname{ker}(P)$.
However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace $U$ of $X$ is not closed in the norm topology, then the projection onto $U$ is not continuous. In other words, the range of a continuous projection $P$ must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, of a continuous linear operator in general) is closed. Thus a continuous projection $P$ gives a decomposition of $X$ into two complementary closed subspaces: $X = \operatorname{ran}(P) \oplus \operatorname{ker}(P)$.
The converse holds also, with an additional assumption. Suppose $U$ is a closed subspace of $X$. If there exists a closed subspace $V$ such that $X = U \oplus V$, then the projection $P$ with range $U$ and kernel $V$ is continuous. This follows from the closed graph theorem. Suppose $x_n \to x$ and $P x_n \to y$. One needs to show that $P x = y$. Since $U$ is closed and $\{P x_n\} \subset U$, $y$ lies in $U$, i.e. $P y = y$. Also, $x_n - P x_n = (I - P) x_n \to x - y$. Because $V$ is closed and $\{(I - P) x_n\} \subset V$, we have $x - y \in V$, i.e. $P(x - y) = P x - P y = P x - y = 0$, which proves the claim.
The above argument makes use of the assumption that both $U$ and $V$ are closed. In general, given a closed subspace $U$, there need not exist a complementary closed subspace $V$, although for Hilbert spaces this can always be done by taking the orthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of the Hahn–Banach theorem. Let $U$ be the linear span of $\mathbf{u}$. By Hahn–Banach, there exists a bounded linear functional $\varphi$ such that $\varphi(\mathbf{u}) = 1$. The operator $P(\mathbf{x}) = \varphi(\mathbf{x}) \mathbf{u}$ satisfies $P^2 = P$, i.e. it is a projection. Boundedness of $\varphi$ implies continuity of $P$ and therefore $\operatorname{ker}(P) = \operatorname{ran}(I - P)$ is a closed complementary subspace of $U$.
Applications and further considerations
Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems:
QR decomposition (see Householder transformation and Gram–Schmidt decomposition);
Singular value decomposition
Reduction to Hessenberg form (the first step in many eigenvalue algorithms)
Linear regression
Projective elements of matrix algebras are used in the construction of certain K-groups in Operator K-theory
As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions. Idempotents are used in classifying, for instance, semisimple algebras, while measure theory begins with considering characteristic functions of measurable sets. Therefore, as one can imagine, projections are very often encountered in the context of operator algebras. In particular, a von Neumann algebra is generated by its complete lattice of projections.
Generalizations
More generally, given a map between normed vector spaces $T : V \to W$, one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that $T : (\operatorname{ker} T)^{\perp} \to W$ be an isometry (compare partial isometry); in particular it must be onto. The case of an orthogonal projection is when $W$ is a subspace of $V$. In Riemannian geometry, this is used in the definition of a Riemannian submersion.
| Mathematics | Linear algebra | null |
519280 | https://en.wikipedia.org/wiki/Intelligence | Intelligence | Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
The term rose to prominence during the early 1900s. Most psychologists believe that intelligence can be divided into various domains or competencies.
Intelligence has been long-studied in humans, and across numerous disciplines. It has also been observed in the cognition of non-human animals. Some researchers have suggested that plants exhibit forms of intelligence, though this remains controversial.
Intelligence in computers or other machines is called artificial intelligence.
Etymology
The word intelligence derives from the Latin nouns intelligentia or intellēctus, which in turn stem from the verb intelligere, to comprehend or perceive. In the Middle Ages, the word intellectus became the scholarly technical term for understanding and a translation for the Greek philosophical term nous. This term, however, was strongly linked to the metaphysical and cosmological theories of teleological scholasticism, including theories of the immortality of the soul and the concept of the active intellect (also known as the active intelligence). This approach to the study of nature was strongly rejected by early modern philosophers such as Francis Bacon, Thomas Hobbes, John Locke, and David Hume, all of whom preferred "understanding" (in place of "intellectus" or "intelligence") in their English philosophical works. Hobbes, for example, in his Latin De Corpore, used "intellectus intelligit", translated in the English version as "the understanding understandeth", as a typical example of a logical absurdity. "Intelligence" has therefore become less common in English-language philosophy, but it was later taken up (with the scholastic theories that it now implies) in more contemporary psychology.
Definitions
There is controversy over how to define intelligence. Scholars describe its constituent abilities in various ways, and differ in the degree to which they conceive of intelligence as quantifiable.
A consensus report called Intelligence: Knowns and Unknowns, published in 1995 by the Board of Scientific Affairs of the American Psychological Association, states:
Psychologists and learning researchers also have suggested definitions of intelligence such as the following:
Human
Human intelligence is the intellectual power of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness. Intelligence enables humans to remember descriptions of things and use those descriptions in future behaviors. It gives humans the cognitive abilities to learn, form concepts, understand, and reason, including the capacities to recognize patterns, innovate, plan, solve problems, and employ language to communicate. These cognitive abilities can be organized into frameworks like fluid vs. crystallized and the Unified Cattell-Horn-Carroll model, which contains abilities like fluid reasoning, perceptual speed, verbal abilities, and others.
Intelligence is different from learning. Learning refers to the act of retaining facts and information or abilities and being able to recall them for future use. Intelligence, on the other hand, is the cognitive ability of someone to perform these and other processes.
Intelligence quotient (IQ)
There have been various attempts to quantify intelligence via psychometric testing. Prominent among these are the various Intelligence Quotient (IQ) tests, which were first developed in the early 20th century to screen children for intellectual disability. Over time, IQ tests became more pervasive, being used to screen immigrants, military recruits, and job applicants. As the tests became more popular, belief that IQ tests measure a fundamental and unchanging attribute that all humans possess became widespread.
An influential theory that promoted the idea that IQ measures a fundamental quality possessed by every person is the theory of General Intelligence, or g factor. The g factor is a construct that summarizes the correlations observed between an individual's scores on a range of cognitive tests.
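As a loose illustration only (not from the source, and a simplification of the factor-analytic methods psychometricians actually use), a "general factor" can be sketched as the first principal component of a test battery's correlation matrix; the correlation values below are invented for the example:

```python
import numpy as np

# Invented correlation matrix for four hypothetical cognitive tests,
# all positively correlated (the "positive manifold").
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.3],
              [0.4, 0.4, 0.3, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]              # first principal component: a crude "g"
explained = eigvals[-1] / eigvals.sum()  # share of total variance it summarizes
print(g_loadings, round(float(explained), 2))
```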
Today, most psychologists agree that IQ measures at least some aspects of human intelligence, particularly the ability to thrive in an academic context. However, many psychologists question the validity of IQ tests as a measure of intelligence as a whole.
There is debate about the heritability of IQ, that is, what proportion of differences in IQ test performance between individuals are explained by genetic or environmental factors. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.
Emotional
Emotional intelligence is thought to be the ability to convey emotions to others in an understandable way as well as to read the emotions of others accurately. Some theories imply that heightened emotional intelligence could also lead to faster generation and processing of emotions, in addition to greater accuracy. In addition, higher emotional intelligence is thought to help us manage emotions, which benefits our problem-solving skills. Emotional intelligence is important to our mental health and has ties to social intelligence.
Social
Social intelligence is the ability to understand the social cues and motivations of others and oneself in social situations. It is thought to be distinct from other types of intelligence, but has relations to emotional intelligence. Social intelligence has coincided with other studies that focus on how we make judgements of others, the accuracy with which we do so, and why people would be viewed as having positive or negative social character. There is debate as to whether these studies and social intelligence come from the same theories or whether there is a distinction between them; they are generally thought to belong to two different schools of thought.
Moral
Moral intelligence is the capacity to understand right from wrong and to behave based on the values that are believed to be right. It is considered a distinct form of intelligence, independent of both emotional and cognitive intelligence.
Book smart and street smart
Concepts of "book smarts" and "street smart" are contrasting views based on the premise that some people have knowledge gained through academic study, but may lack the experience to sensibly apply that knowledge, while others have knowledge gained through practical experience, but may lack accurate information usually gained through study by which to effectively apply that knowledge. Artificial intelligence researcher Hector Levesque has noted that:
Nonhuman animal
Although humans have been the primary focus of intelligence researchers, scientists have also attempted to investigate animal intelligence, or more broadly, animal cognition. These researchers are interested in studying both mental ability in a particular species, and comparing abilities between species. They study various measures of problem solving, as well as numerical and verbal reasoning abilities. Some challenges include defining intelligence so it has the same meaning across species, and operationalizing a measure that accurately compares mental ability across species and contexts.
Wolfgang Köhler's research on the intelligence of apes is an example of research in this area, as is Stanley Coren's book, The Intelligence of Dogs. Non-human animals particularly noted and studied for their intelligence include chimpanzees, bonobos (notably the language-using Kanzi) and other great apes, dolphins, elephants and to some extent parrots, rats and ravens.
Cephalopod intelligence provides an important comparative study. Cephalopods appear to exhibit characteristics of significant intelligence, yet their nervous systems differ radically from those of backboned animals. Vertebrates such as mammals, birds, reptiles and fish have shown a fairly high degree of intellect that varies according to each species. The same is true with arthropods.
g factor in non-humans
Evidence of a general factor of intelligence has been observed in non-human animals. First described in humans, the g factor has since been identified in a number of non-human species.
Cognitive ability and intelligence cannot be measured using the same, largely verbally dependent, scales developed for humans. Instead, intelligence is measured using a variety of interactive and observational tools focusing on innovation, habit reversal, social learning, and responses to novelty. Studies have shown that g is responsible for 47% of the individual variance in cognitive ability measures in primates and between 55% and 60% of the variance in mice (Locurto, Locurto). These values are similar to the accepted variance in IQ explained by g in humans (40–50%).
Plant
It has been argued that plants should also be classified as intelligent based on their ability to sense and model external and internal environments and adjust their morphology, physiology and phenotype accordingly to ensure self-preservation and reproduction.
A counter-argument is that intelligence is commonly understood to involve the creation and use of persistent memories, as opposed to computation that does not involve learning. If this is accepted as definitive of intelligence, then it includes the artificial intelligence of robots capable of "machine learning", but excludes those purely autonomic sense-reaction responses that can be observed in many plants. Plants are not limited to automated sensory-motor responses, however; they are capable of discriminating positive and negative experiences and of "learning" (registering memories) from their past experiences. They are also capable of communication, accurately computing their circumstances, using sophisticated cost–benefit analysis and taking tightly controlled actions to mitigate and control diverse environmental stressors.
Artificial
Scholars studying artificial intelligence have proposed definitions of intelligence that include the intelligence demonstrated by machines. Some of these definitions are meant to be general enough to encompass human and other animal intelligence as well. An intelligent agent can be defined as a system that perceives its environment and takes actions which maximize its chances of success. Kaplan and Haenlein define artificial intelligence as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation". Progress in artificial intelligence can be demonstrated in benchmarks ranging from games to practical tasks such as protein folding. Existing AI lags humans in terms of general intelligence, which is sometimes defined as the "capacity to learn how to carry out a huge range of tasks".
Mathematician Olle Häggström defines intelligence in terms of "optimization power", an agent's capacity for efficient cross-domain optimization of the world according to the agent's preferences, or more simply the ability to "steer the future into regions of possibility ranked high in a preference ordering". In this optimization framework, Deep Blue has the power to "steer a chessboard's future into a subspace of possibility which it labels as 'winning', despite attempts by Garry Kasparov to steer the future elsewhere." Hutter and Legg, after surveying the literature, define intelligence as "an agent's ability to achieve goals in a wide range of environments". While cognitive ability is sometimes measured as a one-dimensional parameter, it could also be represented as a "hypersurface in a multidimensional space" to compare systems that are good at different intellectual tasks. Some skeptics believe that there is no meaningful way to define intelligence, aside from "just pointing to ourselves".
| Biology and health sciences | Psychology | null |
8521120 | https://en.wikipedia.org/wiki/Winter%20solstice | Winter solstice | The winter solstice, or hibernal solstice, occurs when either of Earth's poles reaches its maximum tilt away from the Sun. This happens twice yearly, once in each hemisphere (Northern and Southern). For that hemisphere, the winter solstice is the day with the shortest period of daylight and longest night of the year, and when the Sun is at its lowest daily maximum elevation in the sky. Each polar region experiences continuous darkness or twilight around its winter solstice. The opposite event is the summer solstice.
The winter solstice occurs during the hemisphere's winter. In the Northern Hemisphere, this is the December solstice (December 21 or 22) and in the Southern Hemisphere, this is the June solstice (June 20 or 21). Although the winter solstice itself lasts only a moment, the term also refers to the day on which it occurs. Traditionally, in many temperate regions, the winter solstice is seen as the middle of winter; indeed, "midwinter" is another name for the winter solstice, although it carries other meanings as well. Other names are the "extreme of winter" or the "shortest day".
Since prehistory, the winter solstice has been a significant time of year in many cultures and has been marked by festivals and rites. This is because it is the point when the shortening of daylight hours is reversed and the daytime begins to lengthen again. In parts of Europe it was seen as the symbolic death and rebirth of the Sun. Some ancient monuments such as Newgrange, Stonehenge, and Cahokia Woodhenge are aligned with the sunrise or sunset on the winter solstice.
History and cultural significance
There is evidence that the winter solstice was deemed an important moment of the annual cycle for some cultures as far back as the Neolithic (New Stone Age). Astronomical events were often used to guide farming activities, such as the mating of animals, the sowing of crops and the monitoring of winter reserves of food. Livestock were slaughtered so they would not have to be fed during the winter, so it was almost the only time of year when a plentiful supply of fresh meat was available.
The winter solstice is the reversal of the Sun's apparent ebbing in the sky; the daytime stops becoming shorter and begins to lengthen again. In parts of ancient Europe, this was symbolized as the death and rebirth of the Sun, or of a Sun god.
Neolithic Europe
Some important Neolithic and early Bronze Age archaeological sites in Europe are associated with the winter solstice, such as Stonehenge in England and Newgrange in Ireland. The primary axes of both of these monuments seem to have been carefully aligned on a sight-line pointing to the winter solstice sunrise (Newgrange) and the winter solstice sunset (Stonehenge). It is significant that at Stonehenge the Great Trilithon was oriented outwards from the middle of the monument, i.e. its smooth flat face was turned towards the midwinter sunset.
Ancient Egypt
Several ancient Egyptian temples are aligned with the winter solstice sunrise, including the Temple of Amun-Ra at Karnak, the chapel of Ra-Horakhty at Abu Simbel, and the Mortuary temple of Hatshepsut at Luxor.
Plutarch wrote in the Moralia (first century AD) that the Egyptians believed the goddess Isis gave birth to Harpocrates (Horus the Child) at the winter solstice. Macrobius wrote in the fourth century that the Sun appears small at the winter solstice, and on this shortest day, the Egyptians brought a child Sun god out of a shrine. In his Panarion, also from the fourth century, Epiphanius of Salamis wrote that the winter solstice was celebrated on 25 December in Alexandria as the Kikellia. Epiphanius says that thirteen days after the solstice, on 5–6 January, they celebrated the birth of Aion, son of the virgin goddess Kore. At the temple of Kore (the Koreion) in Alexandria, an all-night vigil was held, and at dawn an idol of the child god was brought out of an underground chapel. This idol was carried around the temple seven times, accompanied by music, hymns and revelry.
Ancient Roman world
In the ancient Roman calendar, December 25 was the date of the winter solstice. Marcus Terentius Varro wrote in the first century BC that this was regarded as the middle of winter. In the same century, Ovid wrote in the Fasti that the winter solstice is the first day of the "new Sun". The Calendar of Antiochus of Athens, second century AD, marks it as the "birthday of the Sun". In AD 274, the emperor Aurelian made this the date of the festival Dies Natalis Solis Invicti, the birthday of Sol Invictus or the 'Invincible Sun'. Gary Forsythe, Professor of Ancient History, says "This celebration would have formed a welcome addition to the seven-day period of the Saturnalia (December 17–23), Rome's most joyous holiday season since Republican times, characterized by parties, banquets, and exchanges of gifts".
Liturgical historians generally accept that the winter solstice had some influence on the choice of December 25 as the date of Christmas. A widely held theory is that the Church chose it as Christ's birthday specifically to appropriate the Roman festival of the sun god's birthday. According to C. Philipp E. Nothaft, a professor at Trinity College Dublin, though this "is nowadays used as the default explanation for the choice of 25 December as Christ's birthday, few advocates of this theory seem to be aware of how paltry the available evidence actually is".
Germanic
Discussing the Heruli, the Greek historian Procopius wrote in the sixth century that the people of Scandinavia (which he calls Thule) held their greatest festival shortly after the winter solstice, to celebrate the return of daylight.
In Anglo-Saxon England the winter solstice was generally deemed to be December 25, and in Old English, midwinter could mean both the winter solstice and Christmas. In the eighth century, Bede wrote that the pagan Anglo-Saxons had celebrated the festival Mōdraniht ('Mothers' Night') at the winter solstice, which marked the start of the Anglo-Saxon year.
The North Germanic peoples celebrated a winter holiday called Yule. The Heimskringla, written in the 13th century by the Icelander Snorri Sturluson, describes a Yule feast hosted by the Norwegian king Haakon the Good (c. 920–961). According to Snorri, the Christian Haakon had moved Yule from "midwinter" and aligned it with the Christian Christmas celebration. Historically, this has made some scholars believe that Yule originally was a sun festival on the winter solstice. Modern scholars generally do not believe this, as midwinter in medieval Iceland was a date about four weeks after the solstice. During the Christianisation of the Germanic peoples, Yule was incorporated into the Christmas celebrations and the term and its cognates remain used to refer to Christmas in modern Northern European languages such as Swedish.
Albanian
Albanian traditional festivities around the winter solstice celebrate the return of the Sun (Dielli) for summer and the lengthening of the days. The Albanian traditional rites of the winter solstice period are pagan and very ancient. The Albanologist Johann Georg von Hahn (1811–1869) reported that Christian clergy, during his time and before, vigorously fought the pagan rites practiced by Albanians to celebrate this festivity, but without success.
The old rites of this festivity were accompanied by collective fires (zjarre) based on the house, kinship or neighborhood, a practice performed in order to give strength to the Sun according to the old beliefs. The rites related to the cult of vegetation, which expressed the desire for increased production in agriculture and animal husbandry, were accompanied by animal sacrifices to the fire, lighting pine trees at night, luck divination tests with crackling in the fire or with coins in ritual bread, making and consuming ritual foods, performing various magical ritualistic actions in livestock, fields, vineyards and orchards, and so on.
Nata e Buzmit, "Yule log's night", is celebrated between December 22 and January 6. Buzmi is a ritualistic piece of wood (or several pieces of wood) that is put to burn in the fire (zjarri) of the hearth (vatër) on the night of a winter celebration that falls after the return of the Sun for summer (after the winter solstice), sometimes on the night of Kërshëndella on December 24 (Christmas Eve), sometimes on the night of kolendra, or sometimes on New Year's Day or on any other occasion around the same period, a tradition that is originally related to the cult of the Sun.
East Asian
In East Asia, the winter solstice has been celebrated as one of the Twenty-four Solar Terms, called Dongzhi (冬至) in Chinese. In Japan, in order not to catch cold in the winter, there is a custom of soaking in a yuzu hot bath (柚子湯, yuzuyu).
Indian
Makara Sankranti, also known as Makara Sankrānti (Sanskrit: मकर संक्रांति) or Maghi, is a festival day in the Hindu calendar, in reference to deity Surya (sun). It is observed each year in January. It marks the first day of Sun's transit into Makara (Capricorn), marking the end of the month with the winter solstice and the start of longer days.
Iranian
Iranian people celebrate the night of the Northern Hemisphere's winter solstice as "Yalda night", which is known to be the "longest and darkest night of the year". The Yalda night celebration, or as some call it "Shabe Chelleh" ("the 40th night"), is one of the oldest Iranian traditions, present in Persian culture from ancient times. On this night the family gathers together, usually at the house of the eldest, and celebrates by eating, drinking and reciting poetry (especially Hafez). Nuts, pomegranates and watermelons are particularly served during this festival.
Judaic
An Aggadic legend found in tractate Avodah Zarah 8a puts forth the talmudic hypothesis that Adam first established the tradition of fasting before the winter solstice, and rejoicing afterward, which festival later developed into the Roman Saturnalia and Kalendae.
Observation
Although the instant of the solstice can be calculated, direct observation of the moment by visual perception is elusive. The Sun moves too slowly or appears to stand still (the meaning of "solstice"). However, by use of astronomical data tracking, the precise timing of its occurrence is now public knowledge. The precise instant of the solstice cannot be directly detected (by definition, people cannot observe that an object has stopped moving until it is later observed that it has not moved further from the preceding spot, or that it has moved in the opposite direction). To be precise to a single day, observers must be able to view a change in azimuth or elevation less than or equal to about 1/60 of the angular diameter of the Sun. Observing that it occurred within a two-day period is easier, requiring an observation precision of only about 1/16 of the angular diameter of the Sun. Thus, many observations are of the day of the solstice rather than the instant. This is often done by observing sunrise and sunset or using an astronomically aligned instrument that allows a ray of light to be cast on a certain point around that time. The earliest sunset and latest sunrise dates differ from winter solstice, however, and these depend on latitude, due to the variation in the solar day throughout the year caused by the Earth's elliptical orbit (see earliest and latest sunrise and sunset).
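To put the quoted precision in rough perspective, the following back-of-the-envelope sketch (my own illustration, using a simple cosine model of solar declination rather than a proper ephemeris, and an approximate 0.53° solar diameter) estimates how little the Sun's noon altitude changes one day after the solstice:

```python
import math

TILT = 23.44          # Earth's axial tilt, degrees
YEAR = 365.25         # days
SUN_DIAMETER = 0.53   # approximate apparent solar diameter, degrees

def declination(days_from_solstice: float) -> float:
    """Crude solar declination (degrees) near the December solstice."""
    return -TILT * math.cos(2 * math.pi * days_from_solstice / YEAR)

delta = abs(declination(1) - declination(0))   # change in noon altitude after 1 day
print(f"change after one day: {delta * 3600:.0f} arcseconds")
print(f"1/60 of the solar diameter: {SUN_DIAMETER / 60 * 3600:.0f} arcseconds")
# Both come out in the tens of arcseconds, which is why naked-eye observers
# can at best pin down the day, not the instant, of the solstice.
```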
Holidays celebrated on the winter solstice
Alban Arthan (Welsh)
Blue Christmas (Western Christian)
Brumalia (Ancient Rome)
Dongzhi Festival (East Asia)
Inti Raymi (Inca)
Koliada and Korochun (Slavic)
Midwinter Day (Antarctica)
Sanghamitta Day (Theravada Buddhism)
Shabe Yalda (Iran)
Shalako (Zuni)
Uttarayana (India)
We Tripantu (Mapuche)
Cæpporsæ (Ossetia)
Willkakuti (Aymara)
Yaldā (Western and Central Asia)
Yule in the Northern Hemisphere (Germanic)
Ziemassvētki (ancient Latvia)
Other related festivals
Cejna Êzî/Êzîd (Yazidi): Feast of Êzîd – Celebrated on the last Friday before winter solstice
Saturnalia (Ancient Rome): Celebrated shortly before winter solstice
Saint Lucy's Day (Christian): Formerly coincided with the winter solstice; now celebrated on December 13
Cold Food Festival (Korea, Greater China): 105 days after winter solstice
Makar Sankranti (India): Harvest Festival – Marks the end of the cold months and start of the new month with longer days.
Winter at Tantora Festival (Saudi Arabia): Cultural festival marking the beginning of the winter harvest season.
Saint John's Eve (Southern Hemisphere)
| Physical sciences | Celestial sphere: General | Astronomy |
8526433 | https://en.wikipedia.org/wiki/Substrate%20%28chemistry%29 | Substrate (chemistry) | In chemistry, the term substrate is highly context-dependent. Broadly speaking, it can refer either to a chemical species being observed in a chemical reaction, or to a surface on which other chemical reactions or microscopy are performed.
In the former sense, a reagent is added to the substrate to generate a product through a chemical reaction. The term is used in a similar sense in synthetic and organic chemistry, where the substrate is the chemical of interest that is being modified. In biochemistry, an enzyme substrate is the material upon which an enzyme acts. When referring to Le Chatelier's principle, the substrate is the reagent whose concentration is changed.
Spontaneous reaction
S → P
Where S is substrate and P is product.
Catalysed reaction
S + C → P + C
Where S is substrate, P is product and C is catalyst.
In the latter sense, it may refer to a surface on which other chemical reactions are performed or play a supporting role in a variety of spectroscopic and microscopic techniques, as discussed in the first few subsections below.
Microscopy
In three of the most common nano-scale microscopy techniques, atomic force microscopy (AFM), scanning tunneling microscopy (STM), and transmission electron microscopy (TEM), a substrate is required for sample mounting. Substrates are often thin and relatively free of chemical features or defects. Typically, silver, gold, or silicon wafers are used due to their ease of manufacturing and lack of interference in the microscopy data. Samples are deposited onto the substrate in fine layers, where the substrate acts as a solid support of reliable thickness and malleability. Smoothness of the substrate is especially important for these types of microscopy because they are sensitive to very small changes in sample height.
Various other substrates are used in specific cases to accommodate a wide variety of samples. Thermally-insulating substrates are required for AFM of graphite flakes for instance, and conductive substrates are required for TEM. In some contexts, the word substrate can be used to refer to the sample itself, rather than the solid support on which it is placed.
Spectroscopy
Various spectroscopic techniques, such as powder diffraction, also require samples to be mounted on substrates. This type of diffraction, which involves directing high-powered X-rays at powder samples to deduce crystal structures, is often performed with an amorphous substrate so that the substrate does not interfere with the resulting data collection. Silicon substrates are also commonly used because of their cost-effectiveness and relatively little data interference in X-ray collection.
Single-crystal substrates are also useful in powder diffraction because their diffraction pattern can be distinguished from that of the sample of interest by phase.
Atomic layer deposition
In atomic layer deposition, the substrate acts as an initial surface on which reagents can combine to precisely build up chemical structures. A wide variety of substrates are used depending on the reaction of interest, but they are usually chosen to bind the reagents with some affinity so that the material sticks to the substrate.
The substrate is exposed to different reagents sequentially and washed in between to remove excess. A substrate is critical in this technique because the first layer needs a place to bind to such that it is not lost when exposed to the second or third set of reagents.
Biochemistry
In biochemistry, the substrate is a molecule upon which an enzyme acts. Enzymes catalyze chemical reactions involving the substrate(s). In the case of a single substrate, the substrate binds to the enzyme's active site, and an enzyme-substrate complex is formed. The substrate is transformed into one or more products, which are then released from the active site. The active site is then free to accept another substrate molecule. In the case of more than one substrate, these may bind in a particular order to the active site, before reacting together to produce products. A substrate is called 'chromogenic' if it gives rise to a coloured product when acted on by an enzyme. In histological enzyme localization studies, the colored product of enzyme action can be viewed under a microscope, in thin sections of biological tissues. Similarly, a substrate is called 'fluorogenic' if it gives rise to a fluorescent product when acted on by an enzyme.
For example, curd formation (rennet coagulation) is a reaction that occurs upon adding the enzyme rennin to milk. In this reaction, the substrate is a milk protein (e.g., casein) and the enzyme is rennin. The products are two polypeptides that have been formed by the cleavage of the larger peptide substrate. Another example is the chemical decomposition of hydrogen peroxide carried out by the enzyme catalase. As enzymes are catalysts, they are not changed by the reactions they carry out. The substrate(s), however, is/are converted to product(s). Here, hydrogen peroxide is converted to water and oxygen gas.
E + S ⇌ ES ⇌ EP ⇌ E + P
Where E is enzyme, S is substrate, P is product, and ES and EP are the enzyme–substrate and enzyme–product complexes
While the first (binding) and third (unbinding) steps are, in general, reversible, the middle step may be irreversible (as in the rennin and catalase reactions just mentioned) or reversible (e.g. many reactions in the glycolysis metabolic pathway).
By increasing the substrate concentration, the rate of reaction will increase due to the likelihood that the number of enzyme-substrate complexes will increase; this occurs until the enzyme concentration becomes the limiting factor.
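This saturation behavior is commonly summarized by the Michaelis–Menten equation, a standard kinetic model not derived in the text above:

v = Vmax[S] / (Km + [S])

Here v is the reaction rate, [S] the substrate concentration, Vmax the maximum rate at saturating substrate, and Km the substrate concentration at which v = Vmax/2. As [S] grows large, v approaches Vmax, the point at which the enzyme concentration has become the limiting factor.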
Substrate promiscuity
Although enzymes are typically highly specific, some are able to perform catalysis on more than one substrate, a property termed enzyme promiscuity. An enzyme may have many native substrates and broad specificity (e.g. oxidation by cytochrome p450s) or it may have a single native substrate with a set of similar non-native substrates that it can catalyse at some lower rate. The substrates that a given enzyme may react with in vitro, in a laboratory setting, may not necessarily reflect the physiological, endogenous substrates of the enzyme's reactions in vivo. That is to say that enzymes do not necessarily perform all the reactions in the body that may be possible in the laboratory. For example, while fatty acid amide hydrolase (FAAH) can hydrolyze the endocannabinoids 2-arachidonoylglycerol (2-AG) and anandamide at comparable rates in vitro, genetic or pharmacological disruption of FAAH elevates anandamide but not 2-AG, suggesting that 2-AG is not an endogenous, in vivo substrate for FAAH. In another example, the N-acyl taurines (NATs) are observed to increase dramatically in FAAH-disrupted animals, but are actually poor in vitro FAAH substrates.
Sensitivity
Sensitive substrates, also known as sensitive index substrates, are drugs that demonstrate an increase in AUC of ≥5-fold with strong index inhibitors of a given metabolic pathway in clinical drug-drug interaction (DDI) studies.
Moderate sensitive substrates are drugs that demonstrate an increase in AUC of ≥2 to <5-fold with strong index inhibitors of a given metabolic pathway in clinical DDI studies.
Interaction between substrates
Metabolism by the same cytochrome P450 isozyme can result in several clinically significant drug-drug interactions.
| Physical sciences | Reaction | Chemistry |
6554225 | https://en.wikipedia.org/wiki/Eris%20%28dwarf%20planet%29 | Eris (dwarf planet) | Eris (minor-planet designation: 136199 Eris) is the most massive and second-largest known dwarf planet in the Solar System. It is a trans-Neptunian object (TNO) in the scattered disk and has a high-eccentricity orbit. Eris was discovered in January 2005 by a Palomar Observatory–based team led by Mike Brown and verified later that year. It was named in September 2006 after the Greco-Roman goddess of strife and discord. Eris is the ninth-most massive known object orbiting the Sun and the sixteenth-most massive overall in the Solar System (counting moons). It is also the largest known object in the Solar System that has not been visited by a spacecraft. Eris has been measured at in diameter; its mass is 0.28% that of the Earth and 27% greater than that of Pluto, although Pluto is slightly larger by volume. Both Eris and Pluto have a surface area that is comparable to that of Russia or South America.
Eris has one large known moon, Dysnomia. In February 2016, Eris's distance from the Sun was , more than three times that of Neptune or Pluto. With the exception of long-period comets, Eris and Dysnomia were the most distant known natural objects in the Solar System until the discovery of and in 2018.
Because Eris appeared to be larger than Pluto, NASA initially described it as the Solar System's tenth planet. This, along with the prospect of other objects of similar size being discovered in the future, motivated the International Astronomical Union (IAU) to define the term planet for the first time. Under the IAU definition approved on August 24, 2006, Eris, Pluto and Ceres are "dwarf planets", reducing the number of known planets in the Solar System to eight, the same as before Pluto's discovery in 1930. Observations of a stellar occultation by Eris in 2010 showed that it was slightly smaller than Pluto, which was measured by New Horizons as having a mean diameter of in July 2015.
Discovery
Eris was discovered by the team of Mike Brown, Chad Trujillo, and David Rabinowitz on January 5, 2005, from images taken on October 21, 2003. The discovery was announced on July 29, 2005, the same day as and two days after , due in part to events that would later lead to controversy about Haumea. The search team had been systematically scanning for large outer Solar System bodies for several years, and had been involved in the discovery of several other large TNOs, including the dwarf planets , , and .
Routine observations were taken by the team on October 21, 2003, using the 1.2 m Samuel Oschin Schmidt telescope at Palomar Observatory, California, but the image of Eris was not discovered at that point due to its very slow motion across the sky: The team's automatic image-searching software excluded all objects moving at less than 1.5 arcseconds per hour to reduce the number of false positives returned. When Sedna was discovered in 2003, it was moving at 1.75 arcsec/h, and in light of that the team reanalyzed their old data with a lower limit on the angular motion, sorting through the previously excluded images by eye. In January 2005, the re-analysis revealed Eris's slow orbital motion against the background stars.
Follow-up observations were then carried out to make a preliminary determination of Eris's orbit, which allowed the object's distance to be estimated. The team had planned to delay announcing their discoveries of the bright objects Eris and Makemake until further observations and calculations were complete, but announced them both on July 29 when the discovery of another large TNO they had been tracking—Haumea—was controversially announced on July 27 by a different team in Spain.
Precovery images of Eris have been identified back to September 3, 1954.
More observations released in October 2005 revealed that Eris has a moon, later named Dysnomia. Observations of Dysnomia's orbit permitted scientists to determine the mass of Eris, which in June 2007 was calculated to be , greater than Pluto's.
Name
Eris is named after the Greek goddess Eris (Greek ), a personification of strife and discord. The name was proposed by the Caltech team on September 6, 2006, and it was assigned on September 13, 2006, following an unusually long period in which the object was known by the provisional designation , which was granted automatically by the IAU under their naming protocols for minor planets.
The name Eris has two competing pronunciations, with a "long" or with a "short" e, analogous to the two competing pronunciations of the word era. Perhaps the more common form in English, used among others by Brown and his students, is with disyllabic laxing and a short e. However, the classical English pronunciation of the goddess is with a long e.
The Greek and Latin oblique stem of the name is Erid-, as can be seen in Italian Eride and Russian Эрида Erida, so the adjective in English is Eridian .
Xena
Due to uncertainty over whether the object would be classified as a planet or a minor planet, because varying nomenclature procedures apply to these classes of objects, the decision on what to name the object had to wait until after the August 24, 2006, IAU ruling. For a period of time, the object became known to the wider public as Xena. "Xena" was an informal name used internally by the discovery team, inspired by the title character of the television series Xena: Warrior Princess. The discovery team had reportedly saved the nickname "Xena" for the first body they discovered that was larger than Pluto. According to Brown,
Brown said in an interview that the naming process was stalled:
Choosing an official name
According to science writer Govert Schilling, Brown initially wanted to call the object "Lila", after a concept in Hindu mythology that described the cosmos as the outcome of a game played by Brahman. The name would be pronounced the same as "Lilah", the name of Brown's newborn daughter. Brown was mindful of not making the name public before it had been officially accepted. He had done so with Sedna a year previously and had been heavily criticized. However, no objection was raised to the Sedna name other than the breach of protocol, and no competing names were suggested.
He listed the address of his personal web page announcing the discovery as /~mbrown/planetlila and, in the chaos following the controversy over the discovery of Haumea, forgot to change it. Rather than needlessly anger more of his fellow astronomers, he simply said that the webpage had been named for his daughter and dropped "Lila" from consideration.
Brown had also considered Persephone, the wife of the god Pluto, as a name for the object. The name had been used several times for planets in science fiction and was popular with the public, having handily won a poll conducted by New Scientist magazine. ("Xena", despite only being a nickname, came fourth.) This choice was not possible because there was already a minor planet with that name, 399 Persephone, and Eris was being named as a minor planet.
The discovery team proposed Eris on September 6, 2006. Brown decided that, because the object had been considered a planet for so long, it deserved a name from Greek or Roman mythology like the other planets. The asteroids had taken the vast majority of Graeco-Roman names. Eris, whom Brown described as his favorite goddess, had fortunately escaped inclusion. "Eris caused strife and discord by causing quarrels among people," said Brown in 2006, "and that's what this one has done too." The name was accepted by the IAU on September 13, 2006.
Although the usage of planetary symbols is generally discouraged in astronomy, NASA has used the Hand of Eris, (U+2BF0), for Eris. This symbol was taken from Discordianism, a religion concerned with the goddess Eris.
The Sternberg Astronomical Institute at Moscow State University has used (U+24C0), presumably either the "all rights reversed" symbol of the Principia Discordia or a simplification of the Apple of Discord inscribed with the Greek word Kallisti, , which had been suggested as a symbol for the dwarf planet on the Discordian discussion board that eventually settled on .
Most astrologers use the Hand of Eris, though other symbols are occasionally seen, such as (U+2BF1).
Classification
Eris is a trans-Neptunian dwarf planet. Its orbital characteristics more specifically categorize it as a scattered-disk object (SDO), or a TNO that has been "scattered" from the Kuiper belt into more-distant and unusual orbits following gravitational interactions with Neptune as the Solar System was forming. Although its high orbital inclination is unusual among the known SDOs, theoretical models suggest that objects that were originally near the inner edge of the Kuiper belt were scattered into orbits with higher inclinations than objects from the outer belt.
Because Eris was initially thought to be larger than Pluto, it was described as the "tenth planet" by NASA and in media reports of its discovery. In response to the uncertainty over its status, and because of ongoing debate over whether Pluto should be classified as a planet, the IAU delegated a group of astronomers to develop a sufficiently precise definition of the term planet to decide the issue. This was announced as the IAU's Definition of a Planet in the Solar System, adopted on August 24, 2006. At this time, both Eris and Pluto were classified as dwarf planets, a category distinct from the new definition of planet. Brown has since stated his approval of this classification. The IAU subsequently added Eris to its Minor Planet Catalogue, designating it (136199) Eris.
Orbit
Eris has an orbital period of 559 years. Its maximum possible distance from the Sun (aphelion) is 97.5 AU, and its closest (perihelion) is 38 AU. As the time of perihelion is defined at the epoch chosen using an unperturbed two-body solution, the further the epoch is from the date of perihelion, the less accurate the result. Numerical integration is required to predict the time of perihelion accurately. Numerical integration by JPL Horizons shows that Eris came to perihelion around 1699, to aphelion around 1977, and will return to perihelion around December 2257. Unlike those of the eight planets, whose orbits all lie roughly in the same plane as the Earth's, Eris's orbit is highly inclined: it is tilted at an angle of about 44 degrees to the ecliptic. When discovered, Eris and its moon were the most distant known objects in the Solar System, apart from long-period comets and space probes. It retained this distinction until the discovery of in 2018.
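These figures are mutually consistent. As a quick check with Kepler's third law (P² = a³ for a body orbiting the Sun, with the period P in years and the semi-major axis a in AU), the semi-major axis implied by the aphelion and perihelion above is a ≈ (97.5 + 38)/2 ≈ 67.8 AU, giving

P = 67.8^(3/2) ≈ 558 years,

in agreement with the stated orbital period of about 559 years.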
As of 2008, there were approximately forty known TNOs, most notably , and , that were closer to the Sun than Eris, even though their semimajor axes are larger than that of Eris (67.8 AU).
The Eridian orbit is highly eccentric, and brings Eris to within 37.9 AU of the Sun, a typical perihelion for scattered objects. This is within the orbit of Pluto, but still safe from direct interaction with Neptune (~37 AU). Pluto, on the other hand, like other plutinos, follows a less inclined and less eccentric orbit and, protected by orbital resonance, can cross Neptune's orbit. In about 800 years, Eris will for a time be closer to the Sun than Pluto.
As of 2007, Eris had an apparent magnitude of 18.7, making it bright enough to be detectable by some amateur telescopes. A telescope with a CCD can detect Eris under favorable conditions. The reason it had not been noticed earlier is its steep orbital inclination; searches for large outer Solar System objects tend to concentrate on the ecliptic plane, where most bodies are found.
Because of the high inclination of its orbit, Eris passes through only a few constellations of the traditional Zodiac; it is now in the constellation Cetus. It was in Sculptor from 1876 until 1929 and Phoenix from roughly 1840 until 1875. In 2036, it will enter Pisces and stay there until 2065, when it will enter Aries. It will then move into the northern sky, entering Perseus in 2128 and Camelopardalis (where it will reach its northernmost declination) in 2173.
Size, mass and density
In November 2010, Eris was the subject of one of the most distant stellar occultations yet observed from Earth. Preliminary data from this event cast doubt on previous size estimates. The teams announced their final results from the occultation in October 2011, with an estimated diameter of .
This makes Eris a little smaller than Pluto by area and diameter, which is across, although Eris is more massive. It also indicates a geometric albedo of 0.96. It is speculated that the high albedo is due to the surface ices being replenished because of temperature fluctuations as Eris's eccentric orbit takes it closer and farther from the Sun.
The mass of Eris can be calculated with much greater precision than its size. Based on the accepted value for Dysnomia's period at the time—15.774 days—Eris is 27% more massive than Pluto. Using the 2011 occultation results, Eris has a density of , substantially denser than Pluto, and thus it must be composed largely of rocky materials.
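As a rough sketch of that density calculation (assuming the commonly published occultation diameter of about 2,326 km and a mass of about 1.66×10²² kg; both numbers are blank in the text above and are supplied here only for illustration): with r ≈ 1.163×10⁶ m, the volume is

V = (4/3)πr³ ≈ 6.6×10¹⁸ m³, so ρ = m/V ≈ 2.5×10³ kg/m³,

that is, about 2.5 g/cm³, well above Pluto's roughly 1.85 g/cm³ and consistent with a largely rocky composition.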
Models of internal heating via radioactive decay suggest that Eris could have a subsurface ocean of liquid water at the mantle–core boundary. Tidal heating of Eris by its moon Dysnomia may additionally contribute to the preservation of its possible subsurface ocean. More research concluded that Eris, Pluto and Makemake could harbor active subsurface oceans and show active geothermal activity.
In July 2015, after nearly a decade of Eris being thought to be the ninth-largest known object to directly orbit the Sun, close-up imagery from the New Horizons mission determined the volume of Pluto to be slightly larger than that of Eris. Eris is now understood to be the tenth-largest known object to directly orbit the Sun by volume, but remains the ninth-largest by mass.
Surface and atmosphere
The discovery team followed up their initial identification of Eris with spectroscopic observations made at the 8 m Gemini North Telescope in Hawaii on January 25, 2005. Infrared light from the object revealed the presence of methane ice, indicating that the surface may be similar to that of Pluto, which at the time was the only TNO known to have surface methane, and of Neptune's moon Triton, which also has methane on its surface. In 2022, near-infrared spectroscopy of Eris by the James Webb Space Telescope (JWST) revealed the presence of deuterated methane ice on its surface, at abundances lower than those in Jupiter-family comets like 67P/Churyumov–Gerasimenko. Eris's comparatively low deuterium abundance suggests that its methane is not primordial and instead may have been produced by subsurface geochemical processes. Substantial quantities of nitrogen ice on Eris were also detected by the JWST, presumed to have originated from subsurface processes similar to those behind Eris's likely non-primordial methane. The abundance of nitrogen ice on Eris is estimated to be one-third that of methane by volume.
Unlike the somewhat reddish and variegated surfaces of Pluto and Triton, the surface of Eris appears almost white and uniform. Pluto's reddish color is thought to be due to deposits of tholins on its surface, and where these deposits darken the surface, the lower albedo leads to higher temperatures and the evaporation of methane deposits. In contrast, Eris is far enough from the Sun that methane can condense onto its surface even where the albedo is low. The condensation of methane uniformly over the surface reduces any albedo contrasts and would cover up any deposits of red tholins. This methane sublimation and condensation cycle could produce bladed terrain on Eris, similar to that on Pluto. Alternatively, Eris's surface could be refreshed through convection, driven by radiogenic heating, of a global methane and nitrogen ice glacier, similar to Pluto's Sputnik Planitia. Spectroscopic observations by the JWST support the idea that Eris's surface is continually refreshing, as no signs of ethane, a byproduct of radiolyzed methane, were detected on Eris's surface.
Due to the distant and eccentric orbit of Eris, its surface temperature is estimated to vary from about . Even though Eris can be up to three times farther from the Sun than Pluto, it approaches close enough that some of the ices on the surface might warm enough to sublime to form an atmosphere. Because methane and nitrogen are both highly volatile, their presence shows either that Eris has always resided in the distant reaches of the Solar System, where it is cold enough for methane and nitrogen ice to persist, or that the celestial body has an internal source to replenish gas that escapes from its atmosphere. This contrasts with observations of another discovered TNO, , which reveal the presence of water ice but not methane.
Rotation
Eris displays very little variation in brightness as it rotates due to its uniform surface, making measurement of its rotation period difficult. Precise long-term monitoring of Eris's brightness indicates that it is tidally locked to its moon Dysnomia, with a rotation period synchronous with the moon's orbital period of 15.78 Earth days. Dysnomia is also tidally locked to Eris, which makes the Eris–Dysnomia system the second known case of double-synchronous rotation, after Pluto and Charon. Previous measurements of Eris's rotation period obtained highly uncertain values ranging from tens of hours to several days due to insufficient long-term coverage of Eris's rotation. The axial tilt of Eris has not been measured, but it can be reasonably assumed that it is the same as Dysnomia's orbital inclination, which would be about 78 degrees with respect to the ecliptic. If this were the case, most of Eris's northern hemisphere would be illuminated by sunlight, with 30% of the hemisphere experiencing constant illumination in 2018.
Satellite
In 2005, the adaptive optics team at the Keck telescopes in Hawaii carried out observations of the four brightest TNOs (Pluto, Makemake, Haumea, and Eris), using the newly commissioned laser guide star adaptive optics system. Images taken on September 10 revealed a moon in orbit around Eris. In keeping with the "Xena" nickname already in use for Eris, Brown's team nicknamed the moon "Gabrielle", after the television warrior princess's sidekick. When Eris received its official name from the IAU, the moon received the name Dysnomia, after the Greek goddess of lawlessness who was Eris's daughter. Brown says he picked it for similarity to his wife's name, Diane. The name also retains an oblique reference to Eris's old informal name Xena, portrayed on television by Lucy Lawless, though the connection was unintentional.
Exploration
Eris was observed from afar by the outbound New Horizons spacecraft in May 2020, as part of its extended mission following its successful Pluto flyby in 2015. Although Eris was farther from New Horizons (112 AU) than it was from Earth (96 AU), the spacecraft's unique vantage point inside the Kuiper belt permitted observations of Eris at high phase angles that are otherwise unobtainable from Earth, enabling the determination of the light scattering properties and phase curve behavior of the Eridian surface.
In the 2010s, there were multiple studies of follow-on missions to explore the Kuiper belt, in which Eris was evaluated as a candidate target. It was calculated that a flyby mission to Eris would take 24.66 years using a Jupiter gravity assist, based on launch dates of April 3, 2032, or April 7, 2044. Eris would be 92.03 or 90.19 AU from the Sun when the spacecraft arrives.
| Physical sciences | Solar System | null |
3680464 | https://en.wikipedia.org/wiki/Chemical%20law | Chemical law | Chemical laws are those laws of nature relevant to chemistry. The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
The laws of stoichiometry, that is, the gravimetric proportions by which chemical elements participate in chemical reactions, elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation.
Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers (e.g., 1:2 O:H in water), although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. Such compounds are known as non-stoichiometric compounds.
The third stoichiometric law is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element.
More modern laws of chemistry define the relationship between energy and transformations.
In equilibrium, molecules exist in mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules—the lower the intrinsic energy, the more abundant the molecule.
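A standard quantitative form of this statement, not given in the text above, relates the equilibrium ratio of two interconverting species A ⇌ B to their energy difference:

[B]/[A] = e^(−ΔG°/RT)

where ΔG° is the standard free-energy difference between B and A, R the gas constant, and T the temperature; the lower-energy species is exponentially the more abundant.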
Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs.
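The dependence of rate on barrier height is likewise commonly captured by the Arrhenius equation (again a standard result rather than one stated above):

k = A·e^(−Ea/RT)

where k is the rate constant, Ea the activation energy (the height of the barrier), A a pre-exponential factor, and RT the available thermal energy. The higher Ea, the smaller k and the slower the transformation; an external energy source such as heating, which raises T, accelerates it.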
There is a transition state (TS) that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this state looks most similar to whichever of the product or starting material has intrinsic energy closest to that of the energy barrier. Stabilizing this transition state through chemical interaction is one way to achieve catalysis.
All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible.
| Physical sciences | Basics: General | Chemistry |
866988 | https://en.wikipedia.org/wiki/Refractive%20error | Refractive error | Refractive error is a problem with focusing light accurately on the retina due to the shape of the eye and/or cornea. The most common types of refractive error are near-sightedness, far-sightedness, astigmatism, and presbyopia. Near-sightedness results in far away objects being blurry, far-sightedness and presbyopia result in close objects being blurry, and astigmatism causes objects to appear stretched out or blurry. Other symptoms may include double vision, headaches, and eye strain.
Near-sightedness is due to the eyeball being too long, far-sightedness to the eyeball being too short, astigmatism to the cornea being the wrong shape, and presbyopia to aging of the lens of the eye such that it cannot change shape sufficiently. Some refractive errors occur more often among those whose parents are affected. Diagnosis is by eye examination.
Refractive errors are corrected with eyeglasses, contact lenses, or surgery. Eyeglasses are the easiest and safest method of correction. Contact lenses can provide a wider field of vision; however they are associated with a risk of infection. Refractive surgery permanently changes the shape of the cornea.
The number of people globally with refractive errors has been estimated at one to two billion. Rates vary between regions of the world with about 25% of Europeans and 80% of Asians affected. Near-sightedness is the most common disorder. Rates among adults are between 15 and 49% while rates among children are between 1.2 and 42%. Far-sightedness more commonly affects young children and the elderly. Presbyopia affects most people over the age of 35.
The number of people with refractive errors that have not been corrected was estimated at 660 million (10 per 100 people) in 2013. Of these 9.5 million were blind due to the refractive error. It is one of the most common causes of vision loss along with cataracts, macular degeneration, and vitamin A deficiency.
Classification
Refractive error – sometimes called "ametropia" – is when the refractive power of an eye does not match the length of the eye, so the image is focused away from the central retina, instead of directly on it.
Types of refractive error include myopia, hyperopia, presbyopia, and astigmatism.
Myopia or Nearsightedness: When the refractive power is too strong for the length of the eyeball, this is called myopia or nearsightedness. People with myopia typically have blurry vision when viewing distant objects because the eye refracts more than necessary. Myopia can be corrected with a concave lens, which diverges light rays before they reach the cornea (see the worked example after this list).
Hyperopia or Farsightedness: When the refractive power is too weak for the length of the eyeball, one has hyperopia or farsightedness. People with hyperopia have blurry vision when viewing near objects because the eye is unable to focus the light sufficiently. This can be corrected with convex lenses, which cause light rays to converge prior to hitting the cornea.
Presbyopia: A decline in the flexibility of the lens, typically due to age. The individual experiences difficulty with near vision, often relieved by reading glasses, bifocals, or progressive lenses.
Astigmatism is when the refractive power of the eye is not uniform across the surface of the cornea because of asymmetry. In other words, the eye focuses light more strongly in one direction than another, leading to distortion of the image.
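As the worked example promised above (a sketch with illustrative numbers, ignoring the small distance between a spectacle lens and the eye): suppose a myopic eye has a far point, the greatest distance at which it sees clearly, of 0.5 m. The corrective lens must form a virtual image of distant objects at that far point, so its focal length is f = −0.5 m and its power is

P = 1/f = 1/(−0.5 m) = −2.00 D,

a concave (minus) lens, matching the sign convention described under "Sphere" below.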
Children are typically born hyperopic and shift toward emmetropia or myopia as their eyes lengthen through childhood.
Other terms include anisometropia, when the two eyes have unequal refractive power, and aniseikonia, when the magnification between the eyes differs.
Refractive errors are typically measured using three numbers: sphere, cylinder, and axis (a sample prescription follows this list).
Sphere: This number denotes the strength of the lens needed to correct vision. A "–" indicates nearsightedness while a "+" indicates farsightedness. Higher numbers indicate more power in either direction.
Cylinder: This number denotes the amount of astigmatism, if any.
Axis: This number notes the direction of the astigmatism and is written in degrees between 1 and 180.
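For illustration, a hypothetical prescription (the values are invented for this example) might read:

Sphere −2.50, Cylinder −0.75, Axis 180

that is, 2.50 dioptres of nearsightedness correction together with 0.75 dioptres of astigmatism correction oriented along the 180-degree meridian.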
An eye that has no refractive error when viewing distant objects is said to have emmetropia or be emmetropic meaning the eye is in a state in which it can focus parallel rays of light (light from distant objects) on the retina, without using any accommodation. A distant object, in this case, is defined as an object located beyond 6 meters, or 20 feet, from the eye, since the light from those objects arrives as essentially parallel rays when considering the limitations of human perception.
Risk factors
Genetics
There is evidence to suggest genetic predilection for refractive error. Individuals that have parents with certain refractive errors are more likely to have similar refractive errors.
The Online Mendelian Inheritance in Man (OMIM) database has listed 261 genetic disorders in which myopia is one of the symptoms. Myopia may be present in heritable connective tissue disorders such as: Knobloch syndrome (OMIM 267750); Marfan syndrome (OMIM 154700); and Stickler syndrome (type 1, OMIM 108300; type 2, OMIM 604841). Myopia has also been reported in X-linked disorders caused by mutations in loci involved in retinal photoreceptor function (NYX, RP2, MYP1) such as: autosomal recessive congenital stationary night blindness (CSNB; OMIM 310500); retinitis pigmentosa 2 (RP2; OMIM 312600); Bornholm eye disease (OMIM 310460).
Many genes that have been associated with refractive error are clustered into common biological networks involved in connective tissue growth and extracellular matrix organization. Although a large number of chromosomal localisations have been associated with myopia (MYP1-MYP17), few specific genes have been identified.
Environmental
In studies of the genetic predisposition of refractive error, there is a correlation between environmental factors and the risk of developing myopia. Myopia has been observed in individuals with visually intensive occupations. Reading has also been found to be a predictor of myopia in children. It has been reported that children with myopia spent significantly more time reading than non-myopic children, who spent more time playing outdoors. Additionally, focusing on near objects for long periods of time, such as when reading, looking at close screens, or writing, has been associated with myopia. Socioeconomic status and higher levels of education have also been reported to be risk factors for myopia. Blepharoptosis can also induce refractive errors.
Normal refraction
In order to see a clear image, the eye must focus rays of light on to the light-sensing part of the eye – the retina, which is located in the back of the eye. This focusing – called refraction – is performed mainly by the cornea and the lens, which are located at the front of the eye, the anterior segment.
When an eye focuses light correctly on to the retina when viewing distant objects, this is called emmetropia or being emmetropic. This means that the refractive power of the eye matches what is needed to focus parallel rays of light onto the retina. A distant object is defined as an object located beyond 6 meters (20 feet) from the eye.
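A short calculation shows why 6 meters is treated as effectively infinite (using the standard vergence relation, which is not given above): light from a point at distance d has vergence V = 1/d, so at d = 6 m,

V = 1/6 m⁻¹ ≈ 0.17 D,

a negligible demand compared with the roughly 60 D total refractive power of a typical human eye.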
When an object is located close to the eye, the rays of light from this object no longer approach the eye parallel to each other. Consequently, the eye must increase its refractive power to bring those rays of light together on the retina. This is called accommodation, and is accomplished by the eye thickening the lens.
Diagnosis
Blurry vision may result from any number of conditions not necessarily related to refractive errors. The diagnosis of a refractive error is usually confirmed by an eye care professional during an eye examination, using a large number of lenses of different optical powers and often a retinoscope (a procedure called retinoscopy) to measure the error objectively: the person views a distant spot while the clinician changes the lenses held before the person's eye and watches the pattern of reflection of a small light shone on the eye. Following that "objective refraction", the clinician typically shows the person lenses of progressively higher or lower powers in a process known as subjective refraction.
Cycloplegic agents are frequently used to determine the amount of refractive error more accurately, particularly in children.
An automated refractor is an instrument that is sometimes used in place of retinoscopy to objectively estimate a person's refractive error. Shack–Hartmann wavefront sensor and its inverse can also be used to characterize eye aberrations in a higher level of resolution and accuracy.
Vision defects caused by refractive error can be distinguished from other problems using a pinhole occluder, which will improve vision only in the case of refractive error.
Screening
When refractive errors in children are not treated, the child may be at risk of developing amblyopia, where vision may remain permanently blurry. Because young children typically do not complain of blurry vision, the American Academy of Pediatrics recommends that children have yearly vision screening starting at three years old so that unknown refractive errors or other ophthalmic conditions can be found and treated if deemed necessary by healthcare professionals.
Management
The management of refractive error is carried out after diagnosis by optometrists, ophthalmologists, refractionists, or ophthalmic medical practitioners.
How refractive errors are treated or managed depends upon the amount and severity of the condition. Those who possess mild amounts of refractive error may elect to leave the condition uncorrected, particularly if the person is asymptomatic. For those who are symptomatic, glasses, contact lenses, refractive surgery, or a combination are typically used.
Glasses
Glasses are the most effective way of correcting refractive errors. However, the availability and affordability of eyeglasses can present a difficulty for people in many low-income settings of the world. Glasses also pose a challenge for the children to whom they are prescribed, due to children's tendency not to wear them as consistently as recommended.
As mentioned earlier, refractive errors result from improper focusing of light on the retina. Eyeglasses act as an additional lens for the eye, bending the light to bring it into focus on the retina. Depending on their design, eyeglasses serve many functions.
Reading glasses: general over-the-counter glasses which can be worn for easier reading, especially for the defective near vision due to aging called presbyopia.
Single vision prescription lenses: these correct only one form of defective vision, either far-sightedness or near-sightedness.
Multifocal lenses: these can correct defective vision at multiple focal distances, for example near vision as well as far vision. They are particularly beneficial for presbyopia.
Contact lenses
Alternatively, many people choose to wear contact lenses. One style is hard contact lenses, which can mold the cornea toward a desired shape. Another style, soft contact lenses, are made of silicone or hydrogel. Depending on the duration they are designed for, they may be worn daily or for an extended period of time, such as for weeks.
There are a number of complications associated with contact lenses, typically with the ones that are used daily.
If redness, itching, and difficulty in vision develops, the use of the lenses should be stopped immediately and the consultation of ophthalmologists may be sought.
Surgery
Laser in situ keratomileusis (LASIK) and photo-refractive keratectomy (PRK) are popular procedures, while use of laser epithelial keratomileusis (LASEK) is increasing. Other surgical treatments for severe myopia include insertion of implants after clear lens extraction (refractive lens exchange). Full-thickness corneal graft may be a final option for patients with advanced keratoconus, although currently there is interest in new techniques that involve collagen crosslinking. As with any surgical procedure, complications may arise post-operatively. Post-operative monitoring is normally undertaken by the specialist ophthalmic surgical clinic and optometry services. Patients are usually informed pre-operatively about what to expect and where to go if they suspect complications. Any patient reporting pain and redness after surgery should be referred urgently to their ophthalmic surgeon.
Medical treatment
Atropine is believed to slow the progression of near-sightedness and is administered in combination with multifocal lenses. These treatments, however, need further research.
Prevention
Strategies being studied to slow worsening include adjusting working conditions, increasing the time children spend outdoors, and special types of contact lenses. In children special contact lenses appear to slow worsening of nearsightedness.
A number of questionnaires exist to determine quality of life impact of refractive errors and their correction.
Epidemiology
It is estimated that at least 2 billion people in the world have refractive errors. The number of people globally with refractive errors that have not been corrected was estimated at 660 million (10 per 100 people) in 2013.
Refractive errors are the most common cause of visual impairment and the second most common cause of visual loss. The burden of refractive error is now assessed in DALYs (disability-adjusted life years), which showed an 8% increase from 1990 to 2019.
The number of people globally with significant refractive errors has been estimated at one to two billion. Rates vary between regions of the world with about 25% of Europeans and 80% of Asians affected. Near-sightedness is one of the most prevalent disorders of the eye. Rates among adults are between 15 and 49% while rates among children are between 1.2 and 42%. Far-sightedness more commonly affects young children, whose eyes have yet to grow to their full length, and the elderly, who have lost the ability to compensate with their accommodation system. Presbyopia affects most people over the age of 35, and nearly 100% of people by the ages of 55–65. Uncorrected refractive error is responsible for visual impairment and disability for many people worldwide. It is one of the most common causes of vision loss along with cataracts, macular degeneration, and vitamin A deficiency.
Cost
The yearly cost of correcting refractive errors is estimated at 3.9 to 7.2 billion dollars in the United States.
| Biology and health sciences | Disabilities | Health |
867089 | https://en.wikipedia.org/wiki/Galinstan | Galinstan | Galinstan is a brand name for an alloy composed of gallium, indium, and tin which melts at and is thus liquid at room temperature. In scientific literature, galinstan is also used to denote the eutectic alloy of gallium, indium, and tin, which melts at around . The commercial product Galinstan is not a eutectic alloy, but a near eutectic alloy. Additionally, it likely has added flux to improve flowability, to reduce melting temperature, and to reduce surface tension.
Eutectic galinstan is composed of 68.5% Ga, 21.5% In, and 10.0% Sn (by weight).
Due to the low toxicity and low reactivity of its component metals, galinstan has replaced the toxic liquid mercury or the reactive alloy NaK in many applications.
Name
The name "galinstan" is a portmanteau of gallium, indium, and stannum (Latin for "tin"). The brand name "Galinstan" is a registered trademark of the German company .
Physical properties
Boiling point: >1300 °C
Vapour pressure: <10⁻⁸ Torr (at 500 °C)
Solubility: insoluble in water or organic solvents
Viscosity: 0.0024 Pa·s (at 20 °C)
Thermal conductivity: 16.5 W·m⁻¹·K⁻¹
Electrical conductivity: 3.46×10⁶ S/m (at 20 °C)
Surface tension: σ = 0.535–0.718 N/m (at 20 °C, dependent on producer)
In the presence of oxygen at concentrations above 1 ppm, the surface of bulk galinstan oxidizes to Ga2O3. Unlike mercury, galinstan tends to wet and adhere to many materials, including glass, due to its surface oxide. This can limit its use as a direct replacement material in some situations, but can also be exploited in others.
Uses
Galinstan can replace mercury in thermometers at moderate temperatures.
Galinstan has higher reflectivity and lower density than mercury. In astronomy, it can replace mercury in liquid-mirror telescopes.
Metals or alloys like galinstan that are liquid at room temperature are often used by overclockers and enthusiasts as a thermal interface for computer hardware cooling, where their higher thermal conductivity compared to thermal pastes and thermal epoxies can allow slightly higher clock speeds and CPU processing power, as achieved in demonstrations and competitive overclocking. Two examples are Thermal Grizzly Conductonaut and Coolaboratory Liquid Ultra, with thermal conductivities of 73 and 38.4 W·m⁻¹·K⁻¹ respectively. Unlike ordinary thermal compounds, which are easy to apply and present a low risk of damaging hardware, galinstan is electrically conductive and causes liquid metal embrittlement in many metals, including the aluminum commonly used in heatsinks. Despite these challenges, users who apply it successfully do report good results. In August 2020, Sony Interactive Entertainment patented a galinstan-based thermal interface solution suitable for mass production, for use in the PlayStation 5.
Galinstan is difficult to use for cooling fission-based nuclear reactors, because indium has a high absorption cross section for thermal neutrons, efficiently absorbing them and inhibiting the fission reaction. Conversely, it is being investigated as a possible coolant for fusion reactors. Its nonreactivity makes it safer than other liquid metals, such as lithium and mercury.
The wetting characteristics of galinstan can be utilized to fabricate conductive patterns, allowing it to be used as a liquid, deformable conductor in soft robotics and stretchable electronics. Galinstan can be used to replace wires, interconnects, and electrodes as well as the conductive element in inductor coils and dielectric composites for soft capacitors.
X-ray equipment
Extremely high-intensity X-ray sources may be obtained by using a liquid-metal galinstan anode, which emits 9.25 keV X-rays (the gallium K-alpha line), for X-ray phase microscopy of fixed tissue (such as mouse brain), with a focal spot of about 10 μm × 10 μm and 3-D voxels of about one cubic micrometer. The metal flows from a nozzle downward at a high speed, and the high-intensity electron beam is focused upon it. The rapid flow of metal carries current, but the physical flow prevents a great deal of anode heating (due to forced-convective heat removal), and the high boiling point of galinstan inhibits vaporization of the anode.
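As a short worked conversion (using the standard photon energy–wavelength relation λ = hc/E ≈ 1.24 keV·nm / E, not given above), the 9.25 keV gallium K-alpha line corresponds to

λ ≈ 1.24 / 9.25 ≈ 0.134 nm,

that is, about 1.34 Å, a wavelength well suited to high-resolution phase-contrast imaging.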
| Physical sciences | Specific alloys | Chemistry |
867542 | https://en.wikipedia.org/wiki/Topical%20medication | Topical medication | A topical medication is a medication that is applied to a particular place on or in the body. Most often topical medication means application to body surfaces such as the skin or mucous membranes to treat ailments via a large range of classes including creams, foams, gels, lotions, and ointments. Many topical medications are epicutaneous, meaning that they are applied directly to the skin. Topical medications may also be inhalational, such as asthma medications, or applied to the surface of tissues other than the skin, such as eye drops applied to the conjunctiva, or ear drops placed in the ear, or medications applied to the surface of a tooth. The word topical derives from Greek τοπικός topikos, "of a place".
Justification
Topical drug delivery is a route of administering drugs via the skin to provide topical therapeutic effects. As skin is one of the largest and most superficial organs in the human body, pharmacists utilise it to deliver various drugs. This system usually provides a local effect on a certain part of the body. In ancient times, people applied herbs to wounds to relieve inflammation or pain. The use of topical drug delivery is much broader now, ranging from smoking cessation to beauty purposes. Nowadays, numerous dosage forms can be used topically, including creams, ointments, lotions, patches, dusting powders and much more. This delivery system has many advantages: it avoids first-pass metabolism (which can increase a drug's bioavailability), it is convenient and easy to apply to a large area, it is easy to terminate the medication, and it avoids gastro-intestinal irritation. All of these can increase patient compliance. However, the system has several disadvantages: it can cause skin irritation, with symptoms such as rashes and itchiness, and only small molecules can pass through the skin, which limits the choice of drugs. Since skin is the main medium of topical drug delivery, its condition determines the rate of skin penetration and thus affects the pharmacokinetics of the drug; the temperature, pH value, and dryness of the skin all need to be considered. Some novel topical drugs on the market exploit this system as much as possible.
This localized system provides topical therapeutic effects via skin, eyes, nose and vagina to treat diseases. The most common usage is for local skin infection problems. Dermatological products have various formulations and range in consistency though the most popular dermal products are semisolid dosage forms to provide topical treatment.
Factors affecting topical drug absorption
Topical drug absorption depends on two major factors – biological and physicochemical properties.
The first factor concerns the effects of body structure on the drugs. The degradation of drugs can be affected by the site of application; some studies have found different percutaneous absorption patterns at different sites. Apart from the place of application, age also affects absorption, as skin structure changes with age: collagen decreases and blood capillary networks broaden with aging. These features alter the effectiveness with which both hydrophilic and lipophilic substances are absorbed into the stratum corneum underneath the surface of the skin. The integrity of the skin surface can also affect the permeability of drugs, for instance through the density of hair follicles and sweat glands, or when the surface is disrupted by inflammation or dehydration.
The other factor concerns the metabolism of medications in the skin. When a percutaneous drug is applied to the skin, it is gradually absorbed through it. Normally, as drugs are absorbed they are metabolised by various enzymes in the body, which lowers their amount. The exact amount delivered to the target site of action determines the potency and bioavailability of the drug. If the concentration is too low, the therapeutic effect is impeded; if the concentration is too high, drug toxicity may occur, causing side effects or even harm. For the topical route, degradation of drugs in the skin is very low compared to the liver: drugs are metabolised mainly by the cytochrome P450 family of enzymes, which is not very active in skin, so drugs that are actively metabolised by CYP450 can maintain high concentrations when applied to the skin. Besides enzyme action, the partition coefficient (K) determines the activity of topical drugs. The ability of drug molecules to pass through the skin layers also affects absorption: for transdermal activity, medicines with a higher K value are harder to release from the lipid layer of skin cells, and the trapped molecules cannot penetrate further into the skin. The drugs must reach target cells underneath the skin or diffuse into blood capillaries to exert their effect. Meanwhile, particle size affects this transdermal process: the smaller the drug molecule, the faster the rate of penetration. Polarity also affects the diffusion rate: a drug with a lower degree of ionization is less polar and therefore has a faster absorption rate.
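One common quantitative summary of these factors is the steady-state flux expression for passive permeation through skin (a standard sketch based on Fick's first law; the symbols are illustrative and not taken from the text above):

J = Kp·ΔC, with Kp = K·D/h

where J is the flux of drug across the skin, ΔC the drug concentration difference across it, Kp the permeability coefficient, K the partition coefficient into the stratum corneum, D the diffusion coefficient of the drug (smaller for larger molecules), and h the thickness of the barrier. The roles of the partition coefficient and of molecular size described above enter directly through Kp.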
Local versus systemic effect
The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local.
In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. Such medications are generally hydrophobic chemicals, such as steroid hormones. Specific types include transdermal patches which have become a popular means of administering some drugs for birth control, hormone replacement therapy, and prevention of motion sickness. One example of an antibiotic that may be applied topically is chloramphenicol.
If defined strictly as having a local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One poorly absorbable antibiotic is vancomycin, which is recommended by mouth as a treatment for severe Clostridioides difficile colitis.
Choice of base formulation
A medication's potency often changes with its base. For example, some topical steroids are classified one or two strengths higher when moving from a cream to an ointment. As a rule of thumb, an ointment base is more occlusive and will drive the medication into the skin more rapidly than a solution or cream base.
The manufacturer of each topical product has total control over the content of the base of a medication. Although containing the same active ingredients, one manufacturer's cream might be more acidic than the next, which could cause skin irritation or change its absorption rate. For example, a vaginal formulation of miconazole antifungal cream might irritate the skin less than an athlete's foot formulation of miconazole cream. These variations can, on occasion, result in different clinical outcomes, even though the active ingredient is the same. No comparative potency labeling exists to ensure equal efficacy between brands of topical steroids (the percentage of oil versus water dramatically affects the potency of a topical steroid). Studies have confirmed that the potency of some topical steroid products may differ according to manufacturer or brand. For example, in clinical studies the brand-name products Valisone cream and Kenalog cream have demonstrated significantly better vasoconstriction than some forms of these drugs produced by generic manufacturers. However, in a simple base like an ointment, much less variation between manufacturers is common.
In dermatology, the base of a topical medication is often as important as the medication itself, and it is extremely important that a medication be dispensed in the correct base before it is applied to the skin. A pharmacist should not substitute an ointment for a cream, or vice versa, as the potency of the medication can change. Some physicians use a thick ointment to replace the waterproof barrier of inflamed skin when treating eczema, and a cream might not accomplish the same clinical intention.
Formulations
There are many general classes, with no clear dividing line among similar formulations. As a result, what the manufacturer's marketing department chooses to list on the label of a topical medication might be completely different from what the form would normally be called.
Cream
A cream is an emulsion of oil and water in approximately equal proportions. It penetrates the stratum corneum, the outermost layer of the skin. Cream is thicker than lotion and maintains its shape when removed from its container. It tends to be moderately moisturizing. For topical steroid products, oil-in-water emulsions are common. Creams carry a significant risk of causing immunological sensitization due to preservatives, yet have a high rate of acceptance by patients. There is great variation in ingredients, composition, pH, and tolerance among generic brands.
Foam
Topical corticosteroid foams are suitable for treating a range of skin conditions that respond to corticosteroids. These foams are typically simple to apply, which can lead to better compliance and, in turn, better treatment results for patients who favor a more convenient and cleaner topical option. Foam is most often seen with topical steroids marketed for the scalp.
Gel
Gels are thicker than liquids. Gels are often a semisolid emulsion and sometimes use alcohol as a solvent for the active ingredient; some gels liquefy at body temperature. Gel tends to be cellulose cut with alcohol or acetone. Gels tend to be self-drying, tend to have greatly variable ingredients between brands, and carry a significant risk of inducing hypersensitivity due to fragrances and preservatives. Gel is useful for hairy areas and body folds. In applying gel one should avoid fissures in the skin, due to the stinging effect of the alcohol base. Gel enjoys a high rate of acceptance due to its cosmetic elegance.
Lotion
Lotions are similar to solutions but are thicker and tend to be more emollient. They are usually an oil mixed with water, and more often than not have less alcohol than a solution. Lotions can be drying if they contain a high amount of alcohol.
Ointment
An ointment is a homogeneous, viscous, semi-solid preparation, most commonly a greasy, thick water-in-oil emulsion (80% oil, 20% water) of high viscosity, intended for external application to the skin or mucous membranes. Ointments have a water number that defines the maximum amount of water they can contain. They are used as emollients or for applying active ingredients to the skin for protective, therapeutic, or prophylactic purposes, and where a degree of occlusion is desired.
Ointments are used topically on a variety of body surfaces. These include the skin and the mucous membranes of the eye (an eye ointment), chest, vulva, anus, and nose. An ointment may or may not be medicated.
Ointments are usually very moisturizing, and good for dry skin. They have a low risk of sensitization due to having few ingredients beyond the base oil or fat, and low irritation risk. There is typically little variability between brands of drugs. They are often disliked by patients due to greasiness.
The vehicle of an ointment is known as the ointment base. The choice of a base depends upon the clinical indication for the ointment. The different types of ointment bases are:
Absorption bases, e.g., beeswax and wool fat
Emulsifying bases, e.g., cetrimide and emulsifying wax
Hydrocarbon bases, e.g., ceresine, microcrystalline wax, hard paraffin, and soft paraffin
Vegetable oil bases, e.g., almond oil, coconut oil, olive oil, peanut oil, and sesame oil
Water-soluble bases, e.g., macrogols 200, 300, 400
The medicaments are dispersed in the base and are divided after penetrating the living cells of the skin.
The water number of an ointment is the maximum quantity of water that 100g of a base can contain at 20 °C.
Ointments are formulated using hydrophobic, hydrophilic, or water-emulsifying bases to provide preparations that are immiscible, miscible, or emulsifiable with skin secretions. They can also be derived from hydrocarbon (fatty), absorption, water-removable, or water-soluble bases.
Evaluation of ointments:
Drug content
Release of medicament from base
Medicament penetration
Consistency of the preparation
Absorption of medicament into blood stream
Irritant effect
Properties which affect choice of an ointment base are:
Stability
Penetrability
Solvent property
Irritant effects
Ease of application and removal
Methods of preparation of ointments:
Fusion: In this method the ingredients are melted together in descending order of their melting points and stirred to ensure homogeneity.
Trituration: In this method, finely subdivided insoluble medicaments are evenly distributed by grinding with a small amount of the base, followed by dilution with gradually increasing amounts of the base.
Paste
Paste combines three agents – oil, water, and powder. It is an ointment in which a powder is suspended.
Powder
Powder is either the pure drug by itself (talcum powder) or the drug mixed in a carrier such as corn starch or corn cob powder (Zeosorb AF, a miconazole powder). It can also be used as an inhaled topical (e.g., cocaine powder used in nasal surgery).
Shake lotion
A shake lotion is a mixture that separates into two or three parts over time. Frequently, an oil mixed with a water-based solution needs to be shaken into suspension before use and includes the instructions: "Shake well before use".
Solid
Medication may be placed in a solid form. Examples are deodorants, antiperspirants, astringents, and hemostatic agents. Some solids melt when they reach body temperature (e.g. rectal suppositories).
Sponge
Certain contraceptive methods rely on a sponge as a carrier of a liquid medicine. Lemon juice embedded in a sponge has been used as a primitive contraceptive in some cultures.
Tape
Cordran tape is an example of a topical steroid applied under occlusion by tape. This greatly increases the potency and absorption of the topical steroid and is used to treat inflammatory skin diseases.
Tincture
A tincture is a skin preparation that has a high percentage of alcohol. It would normally be used as a drug vehicle if drying of the area is desired.
Topical solution
Topical solutions can be marketed as drops, rinses, or sprays, are generally of low viscosity, and often use alcohol or water in the base. These are usually a powder dissolved in alcohol, water, and sometimes oil; although a solution that uses alcohol as a base ingredient, as in topical steroids, can cause drying of the skin. There is significant variability among brands, and some solutions may cause irritation, depending on the preservative(s) and fragrances used in the base.
Some examples of topical solutions are given below:
Aluminium acetate topical solution: This is colorless, with a faint acetous odour and sweetish taste. It is applied topically as an astringent after dilution with 10-40 parts of water. This is used in many types of dermatologic creams, lotions, and pastes. Commercial premeasured and packed tablets and powders are available for this preparation.
Povidone iodine topical solution: This is a chemical complex of iodine with polyvinylpyrrolidone. The agent is a polymer with an average molecular weight of 40,000. The povidone iodine contains 10% available iodine, slowly released when applied to skin. This preparation is employed topically as a surgical scrub and non irritating antiseptic solution; its effectiveness is directly attributed to the presence and release of iodine from the complex. Commercial product: Betadine solution.
Transdermal patch
Transdermal patches can be a very precise, time-released method of delivering a drug. Cutting a patch in half might affect the dose delivered. The release of the active component from a transdermal delivery system (patch) may be controlled by diffusion through an adhesive that covers the whole patch, by diffusion through a membrane with adhesive only on the patch rim, or by release from a polymer matrix. Cutting a patch might cause rapid dehydration of the base of the medicine and affect the rate of diffusion. The two control mechanisms produce distinct release profiles, as sketched below.
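As a rough sketch (not from the source, with hypothetical rate constants), the following compares the two release behaviours mentioned above: approximately zero-order release through a rate-controlling membrane versus matrix release, which is often modelled with the Higuchi square-root-of-time relationship.

```python
import math

def membrane_release(rate: float, t: float) -> float:
    """Cumulative drug released per unit area for a rate-controlling
    membrane: approximately zero-order, Q(t) = k0 * t."""
    return rate * t

def matrix_release(k_h: float, t: float) -> float:
    """Cumulative drug released for a polymer-matrix patch following
    the Higuchi model, Q(t) = kH * sqrt(t)."""
    return k_h * math.sqrt(t)

# Hypothetical rate constants, chosen so both patches deliver the same
# total amount after 24 h; the release *profiles* still differ markedly.
for hours in (1, 6, 12, 24):
    q_mem = membrane_release(rate=10.0, t=hours)            # ug/cm^2
    q_mat = matrix_release(k_h=240.0 / math.sqrt(24), t=hours)
    print(f"{hours:>2} h: membrane {q_mem:6.1f}  matrix {q_mat:6.1f}  (ug/cm^2)")
```

The matrix patch front-loads its delivery (fast early release, slowing over time), while the membrane patch delivers at a near-constant rate, which is one reason cutting or damaging a patch can change the dose profile.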
Vapor
Some medications are applied as an ointment or gel and reach the mucous membrane via vaporization. Examples are nasal topical decongestants and smelling salts.
Topical Drug Classification System (TCS)
The Topical Drug Classification System (TCS) was proposed by the FDA. It is modeled on the Biopharmaceutics Classification System (BCS) for oral immediate-release solid drug products, which has been used successfully for decades. Three aspects are assessed, yielding four classes in total. The three aspects are qualitative composition (Q1), quantitative composition (Q2), and similarity of the in vitro release (IVR) rate (Q3).
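As a schematic sketch only (the mapping below reflects the commonly described proposal; the actual regulatory criteria are more involved), the four TCS classes can be thought of as combinations of the Q1/Q2/Q3 outcomes when comparing a test product against a reference product:

```python
def tcs_class(q1_same: bool, q2_same: bool, q3_similar: bool) -> int:
    """Map Q1/Q2/Q3 comparison outcomes between a test and a reference
    topical product to a TCS class (schematic version of the proposed
    system; the real assessment is more detailed).

    Q1: qualitative sameness (same excipients)
    Q2: quantitative sameness (same amounts of excipients)
    Q3: similarity of the in vitro release (IVR) rate
    """
    if q1_same and q2_same:
        return 1 if q3_similar else 2
    return 3 if q3_similar else 4

print(tcs_class(True, True, True))    # Class 1: same composition, similar IVR
print(tcs_class(False, True, False))  # Class 4: different composition and IVR
```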
Advantages of topical drug delivery systems
In the early 1970s, the Alza Corporation, through its founder Alejandro Zaffaroni, filed the first US patents describing transdermal delivery systems for scopolamine, nitroglycerin, and nicotine. Applying medicines to body surfaces proved beneficial in several respects. Skin medicines can give a faster onset and a local effect, as the drug bypasses first-pass hepatic and intestinal metabolism. Dermal drugs also avoid oral-delivery limitations such as nausea, vomiting, and poor compliance due to unpalatable taste. Topical application is an easy, painless, and non-invasive way for patients to tackle skin infections. From a patient's perspective, applying a drug to the skin also provides a stable dosage in the blood, giving optimal bioavailability and therapeutic effect. In case of overdose or unwanted side effects, a patient can quickly stop delivery and limit toxicity simply by removing the patch or washing off the medicine.
Disadvantages of topical drug delivery systems
The site where a topical patch is applied may become irritated, itchy, or develop a rash. Hence, some topical drugs, including nicotine patches for smoking cessation, are advised to be applied to a different site each time to avoid continuous irritation of the skin. Also, since the drug needs to penetrate the skin, some drugs may not be able to pass through it; a portion of the drug is then "wasted" and its bioavailability decreases.
Challenges for designing topical dosage form
Skin penetration is the main challenge for any topical dosage form: the drug must penetrate the skin to enter the body and exert its effect. The drug follows Fick's first law of diffusion, one of the most common forms of which is

$$J = -D \frac{d\varphi}{dx}$$

where
$J$ is the diffusion flux,
$D$ is the diffusion coefficient, and
$d\varphi/dx$ is the concentration gradient.
The diffusion coefficient $D$ is described by the Stokes–Einstein equation:

$$D = \frac{RT}{6 \pi \eta r N_A}$$

where
$R$ is the gas constant,
$T$ is the temperature,
$\eta$ is the viscosity,
$r$ is the radius of the solute, and
$N_A$ is the Avogadro constant.
Assuming the concentration gradient is constant for a newly applied topical drug and the temperature is constant (normal body temperature, 37 °C), the viscosity of the vehicle and the radius of the drug molecule determine the diffusion flux: the higher the viscosity or the larger the radius, the lower the flux.
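A minimal numeric sketch (not from the source; the gradient, viscosity, and radius values are hypothetical) combining the two equations above:

```python
import math

R   = 8.314      # gas constant, J/(mol*K)
N_A = 6.022e23   # Avogadro constant, 1/mol
T   = 310.15     # normal body temperature, K (37 degC)

def stokes_einstein_D(eta: float, r: float, temp: float = T) -> float:
    """Diffusion coefficient D (m^2/s) via Stokes-Einstein:
    D = R*T / (6*pi*eta*r*N_A)
    eta : viscosity of the medium (Pa*s)
    r   : hydrodynamic radius of the solute (m)
    """
    return R * temp / (6 * math.pi * eta * r * N_A)

def fick_flux(D: float, dphi_dx: float) -> float:
    """Diffusion flux J (mol m^-2 s^-1) via Fick's first law: J = -D * dphi/dx."""
    return -D * dphi_dx

# Illustrative numbers: a small drug molecule (r ~ 0.5 nm) in a vehicle
# with water-like viscosity, and a fixed concentration gradient.
D = stokes_einstein_D(eta=1.0e-3, r=0.5e-9)
print(f"D = {D:.2e} m^2/s")
print(f"J = {fick_flux(D, dphi_dx=-1.0e6):.2e} mol m^-2 s^-1")

# Doubling the radius (or the viscosity) halves D, and with it the flux,
# matching the text's point about larger molecules and more viscous vehicles.
print(f"D(2r) = {stokes_einstein_D(eta=1.0e-3, r=1.0e-9):.2e} m^2/s")
```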
New developments
There are many factors for drug developers to consider in developing new topical formulations.
The first is the effect of the drug vehicle. The medium that carries a topical drug can affect the penetration and efficacy of the active ingredient. For example, the carrier can have a cooling, drying, emollient, or protective action to suit the conditions at the application site, such as a gel or lotion for hairy areas. Scientists also need to match the type of preparation to the type of lesion; for example, oily ointments should be avoided for acute weepy dermatitis. Chemists must further consider irritation and sensitization potential, and ensure that the topical preparation remains stable during storage and transport so it retains its efficacy. Another promising material is nanofiber-based dispersion, which improves the adhesion of active ingredients to the skin.
To enhance drug penetration into the skin, scientists use chemical, biochemical, physical, and supersaturation enhancement. Advanced Emulgel technology, a breakthrough in pain-killing topical drugs, modifies these properties to help the gel penetrate deep into the skin layer and strengthen the delivery of diclofenac to the site of pain, achieving a better therapeutic effect.