Lipid (https://en.wikipedia.org/wiki/Lipid)
Lipids are a broad group of organic compounds which include fats, waxes, sterols, fat-soluble vitamins (such as vitamins A, D, E and K), monoglycerides, diglycerides, phospholipids, and others. The functions of lipids include storing energy, signaling, and acting as structural components of cell membranes. Lipids have applications in the cosmetic and food industries, and in nanotechnology.
Lipids may be broadly defined as hydrophobic or amphiphilic small molecules; the amphiphilic nature of some lipids allows them to form structures such as vesicles, multilamellar/unilamellar liposomes, or membranes in an aqueous environment. Biological lipids originate entirely or in part from two distinct types of biochemical subunits or "building-blocks": ketoacyl and isoprene groups. Using this approach, lipids may be divided into eight categories: fatty acyls, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits).
Although the term "lipid" is sometimes used as a synonym for fats, fats are a subgroup of lipids called triglycerides. Lipids also encompass molecules such as fatty acids and their derivatives (including tri-, di-, monoglycerides, and phospholipids), as well as other sterol-containing metabolites such as cholesterol. Although humans and other mammals use various biosynthetic pathways both to break down and to synthesize lipids, some essential lipids cannot be made this way and must be obtained from the diet.
History
In 1815, Henri Braconnot classified lipids (graisses) in two categories, suifs (solid greases or tallow) and huiles (fluid oils). In 1823, Michel Eugène Chevreul developed a more detailed classification, including oils, greases, tallow, waxes, resins, balsams and volatile oils (or essential oils).
The first synthetic triglyceride was reported by Théophile-Jules Pelouze in 1844, when he produced tributyrin by treating butyric acid with glycerin in the presence of concentrated sulfuric acid. Several years later, Marcellin Berthelot, one of Pelouze's students, synthesized tristearin and tripalmitin by reaction of the analogous fatty acids with glycerin in the presence of gaseous hydrogen chloride at high temperature.
In 1827, William Prout recognized fat ("oily" alimentary matters), along with protein ("albuminous") and carbohydrate ("saccharine"), as an important nutrient for humans and animals.
For a century, chemists regarded "fats" as only simple lipids made of fatty acids and glycerol (glycerides), but new forms were described later. Théodore Gobley (1847) discovered phospholipids in mammalian brain and hen eggs, which he named "lecithins". Thudichum discovered in the human brain some phospholipids (cephalin), glycolipids (cerebroside) and sphingolipids (sphingomyelin).
The terms lipoid, lipin, lipide and lipid have been used with varied meanings from author to author. In 1912, Rosenbloom and Gies proposed the substitution of "lipoid" by "lipin". In 1920, Bloor introduced a new classification for "lipoids": simple lipoids (greases and waxes), compound lipoids (phospholipoids and glycolipoids), and the derived lipoids (fatty acids, alcohols, sterols).
The word lipide, which stems etymologically from Greek λίπος, lipos 'fat', was introduced in 1923 by the French pharmacologist Gabriel Bertrand. Bertrand included in the concept not only the traditional fats (glycerides), but also the "lipoids", with a complex constitution. The word lipide was unanimously approved by the international commission of the Société de Chimie Biologique during the plenary session on July 3, 1923. The word lipide was later anglicized as lipid because of its pronunciation (/ˈlɪpɪd/). In French, the suffix -ide, from Ancient Greek -ίδης (meaning 'son of' or 'descendant of'), is always pronounced (/id/).
In 1947, T. P. Hilditch defined "simple lipids" as greases and waxes (true waxes, sterols, alcohols).
Categories
Lipids have been classified into eight categories by the Lipid MAPS consortium as follows:
Fatty acyls
Fatty acyls, a generic term for describing fatty acids, their conjugates and derivatives, are a diverse group of molecules synthesized by chain-elongation of an acetyl-CoA primer with malonyl-CoA or methylmalonyl-CoA groups in a process called fatty acid synthesis. They are made of a hydrocarbon chain that terminates with a carboxylic acid group; this arrangement gives the molecule a polar, hydrophilic end and a nonpolar, hydrophobic end that is insoluble in water. The fatty acid structure is one of the most fundamental categories of biological lipids and is commonly used as a building-block of more structurally complex lipids. The carbon chain, typically between four and 24 carbons long, may be saturated or unsaturated, and may be attached to functional groups containing oxygen, halogens, nitrogen, and sulfur. If a fatty acid contains a double bond, there is the possibility of either cis or trans geometric isomerism, which significantly affects the molecule's configuration. Cis double bonds cause the fatty acid chain to bend, an effect that is compounded with more double bonds in the chain. The three double bonds in 18-carbon linolenic acid, the most abundant fatty-acyl chain of plant thylakoid membranes, render these membranes highly fluid despite low environmental temperatures, and also give linolenic acid dominating sharp peaks in high-resolution 13C NMR spectra of chloroplasts. This in turn plays an important role in the structure and function of cell membranes. Most naturally occurring fatty acids are of the cis configuration, although the trans form does exist in some natural and partially hydrogenated fats and oils.
Examples of biologically important fatty acids include the eicosanoids, derived primarily from arachidonic acid and eicosapentaenoic acid, that include prostaglandins, leukotrienes, and thromboxanes. Docosahexaenoic acid is also important in biological systems, particularly with respect to sight. Other major lipid classes in the fatty acid category are the fatty esters and fatty amides. Fatty esters include important biochemical intermediates such as wax esters, fatty acid thioester coenzyme A derivatives, fatty acid thioester ACP derivatives and fatty acid carnitines. The fatty amides include N-acyl ethanolamines, such as the cannabinoid neurotransmitter anandamide.
Glycerolipids
Glycerolipids are composed of mono-, di-, and tri-substituted glycerols, the best-known being the fatty acid triesters of glycerol, called triglycerides. The word "triacylglycerol" is sometimes used synonymously with "triglyceride". In these compounds, the three hydroxyl groups of glycerol are each esterified, typically by different fatty acids. Because they function as an energy store, these lipids comprise the bulk of storage fat in animal tissues. The hydrolysis of the ester bonds of triglycerides and the release of glycerol and fatty acids from adipose tissue are the initial steps in metabolizing fat.
Additional subclasses of glycerolipids are represented by glycosylglycerols, which are characterized by the presence of one or more sugar residues attached to glycerol via a glycosidic linkage. Examples of structures in this category are the digalactosyldiacylglycerols found in plant membranes and seminolipid from mammalian sperm cells.
Glycerophospholipids
Glycerophospholipids, usually referred to as phospholipids (though sphingomyelins are also classified as phospholipids), are ubiquitous in nature and are key components of the lipid bilayer of cells, as well as being involved in metabolism and cell signaling. Neural tissue (including the brain) contains relatively high amounts of glycerophospholipids, and alterations in their composition have been implicated in various neurological disorders. Glycerophospholipids may be subdivided into distinct classes, based on the nature of the polar headgroup at the sn-3 position of the glycerol backbone in eukaryotes and eubacteria, or the sn-1 position in the case of archaebacteria.
Examples of glycerophospholipids found in biological membranes are phosphatidylcholine (also known as PC, GPCho or lecithin), phosphatidylethanolamine (PE or GPEtn) and phosphatidylserine (PS or GPSer). In addition to serving as a primary component of cellular membranes and binding sites for intra- and intercellular proteins, some glycerophospholipids in eukaryotic cells, such as phosphatidylinositols and phosphatidic acids, are either precursors of, or are themselves, membrane-derived second messengers. Typically, one or both of the remaining hydroxyl groups of the glycerol backbone (the sn-1 and sn-2 positions) are acylated with long-chain fatty acids, but there are also alkyl-linked and 1Z-alkenyl-linked (plasmalogen) glycerophospholipids, as well as dialkylether variants in archaebacteria.
Sphingolipids
Sphingolipids are a complicated family of compounds that share a common structural feature, a sphingoid base backbone that is synthesized de novo from the amino acid serine and a long-chain fatty acyl CoA, then converted into ceramides, phosphosphingolipids, glycosphingolipids and other compounds. The major sphingoid base of mammals is commonly referred to as sphingosine. Ceramides (N-acyl-sphingoid bases) are a major subclass of sphingoid base derivatives with an amide-linked fatty acid. The fatty acids are typically saturated or mono-unsaturated with chain lengths from 16 to 26 carbon atoms.
The major phosphosphingolipids of mammals are sphingomyelins (ceramide phosphocholines), whereas insects contain mainly ceramide phosphoethanolamines and fungi have phytoceramide phosphoinositols and mannose-containing headgroups. The glycosphingolipids are a diverse family of molecules composed of one or more sugar residues linked via a glycosidic bond to the sphingoid base. Examples of these are the simple and complex glycosphingolipids such as cerebrosides and gangliosides.
Sterols
Sterols, such as cholesterol and its derivatives, are an important component of membrane lipids, along with the glycerophospholipids and sphingomyelins. Other examples of sterols are the bile acids and their conjugates, which in mammals are oxidized derivatives of cholesterol and are synthesized in the liver. The plant equivalents are the phytosterols, such as β-sitosterol, stigmasterol, and brassicasterol; the latter compound is also used as a biomarker for algal growth. The predominant sterol in fungal cell membranes is ergosterol.
Sterols are steroids in which one of the hydrogen atoms is substituted with a hydroxyl group, at position 3 in the carbon chain. They have in common with steroids the same fused four-ring core structure. Steroids have different biological roles as hormones and signaling molecules. The eighteen-carbon (C18) steroids include the estrogen family whereas the C19 steroids comprise the androgens such as testosterone and androsterone. The C21 subclass includes the progestogens as well as the glucocorticoids and mineralocorticoids. The secosteroids, comprising various forms of vitamin D, are characterized by cleavage of the B ring of the core structure.
Prenols
Prenol lipids are synthesized from the five-carbon-unit precursors isopentenyl diphosphate and dimethylallyl diphosphate, which are produced mainly via the mevalonic acid (MVA) pathway. The simple isoprenoids (linear alcohols, diphosphates, etc.) are formed by the successive addition of C5 units, and are classified according to the number of these terpene units. Structures containing more than 40 carbons are known as polyterpenes. Carotenoids are important simple isoprenoids that function as antioxidants and as precursors of vitamin A. Another biologically important class of molecules is exemplified by the quinones and hydroquinones, which contain an isoprenoid tail attached to a quinonoid core of non-isoprenoid origin. Vitamin E and vitamin K, as well as the ubiquinones, are examples of this class. Prokaryotes synthesize polyprenols (called bactoprenols) in which the terminal isoprenoid unit attached to oxygen remains unsaturated, whereas in animal polyprenols (dolichols) the terminal isoprenoid is reduced.
Saccharolipids
Saccharolipids are compounds in which fatty acids are linked to a sugar backbone, forming structures that are compatible with membrane bilayers. In the saccharolipids, a monosaccharide substitutes for the glycerol backbone present in glycerolipids and glycerophospholipids. The most familiar saccharolipids are the acylated glucosamine precursors of the lipid A component of the lipopolysaccharides in Gram-negative bacteria. Typical lipid A molecules are disaccharides of glucosamine, which are derivatized with as many as seven fatty-acyl chains. The minimal lipopolysaccharide required for growth in E. coli is Kdo2-Lipid A, a hexa-acylated disaccharide of glucosamine that is glycosylated with two 3-deoxy-D-manno-octulosonic acid (Kdo) residues.
Polyketides
Polyketides are synthesized by polymerization of acetyl and propionyl subunits by classic enzymes as well as iterative and multimodular enzymes that share mechanistic features with the fatty acid synthases. They comprise many secondary metabolites and natural products from animal, plant, bacterial, fungal and marine sources, and have great structural diversity. Many polyketides are cyclic molecules whose backbones are often further modified by glycosylation, methylation, hydroxylation, oxidation, or other processes. Many commonly used antimicrobial, antiparasitic, and anticancer agents are polyketides or polyketide derivatives, such as erythromycins, tetracyclines, avermectins, and antitumor epothilones.
Biological functions
Component of biological membranes
Eukaryotic cells feature compartmentalized membrane-bound organelles that carry out different biological functions. The glycerophospholipids are the main structural component of biological membranes, such as the cellular plasma membrane and the intracellular membranes of organelles; in animal cells, the plasma membrane physically separates the intracellular components from the extracellular environment. The glycerophospholipids are amphipathic molecules (containing both hydrophobic and hydrophilic regions) that contain a glycerol core linked to two fatty acid-derived "tails" by ester linkages and to one "head" group by a phosphate ester linkage. While glycerophospholipids are the major component of biological membranes, other non-glyceride lipid components such as sphingomyelin and sterols (mainly cholesterol in animal cell membranes) are also found in biological membranes. In plants and algae, the galactosyldiacylglycerols and sulfoquinovosyldiacylglycerol, which lack a phosphate group, are important components of the membranes of chloroplasts and related organelles and are among the most abundant lipids in photosynthetic tissues, including those of higher plants, algae and certain bacteria.
In plant thylakoid membranes, the largest lipid component is the non-bilayer-forming monogalactosyl diglyceride (MGDG), and phospholipids are only minor constituents; despite this unique lipid composition, chloroplast thylakoid membranes have been shown to contain a dynamic lipid-bilayer matrix, as revealed by magnetic resonance and electron microscopy studies.
A biological membrane is a form of lamellar phase lipid bilayer. The formation of lipid bilayers is an energetically preferred process when the glycerophospholipids described above are in an aqueous environment. This is known as the hydrophobic effect. In an aqueous system, the polar heads of lipids align towards the polar, aqueous environment, while the hydrophobic tails minimize their contact with water and tend to cluster together, forming a vesicle; depending on the concentration of the lipid, this biophysical interaction may result in the formation of micelles, liposomes, or lipid bilayers. Other aggregations are also observed and form part of the polymorphism of amphiphile (lipid) behavior. Phase behavior is an area of study within biophysics. Micelles and bilayers form in the polar medium by a process known as the hydrophobic effect. When dissolving a lipophilic or amphiphilic substance in a polar environment, the polar molecules (i.e., water in an aqueous solution) become more ordered around the dissolved lipophilic substance, since the polar molecules cannot form hydrogen bonds to the lipophilic areas of the amphiphile. So in an aqueous environment, the water molecules form an ordered "clathrate" cage around the dissolved lipophilic molecule.
The formation of lipids into protocell membranes represents a key step in models of abiogenesis, the origin of life.
Energy storage
Triglycerides, stored in adipose tissue, are a major form of energy storage both in animals and plants. They are a major source of energy in aerobic respiration. The complete oxidation of fatty acids releases about 38 kJ/g (9 kcal/g), compared with only 17 kJ/g (4 kcal/g) for the oxidative breakdown of carbohydrates and proteins. The adipocyte, or fat cell, is designed for continuous synthesis and breakdown of triglycerides in animals, with breakdown controlled mainly by the activation of hormone-sensitive lipase. Migratory birds that must fly long distances without eating use triglycerides to fuel their flights.
Signaling
Evidence has emerged showing that lipid signaling is a vital part of cell signaling. Lipid signaling may occur via activation of G protein-coupled or nuclear receptors, and members of several different lipid categories have been identified as signaling molecules and cellular messengers. These include sphingosine-1-phosphate, a sphingolipid derived from ceramide that is a potent messenger molecule involved in regulating calcium mobilization, cell growth, and apoptosis; diacylglycerol and the phosphatidylinositol phosphates (PIPs), involved in calcium-mediated activation of protein kinase C; the prostaglandins, which are one type of fatty-acid-derived eicosanoid involved in inflammation and immunity; the steroid hormones such as estrogen, testosterone and cortisol, which modulate a host of functions such as reproduction, metabolism and blood pressure; and the oxysterols such as 25-hydroxycholesterol that are liver X receptor agonists. Phosphatidylserine lipids are known to be involved in signaling for the phagocytosis of apoptotic cells or pieces of cells. They accomplish this by being exposed on the extracellular face of the cell membrane after the inactivation of flippases, which place them exclusively on the cytosolic side, and the activation of scramblases, which scramble the orientation of the phospholipids. After this occurs, other cells recognize the phosphatidylserines and phagocytose the cells or cell fragments exposing them.
Other functions
The "fat-soluble" vitamins (A, D, E and K) – which are isoprene-based lipids – are essential nutrients stored in the liver and fatty tissues, with a diverse range of functions. Acyl-carnitines are involved in the transport and metabolism of fatty acids in and out of mitochondria, where they undergo beta oxidation. Polyprenols and their phosphorylated derivatives also play important transport roles, in this case the transport of oligosaccharides across membranes. Polyprenol phosphate sugars and polyprenol diphosphate sugars function in extra-cytoplasmic glycosylation reactions, in extracellular polysaccharide biosynthesis (for instance, peptidoglycan polymerization in bacteria), and in eukaryotic protein N-glycosylation. Cardiolipins are a subclass of glycerophospholipids containing four acyl chains and three glycerol groups that are particularly abundant in the inner mitochondrial membrane. They are believed to activate enzymes involved with oxidative phosphorylation. Lipids also form the basis of steroid hormones.
Metabolism
The major dietary lipids for humans and other animals are animal and plant triglycerides, sterols, and membrane phospholipids. The process of lipid metabolism synthesizes and degrades the lipid stores and produces the structural and functional lipids characteristic of individual tissues.
Biosynthesis
In animals, when there is an oversupply of dietary carbohydrate, the excess carbohydrate is converted to triglycerides. This involves the synthesis of fatty acids from acetyl-CoA and the esterification of fatty acids in the production of triglycerides, a process called lipogenesis. Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acetyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional protein, while in plant plastids and bacteria, separate enzymes perform each step in the pathway. The fatty acids may subsequently be converted to triglycerides that are packaged in lipoproteins and secreted from the liver.
The synthesis of unsaturated fatty acids involves a desaturation reaction, whereby a double bond is introduced into the fatty acyl chain. For example, in humans, the desaturation of stearic acid by stearoyl-CoA desaturase-1 produces oleic acid. The doubly unsaturated fatty acid linoleic acid as well as the triply unsaturated α-linolenic acid cannot be synthesized in mammalian tissues, and are therefore essential fatty acids and must be obtained from the diet.
Triglyceride synthesis takes place in the endoplasmic reticulum by metabolic pathways in which acyl groups in fatty acyl-CoAs are transferred to the hydroxyl groups of glycerol-3-phosphate and diacylglycerol.
Terpenes and isoprenoids, including the carotenoids, are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is steroid biosynthesis. Here, the isoprene units are joined together to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other steroids such as cholesterol and ergosterol.
Degradation
Beta oxidation is the metabolic process by which fatty acids are broken down in the mitochondria or in peroxisomes to generate acetyl-CoA. For the most part, fatty acids are oxidized by a mechanism that is similar to, but not identical with, a reversal of the process of fatty acid synthesis. That is, two-carbon fragments are removed sequentially from the carboxyl end of the acid after steps of dehydrogenation, hydration, and oxidation to form a beta-keto acid, which is split by thiolysis. The acetyl-CoA is then ultimately converted into adenosine triphosphate (ATP), CO2, and H2O using the citric acid cycle and the electron transport chain. Hence the citric acid cycle can start at acetyl-CoA when fat is being broken down for energy if there is little or no glucose available. The energy yield of the complete oxidation of the fatty acid palmitate is 106 ATP. Unsaturated and odd-chain fatty acids require additional enzymatic steps for degradation.
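To see where the figure of 106 ATP for palmitate comes from, here is a rough bookkeeping sketch in Python. The per-molecule ATP equivalents used (about 10 per acetyl-CoA via the citric acid cycle and oxidative phosphorylation, 2.5 per NADH, 1.5 per FADH2, and 2 ATP equivalents spent on activation) are commonly cited modern estimates and are assumptions added here, not figures from this article.

```python
# Rough ATP bookkeeping for complete oxidation of palmitate (C16:0).
carbons = 16
rounds = carbons // 2 - 1        # 7 rounds of beta oxidation
acetyl_coa = carbons // 2        # 8 acetyl-CoA produced
nadh = rounds                    # 7 NADH from beta oxidation
fadh2 = rounds                   # 7 FADH2 from beta oxidation

# Approximate ATP equivalents per molecule (assumed modern values).
atp = acetyl_coa * 10 + nadh * 2.5 + fadh2 * 1.5
atp -= 2                         # activation of palmitate to palmitoyl-CoA costs ~2 ATP equivalents

print(atp)  # 106.0
```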
Nutrition and health
Most of the fat found in food is in the form of triglycerides, cholesterol, and phospholipids. Some dietary fat is necessary to facilitate absorption of fat-soluble vitamins (A, D, E, and K) and carotenoids. Humans and other mammals have a dietary requirement for certain essential fatty acids, such as linoleic acid (an omega-6 fatty acid) and alpha-linolenic acid (an omega-3 fatty acid) because they cannot be synthesized from simple precursors in the diet. Both of these fatty acids are 18-carbon polyunsaturated fatty acids differing in the number and position of the double bonds. Most vegetable oils are rich in linoleic acid (safflower, sunflower, and corn oils). Alpha-linolenic acid is found in the green leaves of plants and in some seeds, nuts, and legumes (in particular flax, rapeseed, walnut, and soy). Fish oils are particularly rich in the longer-chain omega-3 fatty acids eicosapentaenoic acid and docosahexaenoic acid. Many studies have shown positive health benefits associated with consumption of omega-3 fatty acids on infant development, cancer, cardiovascular diseases, and various mental illnesses (such as depression, attention-deficit hyperactivity disorder, and dementia).
In contrast, it is now well established that consumption of trans fats, such as those present in partially hydrogenated vegetable oils, is a risk factor for cardiovascular disease. Fats that are otherwise beneficial can be turned into trans fats by improper cooking methods that overheat the lipids.
A few studies have suggested that total dietary fat intake is linked to an increased risk of obesity and diabetes. Others, including the Women's Health Initiative Dietary Modification Trial, an eight-year study of 49,000 women, the Nurses' Health Study, and the Health Professionals Follow-up Study, revealed no such links. None of these studies suggested any connection between percentage of calories from fat and risk of cancer, heart disease, or weight gain. The Nutrition Source, a website maintained by the department of nutrition at the T. H. Chan School of Public Health at Harvard University, summarizes the current evidence on the effect of dietary fat: "Detailed research—much of it done at Harvard—shows that the total amount of fat in the diet isn't really linked with weight or disease."
Lie algebra (https://en.wikipedia.org/wiki/Lie%20algebra)
In mathematics, a Lie algebra is a vector space 𝔤 together with an operation called the Lie bracket, an alternating bilinear map 𝔤 × 𝔤 → 𝔤, (x, y) ↦ [x, y], that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x and y is denoted [x, y]. A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, [x, y] = xy − yx.
Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra.
In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces.
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics.
An elementary example (not directly coming from an associative algebra) is the 3-dimensional space ℝ³ with Lie bracket defined by the cross product, [x, y] = x × y. This is skew-symmetric since x × y = −y × x, and instead of associativity it satisfies the Jacobi identity:
x × (y × z) + y × (z × x) + z × (x × y) = 0.
This is the Lie algebra of the Lie group of rotations of space, and each vector v ∈ ℝ³ may be pictured as an infinitesimal rotation around the axis v, with angular speed equal to the magnitude of v. The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property [x, x] = x × x = 0.
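As a quick numerical illustration (not part of the original article), the skew-symmetry and Jacobi identity of the cross-product bracket can be checked with NumPy on arbitrary sample vectors:

```python
import numpy as np

def bracket(x, y):
    # Lie bracket on R^3 given by the cross product
    return np.cross(x, y)

x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.5, 2.0])
z = np.array([0.0, 1.0, -4.0])

# Skew-symmetry: [x, y] = -[y, x]
print(np.allclose(bracket(x, y), -bracket(y, x)))   # True

# Jacobi identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
print(np.allclose(jacobi, np.zeros(3)))             # True
```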
History
Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used.
Definition of a Lie algebra
A Lie algebra is a vector space 𝔤 over a field F together with a binary operation [·, ·] : 𝔤 × 𝔤 → 𝔤 called the Lie bracket, satisfying the following axioms:
Bilinearity: [ax + by, z] = a[x, z] + b[y, z] and [z, ax + by] = a[z, x] + b[z, y]
for all scalars a, b in F and all elements x, y, z in 𝔤.
The alternating property: [x, x] = 0
for all x in 𝔤.
The Jacobi identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0
for all x, y, z in 𝔤.
Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation.
Using bilinearity to expand the Lie bracket [x + y, x + y] and using the alternating property shows that [x, y] + [y, x] = 0 for all x, y in 𝔤. Thus bilinearity and the alternating property together imply
Anticommutativity: [x, y] = −[y, x]
for all x, y in 𝔤. If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies [x, x] = −[x, x] and hence 2[x, x] = 0.
It is customary to denote a Lie algebra by a lower-case fraktur letter such as 𝔤. If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is 𝔰𝔲(n).
Generators and dimension
The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra 𝔤 means a subset of 𝔤 such that any Lie subalgebra (as defined below) that contains S must be all of 𝔤. Equivalently, 𝔤 is spanned (as a vector space) by all iterated brackets of elements of S.
Basic examples
Abelian Lie algebras
Any vector space endowed with the identically zero Lie bracket becomes a Lie algebra. Such a Lie algebra is called abelian. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket.
The Lie algebra of matrices
On an associative algebra A over a field F with multiplication written as (x, y) ↦ xy, a Lie bracket may be defined by the commutator [x, y] = xy − yx. With this bracket, A is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on A.)
The endomorphism ring of an F-vector space V with the above Lie bracket is denoted 𝔤𝔩(V).
For a field F and a positive integer n, the space of n × n matrices over F, denoted 𝔤𝔩(n, F) or 𝔤𝔩ₙ(F), is a Lie algebra with bracket given by the commutator of matrices: [X, Y] = XY − YX. This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra.
When F is the real numbers, 𝔤𝔩(n, ℝ) is the Lie algebra of the general linear group GL(n, ℝ), the group of invertible n × n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, 𝔤𝔩(n, ℂ) is the Lie algebra of the complex Lie group GL(n, ℂ). The Lie bracket on 𝔤𝔩(n, ℝ) describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, 𝔤𝔩(n, F) can be viewed as the Lie algebra of the algebraic group GLₙ over F.
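A small NumPy sketch (an illustration, not part of the original article) showing that the matrix commutator satisfies the alternating property and the Jacobi identity from the definition above, on randomly chosen 3 × 3 matrices:

```python
import numpy as np

def bracket(X, Y):
    # Commutator bracket on gl(n, R)
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# Alternating property: [X, X] = 0
print(np.allclose(bracket(X, X), 0))   # True

# Jacobi identity: [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0
jacobi = bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X)) + bracket(Z, bracket(X, Y))
print(np.allclose(jacobi, 0))          # True
```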
Definitions
Subalgebras, ideals and homomorphisms
The Lie bracket is not required to be associative, meaning that [[x, y], z] need not equal [x, [y, z]]. Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace 𝔥 ⊆ 𝔤 which is closed under the Lie bracket. An ideal 𝔦 ⊆ 𝔤 is a linear subspace that satisfies the stronger condition: [𝔤, 𝔦] ⊆ 𝔦, that is, [x, y] ∈ 𝔦 for all x ∈ 𝔤 and y ∈ 𝔦.
In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals.
A Lie algebra homomorphism is a linear map φ : 𝔤 → 𝔥 compatible with the respective Lie brackets: φ([x, y]) = [φ(x), φ(y)] for all x, y ∈ 𝔤.
An isomorphism of Lie algebras is a bijective homomorphism.
As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. Given a Lie algebra 𝔤 and an ideal 𝔦 in it, the quotient Lie algebra 𝔤/𝔦 is defined, with a surjective homomorphism 𝔤 → 𝔤/𝔦 of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism φ : 𝔤 → 𝔥 of Lie algebras, the image of φ is a Lie subalgebra of 𝔥 that is isomorphic to 𝔤/ker(φ).
For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements x, y ∈ 𝔤 are said to commute if their bracket vanishes: [x, y] = 0.
The centralizer subalgebra of a subset S ⊆ 𝔤 is the set of elements commuting with S: that is, z_𝔤(S) = {x ∈ 𝔤 : [x, s] = 0 for all s ∈ S}. The centralizer of 𝔤 itself is the center z(𝔤). Similarly, for a subspace S, the normalizer subalgebra of S is n_𝔤(S) = {x ∈ 𝔤 : [x, s] ∈ S for all s ∈ S}. If S is a Lie subalgebra, n_𝔤(S) is the largest subalgebra such that S is an ideal of n_𝔤(S).
Example
The subspace 𝔡 of diagonal matrices in 𝔤𝔩(n, F) is an abelian Lie subalgebra. (It is a Cartan subalgebra of 𝔤𝔩(n, F), analogous to a maximal torus in the theory of compact Lie groups.) Here 𝔡 is not an ideal in 𝔤𝔩(n, F) for n ≥ 2. For example, when n = 2, this follows from the calculation: for the diagonal matrix diag(a, b) and the matrix E₁₂ with a 1 in the upper-right entry and zeros elsewhere,
[diag(a, b), E₁₂] = (a − b)E₁₂ (which is not always in 𝔡).
Every one-dimensional linear subspace of a Lie algebra is an abelian Lie subalgebra, but it need not be an ideal.
Product and semidirect product
For two Lie algebras 𝔤 and 𝔥, the product Lie algebra 𝔤 × 𝔥 is the vector space consisting of all ordered pairs (x, y) with x ∈ 𝔤 and y ∈ 𝔥, with Lie bracket
[(x₁, y₁), (x₂, y₂)] = ([x₁, x₂], [y₁, y₂]).
This is the product in the category of Lie algebras. Note that the copies of 𝔤 and 𝔥 in 𝔤 × 𝔥 commute with each other: [(x, 0), (0, y)] = 0.
Let 𝔤 be a Lie algebra and 𝔦 an ideal of 𝔤. If the canonical map 𝔤 → 𝔤/𝔦 splits (i.e., admits a section 𝔤/𝔦 → 𝔤 as a homomorphism of Lie algebras), then 𝔤 is said to be a semidirect product of 𝔦 and 𝔤/𝔦.
Lie group (https://en.wikipedia.org/wiki/Lie%20group)
In mathematics, a Lie group is a group that is also a differentiable manifold, such that group multiplication and taking inverses are both differentiable.
A manifold is a space that locally resembles Euclidean space, whereas groups define the abstract concept of a binary operation along with the additional properties it must have to be thought of as a "transformation" in the abstract sense, for instance multiplication and the taking of inverses (to allow division), or equivalently, the concept of addition and subtraction. Combining these two ideas, one obtains a continuous group where multiplying points and their inverses is continuous. If the multiplication and taking of inverses are smooth (differentiable) as well, one obtains a Lie group.
Lie groups provide a natural model for the concept of continuous symmetry, a celebrated example of which is the circle group. Rotating a circle is an example of a continuous symmetry: every rotation, by any angle, is such a symmetry, and the composition of such rotations makes them into the circle group, an archetypal example of a Lie group. Lie groups are widely used in many parts of modern mathematics and physics.
Lie groups were first found by studying matrix subgroups contained in GL(n, ℝ) or GL(n, ℂ), the groups of invertible n × n matrices over ℝ or ℂ. These are now called the classical groups, as the concept has been extended far beyond these origins. Lie groups are named after Norwegian mathematician Sophus Lie (1842–1899), who laid the foundations of the theory of continuous transformation groups. Lie's original motivation for introducing Lie groups was to model the continuous symmetries of differential equations, in much the same way that finite groups are used in Galois theory to model the discrete symmetries of algebraic equations.
History
Sophus Lie considered the winter of 1873–1874 as the birth date of his theory of continuous groups. Thomas Hawkins, however, suggests that it was "Lie's prodigious research activity during the four-year period from the fall of 1869 to the fall of 1873" that led to the theory's creation. Some of Lie's early ideas were developed in close collaboration with Felix Klein. Lie met with Klein every day from October 1869 through 1872: in Berlin from the end of October 1869 to the end of February 1870, and in Paris, Göttingen and Erlangen in the subsequent two years. Lie stated that all of the principal results were obtained by 1884. But during the 1870s all his papers (except the very first note) were published in Norwegian journals, which impeded recognition of the work throughout the rest of Europe. In 1884 a young German mathematician, Friedrich Engel, came to work with Lie on a systematic treatise to expose his theory of continuous groups. From this effort resulted the three-volume Theorie der Transformationsgruppen, published in 1888, 1890, and 1893. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse.
Lie's ideas did not stand in isolation from the rest of mathematics. In fact, his interest in the geometry of differential equations was first motivated by the work of Carl Gustav Jacobi, on the theory of partial differential equations of first order and on the equations of classical mechanics. Much of Jacobi's work was published posthumously in the 1860s, generating enormous interest in France and Germany. Lie's idée fixe was to develop a theory of symmetries of differential equations that would accomplish for them what Évariste Galois had done for algebraic equations: namely, to classify them in terms of group theory. Lie and other mathematicians showed that the most important equations for special functions and orthogonal polynomials tend to arise from group theoretical symmetries. In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. However, the hope that Lie theory would unify the entire field of ordinary differential equations was not fulfilled. Symmetry methods for ODEs continue to be studied, but do not dominate the subject. There is a differential Galois theory, but it was developed by others, such as Picard and Vessiot, and it provides a theory of quadratures, the indefinite integrals required to express solutions.
Additional impetus to consider continuous groups came from ideas of Bernhard Riemann, on the foundations of geometry, and their further development in the hands of Klein. Thus three major themes in 19th century mathematics were combined by Lie in creating his new theory:
The idea of symmetry, as exemplified by Galois through the algebraic notion of a group;
Geometric theory and the explicit solutions of differential equations of mechanics, worked out by Poisson and Jacobi;
The new understanding of geometry that emerged in the works of Plücker, Möbius, Grassmann and others, and culminated in Riemann's revolutionary vision of the subject.
Although today Sophus Lie is rightfully recognized as the creator of the theory of continuous groups, a major stride in the development of their structure theory, which was to have a profound influence on subsequent development of mathematics, was made by Wilhelm Killing, who in 1888 published the first paper in a series entitled Die Zusammensetzung der stetigen endlichen Transformationsgruppen (The composition of continuous finite transformation groups). The work of Killing, later refined and generalized by Élie Cartan, led to classification of semisimple Lie algebras, Cartan's theory of symmetric spaces, and Hermann Weyl's description of representations of compact and semisimple Lie groups using highest weights.
In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris.
Weyl brought the early period of the development of the theory of Lie groups to fruition, for not only did he classify irreducible representations of semisimple Lie groups and connect the theory of groups with quantum mechanics, but he also put Lie's theory itself on firmer footing by clearly enunciating the distinction between Lie's infinitesimal groups (i.e., Lie algebras) and the Lie groups proper, and began investigations of topology of Lie groups. The theory of Lie groups was systematically reworked in modern mathematical language in a monograph by Claude Chevalley.
Overview
Lie groups are smooth differentiable manifolds and as such can be studied using differential calculus, in contrast with the case of more general topological groups. One of the key ideas in the theory of Lie groups is to replace the global object, the group, with its local or linearized version, which Lie himself called its "infinitesimal group" and which has since become known as its Lie algebra.
Lie groups play an enormous role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various "geometries" by specifying an appropriate transformation group that leaves certain geometric properties invariant. Thus Euclidean geometry corresponds to the choice of the group E(3) of distance-preserving transformations of the Euclidean space ℝ³, conformal geometry corresponds to enlarging the group to the conformal group, whereas in projective geometry one is interested in the properties invariant under the projective group. This idea later led to the notion of a G-structure, where G is a Lie group of "local" symmetries of a manifold.
Lie groups (and their associated Lie algebras) play a major role in modern physics, with the Lie group typically playing the role of a symmetry of a physical system. Here, the representations of the Lie group (or of its Lie algebra) are especially important. Representation theory is used extensively in particle physics. Groups whose representations are of particular importance include the rotation group SO(3) (or its double cover SU(2)), the special unitary group SU(3) and the Poincaré group.
On a "global" level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, this action provides a measure of rigidity and yields a rich algebraic structure. The presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry and facilitates analysis on the manifold. Linear actions of Lie groups are especially important, and are studied in representation theory.
In the 1940s–1950s, Ellis Kolchin, Armand Borel, and Claude Chevalley realised that many foundational results concerning Lie groups can be developed completely algebraically, giving rise to the theory of algebraic groups defined over an arbitrary field. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups, as well as in algebraic geometry. The theory of automorphic forms, an important branch of modern number theory, deals extensively with analogues of Lie groups over adele rings; p-adic Lie groups play an important role, via their connections with Galois representations in number theory.
Definitions and examples
A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication
μ : G × G → G, μ(x, y) = xy
means that μ is a smooth mapping of the product manifold G × G into G. The two requirements can be combined to the single requirement that the mapping
(x, y) ↦ x⁻¹y
be a smooth mapping of the product manifold G × G into G.
First examples
The 2×2 real invertible matrices form a group under multiplication, called the general linear group of degree 2 and denoted by GL(2, ℝ) or by GL₂(ℝ). This is a four-dimensional noncompact real Lie group; it is an open subset of the four-dimensional vector space of all 2×2 real matrices. This group is disconnected; it has two connected components corresponding to the positive and negative values of the determinant.
The rotation matrices form a subgroup of GL(2, ℝ), denoted by SO(2, ℝ). It is a Lie group in its own right: specifically, a one-dimensional compact connected Lie group which is diffeomorphic to the circle. Using the rotation angle φ as a parameter, this group can be parametrized as follows:
R(φ) = [[cos φ, −sin φ], [sin φ, cos φ]].
Addition of the angles corresponds to multiplication of the elements of SO(2, ℝ), and taking the opposite angle corresponds to inversion. Thus both multiplication and inversion are differentiable maps.
The affine group of one dimension is a two-dimensional matrix Lie group, consisting of 2 × 2 real upper-triangular matrices, with the first diagonal entry being positive and the second diagonal entry being 1. Thus, the group consists of matrices of the form
A = [[a, b], [0, 1]], with a > 0 and b in ℝ.
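A short NumPy check (an illustration, not from the article) that the rotation-angle parametrization above turns addition of angles into multiplication of group elements, and the opposite angle into the inverse:

```python
import numpy as np

def rotation(phi):
    # Element of SO(2) parametrized by the rotation angle phi
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

a, b = 0.7, 1.9

# Multiplying group elements corresponds to adding angles ...
print(np.allclose(rotation(a) @ rotation(b), rotation(a + b)))   # True
# ... and inversion corresponds to taking the opposite angle.
print(np.allclose(np.linalg.inv(rotation(a)), rotation(-a)))     # True
```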
Non-example
We now present an example of a group with an uncountable number of elements that is not a Lie group under a certain topology. The group given by
H = {(exp(2πiθ), exp(2πiaθ)) : θ ∈ ℝ} ⊂ T²,
with a a fixed irrational number, is a subgroup of the torus T² that is not a Lie group when given the subspace topology. If we take any small neighborhood U of a point h in H, for example, the portion of H in U is disconnected. The group H winds repeatedly around the torus without ever reaching a previous point of the spiral and thus forms a dense subgroup of T².
The group H can, however, be given a different topology, in which the distance between two points h₁, h₂ is defined as the length of the shortest path in the group H joining h₁ to h₂. In this topology, H is identified homeomorphically with the real line by identifying each element with the number θ in the definition of H. With this topology, H is just the group of real numbers under addition and is therefore a Lie group.
The group H is an example of a "Lie subgroup" of a Lie group that is not closed. See the discussion below of Lie subgroups in the section on basic concepts.
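The density claim can be made tangible numerically. The sketch below (an illustration, with √2 playing the role of the fixed irrational number a and 0.123 an arbitrarily chosen target) tracks the second-coordinate angle ka (mod 1) each time the first coordinate completes a full turn (θ = k an integer), and shows that it comes arbitrarily close to the target as more of the spiral is traversed:

```python
import numpy as np

a = np.sqrt(2.0)     # a fixed irrational slope (assumed for the demonstration)
target = 0.123       # an arbitrary point of the circle, as a fraction of a full turn

for n in (10, 100, 1000, 10000):
    ks = np.arange(1, n + 1)
    # distance (mod 1) between the target and the second-coordinate angle after k windings
    dist = np.abs((ks * a) % 1.0 - target)
    dist = np.minimum(dist, 1.0 - dist)
    print(n, dist.min())   # the minimum distance shrinks toward 0 as n grows
```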
Matrix Lie groups
Let GL(n, ℂ) denote the group of invertible n × n matrices with entries in ℂ. Any closed subgroup of GL(n, ℂ) is a Lie group; Lie groups of this sort are called matrix Lie groups. Since most of the interesting examples of Lie groups can be realized as matrix Lie groups, some textbooks restrict attention to this class, including those of Hall, Rossmann, and Stillwell.
Restricting attention to matrix Lie groups simplifies the definition of the Lie algebra and the exponential map. The following are standard examples of matrix Lie groups.
The special linear groups over ℝ and ℂ, SL(n, ℝ) and SL(n, ℂ), consisting of n × n matrices with determinant one and entries in ℝ or ℂ
The unitary groups and special unitary groups, U(n) and SU(n), consisting of n × n complex matrices satisfying U*U = I (and also det(U) = 1 in the case of SU(n))
The orthogonal groups and special orthogonal groups, O(n) and SO(n), consisting of n × n real matrices satisfying RᵀR = I (and also det(R) = 1 in the case of SO(n))
All of the preceding examples fall under the heading of the classical groups.
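As an illustration of the defining conditions listed above (a sketch, not from the article), membership can be verified numerically for two concrete matrices, a planar rotation and a unitary matrix of determinant one:

```python
import numpy as np

# A rotation by 0.3 radians: orthogonal with determinant one, so it lies in SO(2).
R = np.array([[np.cos(0.3), -np.sin(0.3)],
              [np.sin(0.3),  np.cos(0.3)]])
print(np.allclose(R.T @ R, np.eye(2)), np.isclose(np.linalg.det(R), 1.0))          # True True

# A unitary matrix with determinant one, so it lies in SU(2).
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)
print(np.allclose(U.conj().T @ U, np.eye(2)), np.isclose(np.linalg.det(U), 1.0))   # True True
```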
Related concepts
A complex Lie group is defined in the same way using complex manifolds rather than real ones (an example is GL(n, ℂ)), and holomorphic maps. Similarly, using an alternate metric completion of ℚ, one can define a p-adic Lie group over the p-adic numbers, a topological group which is also an analytic p-adic manifold, such that the group operations are analytic. In particular, each point has a p-adic neighborhood.
Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples. The answer to this question turned out to be negative: in 1952, Gleason, Montgomery and Zippin showed that if G is a topological manifold with continuous group operations, then there exists exactly one analytic structure on G which turns it into a Lie group (see also Hilbert–Smith conjecture). If the underlying manifold is allowed to be infinite-dimensional (for example, a Hilbert manifold), then one arrives at the notion of an infinite-dimensional Lie group. It is possible to define analogues of many Lie groups over finite fields, and these give most of the examples of finite simple groups.
The language of category theory provides a concise definition for Lie groups: a Lie group is a group object in the category of smooth manifolds. This is important, because it allows generalization of the notion of a Lie group to Lie supergroups. This categorical point of view leads also to a different generalization of Lie groups, namely Lie groupoids, which are groupoid objects in the category of smooth manifolds with a further requirement.
Topological definition
A Lie group can be defined as a (Hausdorff) topological group that, near the identity element, looks like a transformation group, with no reference to differentiable manifolds. First, we define an immersely linear Lie group to be a subgroup G of the general linear group GL(n, ℂ) such that
for some neighborhood V of the identity element e in G, the topology on V is the subspace topology of GL(n, ℂ) and V is closed in GL(n, ℂ).
G has at most countably many connected components.
(For example, a closed subgroup of GL(n, ℂ); that is, a matrix Lie group satisfies the above conditions.)
Then a Lie group is defined as a topological group that (1) is locally isomorphic near the identities to an immersely linear Lie group and (2) has at most countably many connected components. Showing the topological definition is equivalent to the usual one is technical (and the beginning readers should skip the following) but is done roughly as follows:
Given a Lie group G in the usual manifold sense, the Lie group–Lie algebra correspondence (or a version of Lie's third theorem) constructs an immersed Lie subgroup G′ ⊂ GL(n, ℂ) such that G and G′ share the same Lie algebra; thus, they are locally isomorphic. Hence, G satisfies the above topological definition.
Conversely, let G be a topological group that is a Lie group in the above topological sense and choose an immersely linear Lie group G′ that is locally isomorphic to G. Then, by a version of the closed subgroup theorem, G′ is a real-analytic manifold and then, through the local isomorphism, G acquires a structure of a manifold near the identity element. One then shows that the group law on G can be given by formal power series; so the group operations are real-analytic and G itself is a real-analytic manifold.
The topological definition implies the statement that if two Lie groups are isomorphic as topological groups, then they are isomorphic as Lie groups. In fact, it states the general principle that, to a large extent, the topology of a Lie group together with the group law determines the geometry of the group.
More examples of Lie groups
Lie groups occur in abundance throughout mathematics and physics. Matrix groups or algebraic groups are (roughly) groups of matrices (for example, orthogonal and symplectic groups), and these give most of the more common examples of Lie groups.
Dimensions one and two
The only connected Lie groups with dimension one are the real line ℝ (with the group operation being addition) and the circle group S¹ of complex numbers with absolute value one (with the group operation being multiplication). The group S¹ is often denoted as U(1), the group of 1 × 1 unitary matrices.
In two dimensions, if we restrict attention to simply connected groups, then they are classified by their Lie algebras. There are (up to isomorphism) only two Lie algebras of dimension two. The associated simply connected Lie groups are ℝ² (with the group operation being vector addition) and the affine group in dimension one, described in the previous subsection under "first examples".
Additional examples
The group SU(2) is the group of 2 × 2 unitary matrices with determinant 1. Topologically, SU(2) is the 3-sphere S³; as a group, it may be identified with the group of unit quaternions.
The Heisenberg group is a connected nilpotent Lie group of dimension 3, playing a key role in quantum mechanics.
The Lorentz group is a 6-dimensional Lie group of linear isometries of the Minkowski space.
The Poincaré group is a 10-dimensional Lie group of affine isometries of the Minkowski space.
The exceptional Lie groups of types G2, F4, E6, E7, E8 have dimensions 14, 52, 78, 133, and 248. Along with the A–B–C–D series of simple Lie groups, the exceptional groups complete the list of simple Lie groups.
The symplectic group Sp(2n, ℝ) consists of all 2n × 2n matrices preserving a symplectic form on ℝ²ⁿ. It is a connected Lie group of dimension n(2n + 1).
Constructions
There are several standard ways to form new Lie groups from old ones:
The product of two Lie groups is a Lie group.
Any topologically closed subgroup of a Lie group is a Lie group. This is known as the closed subgroup theorem or Cartan's theorem.
The quotient of a Lie group by a closed normal subgroup is a Lie group.
The universal cover of a connected Lie group is a Lie group. For example, the group ℝ is the universal cover of the circle group S¹. In fact any covering of a differentiable manifold is also a differentiable manifold, but by specifying universal cover, one guarantees a group structure (compatible with its other structures).
Related notions
Some examples of groups that are not Lie groups (except in the trivial sense that any group having at most countably many elements can be viewed as a 0-dimensional Lie group, with the discrete topology), are:
Infinite-dimensional groups, such as the additive group of an infinite-dimensional real vector space, or the space C∞(M, G) of smooth functions from a manifold M to a Lie group G. These are not Lie groups as they are not finite-dimensional manifolds.
Some totally disconnected groups, such as the Galois group of an infinite extension of fields, or the additive group of the p-adic numbers. These are not Lie groups because their underlying spaces are not real manifolds. (Some of these groups are "p-adic Lie groups".) In general, only topological groups having similar local properties to Rn for some positive integer n can be Lie groups (of course they must also have a differentiable structure).
Basic concepts
The Lie algebra associated with a Lie group
To every Lie group we can associate a Lie algebra whose underlying vector space is the tangent space of the Lie group at the identity element and which completely captures the local structure of the group. Informally we can think of elements of the Lie algebra as elements of the group that are "infinitesimally close" to the identity, and the Lie bracket of the Lie algebra is related to the commutator of two such infinitesimal elements. Before giving the abstract definition we give a few examples:
The Lie algebra of the vector space Rn is just Rn with the Lie bracket given by [A, B] = 0. (In general the Lie bracket of a connected Lie group is always 0 if and only if the Lie group is abelian.)
The Lie algebra of the general linear group GL(n, C) of invertible matrices is the vector space M(n, C) of square matrices with the Lie bracket given by [A, B] = AB − BA.
If G is a closed subgroup of GL(n, C) then the Lie algebra of G can be thought of informally as the matrices m of M(n, C) such that 1 + εm is in G, where ε is an infinitesimal positive number with ε2 = 0 (of course, no such real number ε exists). For example, the orthogonal group O(n, R) consists of matrices A with AAT = 1, so the Lie algebra consists of the matrices m with (1 + εm)(1 + εm)T = 1, which is equivalent to m + mT = 0 because ε2 = 0.
The preceding description can be made more rigorous as follows. The Lie algebra of a closed subgroup G of GL(n, C) may be computed as
Lie(G) = {X ∈ M(n, C) : exp(tX) ∈ G for all t in R},
where exp(tX) is defined using the matrix exponential. It can then be shown that the Lie algebra of G is a real vector space that is closed under the bracket operation, [X, Y] = XY − YX.
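A numerical sketch (an illustration, not from the article) of this characterization for the orthogonal group O(3): a skew-symmetric matrix X satisfies exp(tX)ᵀ exp(tX) = 1 for every t, so it meets the membership criterion above, and the bracket of two skew-symmetric matrices is again skew-symmetric, showing closure under the bracket:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A, B = (rng.standard_normal((3, 3)) for _ in range(2))
X, Y = A - A.T, B - B.T   # skew-symmetric matrices, i.e. elements of the Lie algebra of O(3)

# exp(tX) is orthogonal for every t, so X satisfies the membership criterion for Lie(O(3)).
for t in (0.5, 1.0, 2.0):
    Q = expm(t * X)
    print(np.allclose(Q.T @ Q, np.eye(3)))   # True

# Closure under the bracket: [X, Y] is again skew-symmetric.
C = X @ Y - Y @ X
print(np.allclose(C, -C.T))                  # True
```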
The concrete definition given above for matrix groups is easy to work with, but has some minor problems: to use it we first need to represent a Lie group as a group of matrices, but not all Lie groups can be represented in this way, and it is not even obvious that the Lie algebra is independent of the representation we use. To get around these problems we give
the general definition of the Lie algebra of a Lie group (in 4 steps):
Vector fields on any smooth manifold M can be thought of as derivations X of the ring of smooth functions on the manifold, and therefore form a Lie algebra under the Lie bracket [X, Y] = XY − YX, because the Lie bracket of any two derivations is a derivation.
If G is any group acting smoothly on the manifold M, then it acts on the vector fields, and the vector space of vector fields fixed by the group is closed under the Lie bracket and therefore also forms a Lie algebra.
We apply this construction to the case when the manifold M is the underlying space of a Lie group G, with G acting on G = M by left translations Lg(h) = gh. This shows that the space of left invariant vector fields (vector fields satisfying Lg*Xh = Xgh for every h in G, where Lg* denotes the differential of Lg) on a Lie group is a Lie algebra under the Lie bracket of vector fields.
Any tangent vector at the identity of a Lie group can be extended to a left invariant vector field by left translating the tangent vector to other points of the manifold. Specifically, the left invariant extension of an element v of the tangent space at the identity is the vector field defined by v^g = Lg*v. This identifies the tangent space TeG at the identity with the space of left invariant vector fields, and therefore makes the tangent space at the identity into a Lie algebra, called the Lie algebra of G, usually denoted by a Fraktur 𝔤. Thus the Lie bracket on 𝔤 is given explicitly by [v, w] = [v^, w^]e.
This Lie algebra is finite-dimensional and it has the same dimension as the manifold G. The Lie algebra of G determines G up to "local isomorphism", where two Lie groups are called locally isomorphic if they look the same near the identity element.
Problems about Lie groups are often solved by first solving the corresponding problem for the Lie algebras, and the result for groups then usually follows easily.
For example, simple Lie groups are usually classified by first classifying the corresponding Lie algebras.
We could also define a Lie algebra structure on Te using right invariant vector fields instead of left invariant vector fields. This leads to the same Lie algebra, because the inverse map on G can be used to identify left invariant vector fields with right invariant vector fields, and acts as −1 on the tangent space Te.
The Lie algebra structure on Te can also be described as follows:
the commutator operation
(x, y) → xyx^−1y^−1
on G × G sends (e, e) to e, so its derivative yields a bilinear operation on TeG. This bilinear operation is actually the zero map, but the second derivative, under the proper identification of tangent spaces, yields an operation that satisfies the axioms of a Lie bracket, and it is equal to twice the one defined through left-invariant vector fields.
Homomorphisms and isomorphisms
If G and H are Lie groups, then a Lie group homomorphism f : G → H is a smooth group homomorphism. In the case of complex Lie groups, such a homomorphism is required to be a holomorphic map. However, these requirements are a bit stringent; every continuous homomorphism between real Lie groups turns out to be (real) analytic.
The composition of two Lie homomorphisms is again a homomorphism, and the class of all Lie groups, together with these morphisms, forms a category. Moreover, every Lie group homomorphism induces a homomorphism between the corresponding Lie algebras. Let ϕ : G → H be a Lie group homomorphism and let ϕ* be its derivative at the identity. If we identify the Lie algebras of G and H with their tangent spaces at the identity elements, then ϕ* is a map between the corresponding Lie algebras, ϕ* : 𝔤 → 𝔥,
which turns out to be a Lie algebra homomorphism (meaning that it is a linear map which preserves the Lie bracket). In the language of category theory, we then have a covariant functor from the category of Lie groups to the category of Lie algebras which sends a Lie group to its Lie algebra and a Lie group homomorphism to its derivative at the identity.
Two Lie groups are called isomorphic if there exists a bijective homomorphism between them whose inverse is also a Lie group homomorphism. Equivalently, it is a diffeomorphism which is also a group homomorphism. Observe that, by the above, a continuous homomorphism from a Lie group to a Lie group is an isomorphism of Lie groups if and only if it is bijective.
Lie group versus Lie algebra isomorphisms
Isomorphic Lie groups necessarily have isomorphic Lie algebras; it is then reasonable to ask how isomorphism classes of Lie groups relate to isomorphism classes of Lie algebras.
The first result in this direction is Lie's third theorem, which states that every finite-dimensional, real Lie algebra is the Lie algebra of some (linear) Lie group. One way to prove Lie's third theorem is to use Ado's theorem, which says every finite-dimensional real Lie algebra is isomorphic to a matrix Lie algebra. Meanwhile, for every finite-dimensional matrix Lie algebra, there is a linear group (matrix Lie group) with this algebra as its Lie algebra.
On the other hand, Lie groups with isomorphic Lie algebras need not be isomorphic. Furthermore, this result remains true even if we assume the groups are connected. To put it differently, the global structure of a Lie group is not determined by its Lie algebra; for example, if Z is any discrete subgroup of the center of G then G and G/Z have the same Lie algebra (see the table of Lie groups for examples). An example of importance in physics is given by the groups SU(2) and SO(3). These two groups have isomorphic Lie algebras, but the groups themselves are not isomorphic, because SU(2) is simply connected but SO(3) is not.
On the other hand, if we require that the Lie group be simply connected, then the global structure is determined by its Lie algebra: two simply connected Lie groups with isomorphic Lie algebras are isomorphic. (See the next subsection for more information about simply connected Lie groups.) In light of Lie's third theorem, we may therefore say that there is a one-to-one correspondence between isomorphism classes of finite-dimensional real Lie algebras and isomorphism classes of simply connected Lie groups.
Simply connected Lie groups
A Lie group G is said to be simply connected if every loop in G can be shrunk continuously to a point in G. This notion is important because of the following result that has simple connectedness as a hypothesis:
Theorem: Suppose G and H are Lie groups with Lie algebras 𝔤 and 𝔥, and that f : 𝔤 → 𝔥 is a Lie algebra homomorphism. If G is simply connected, then there is a unique Lie group homomorphism ϕ : G → H such that ϕ* = f, where ϕ* is the differential of ϕ at the identity.
Lie's third theorem says that every finite-dimensional real Lie algebra is the Lie algebra of a Lie group. It follows from Lie's third theorem and the preceding result that every finite-dimensional real Lie algebra is the Lie algebra of a unique simply connected Lie group.
An example of a simply connected group is the special unitary group SU(2), which as a manifold is the 3-sphere. The rotation group SO(3), on the other hand, is not simply connected. (See Topology of SO(3).) The failure of SO(3) to be simply connected is intimately connected to the distinction between integer spin and half-integer spin in quantum mechanics. Other examples of simply connected Lie groups include the special unitary group SU(n), the spin group (double cover of rotation group) Spin(n) for n ≥ 3, and the compact symplectic group Sp(n).
Methods for determining whether a Lie group is simply connected or not are discussed in the article on fundamental groups of Lie groups.
Exponential map
The exponential map from the Lie algebra M(n, C) of the general linear group GL(n, C) to GL(n, C) is defined by the matrix exponential, given by the usual power series:
exp(X) = 1 + X + X^2/2! + X^3/3! + ⋯
for matrices X. If G is a closed subgroup of GL(n, C), then the exponential map takes the Lie algebra of G into G; thus, we have an exponential map for all matrix groups. Every element of G that is sufficiently close to the identity is the exponential of a matrix in the Lie algebra.
The definition above is easy to use, but it is not defined for Lie groups that are not matrix groups, and it is not clear that the exponential map of a Lie group does not depend on its representation as a matrix group. We can solve both problems using a more abstract definition of the exponential map that works for all Lie groups, as follows.
For each vector X in the Lie algebra 𝔤 of G (i.e., the tangent space to G at the identity), one proves that there is a unique one-parameter subgroup c : R → G such that c′(0) = X. Saying that c is a one-parameter subgroup means simply that c is a smooth map into G and that
c(s + t) = c(s) c(t)
for all s and t. The operation on the right hand side is the group multiplication in G. The formal similarity of this formula with the one valid for the exponential function justifies the definition
exp(X) = c(1).
This is called the exponential map, and it maps the Lie algebra 𝔤 into the Lie group G. It provides a diffeomorphism between a neighborhood of 0 in 𝔤 and a neighborhood of e in G. This exponential map is a generalization of the exponential function for real numbers (because R is the Lie algebra of the Lie group of positive real numbers with multiplication), for complex numbers (because C is the Lie algebra of the Lie group of non-zero complex numbers with multiplication) and for matrices (because M(n, R) with the regular commutator is the Lie algebra of the Lie group GL(n, R) of all invertible matrices).
Because the exponential map is surjective on some neighbourhood of e, it is common to call elements of the Lie algebra infinitesimal generators of the group G. The subgroup of G generated by the image of the exponential map is the identity component of G.
The exponential map and the Lie algebra determine the local group structure of every connected Lie group, because of the Baker–Campbell–Hausdorff formula: there exists a neighborhood U of the zero element of 𝔤, such that for u, v in U we have
exp(u) exp(v) = exp(u + v + 1/2 [u, v] + 1/12 [u, [u, v]] − 1/12 [v, [u, v]] + ⋯),
where the omitted terms are known and involve Lie brackets of four or more elements. In case u and v commute, this formula reduces to the familiar exponential law exp(u) exp(v) = exp(u + v).
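As a rough numerical illustration (my own sketch, with arbitrary small test matrices), one can compare log(exp(u) exp(v)) with the truncated Baker–Campbell–Hausdorff series above; for small u and v the discrepancy is dominated by the omitted fourth-order terms.

import numpy as np
from scipy.linalg import expm, logm

def bracket(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(1)
u = 0.05 * rng.standard_normal((3, 3))
v = 0.05 * rng.standard_normal((3, 3))

lhs = logm(expm(u) @ expm(v))
rhs = (u + v + 0.5 * bracket(u, v)
       + (bracket(u, bracket(u, v)) - bracket(v, bracket(u, v))) / 12.0)

print(np.max(np.abs(lhs - rhs)))   # tiny; set by the omitted higher-order terms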
The exponential map relates Lie group homomorphisms. That is, if ϕ : G → H is a Lie group homomorphism and ϕ* : 𝔤 → 𝔥 the induced map on the corresponding Lie algebras, then for all x in 𝔤 we have
ϕ(exp(x)) = exp(ϕ*(x)).
In other words, exp intertwines every Lie group homomorphism with its differential at the identity.
(In short, exp is a natural transformation from the functor Lie to the identity functor on the category of Lie groups.)
The exponential map from the Lie algebra to the Lie group is not always onto, even if the group is connected (though it does map onto the Lie group for connected groups that are either compact or nilpotent). For example, the exponential map of SL(2, R) is not surjective. Also, the exponential map is neither surjective nor injective for infinite-dimensional (see below) Lie groups modelled on a C∞ Fréchet space, even from an arbitrarily small neighborhood of 0 to the corresponding neighborhood of 1.
Lie subgroup
A Lie subgroup H of a Lie group G is a Lie group that is a subset of G and such that the inclusion map from H to G is an injective immersion and group homomorphism. According to Cartan's theorem, a closed subgroup of G admits a unique smooth structure which makes it an embedded Lie subgroup of G, i.e. a Lie subgroup such that the inclusion map is a smooth embedding.
Examples of non-closed subgroups are plentiful; for example take G to be a torus of dimension 2 or greater, and let H be a one-parameter subgroup of irrational slope, i.e. one that winds around in G. Then there is a Lie group homomorphism φ : R → G with image H. The closure of H will be a sub-torus in G.
The exponential map gives a one-to-one correspondence between the connected Lie subgroups of a connected Lie group G and the subalgebras of the Lie algebra of G. Typically, the subgroup corresponding to a subalgebra is not a closed subgroup. There is no criterion solely based on the structure of G which determines which subalgebras correspond to closed subgroups.
Representations
One important aspect of the study of Lie groups is their representations, that is, the way they can act (linearly) on vector spaces. In physics, Lie groups often encode the symmetries of a physical system. The way one makes use of this symmetry to help analyze the system is often through representation theory. Consider, for example, the time-independent Schrödinger equation in quantum mechanics, Ĥψ = Eψ. Assume the system in question has the rotation group SO(3) as a symmetry, meaning that the Hamiltonian operator Ĥ commutes with the action of SO(3) on the wave function ψ. (One important example of such a system is the hydrogen atom, which has a spherically symmetric potential.) This assumption does not necessarily mean that the solutions ψ are rotationally invariant functions. Rather, it means that the space of solutions to Ĥψ = Eψ is invariant under rotations (for each fixed value of E). This space, therefore, constitutes a representation of SO(3). These representations have been classified and the classification leads to a substantial simplification of the problem, essentially converting a three-dimensional partial differential equation to a one-dimensional ordinary differential equation.
The case of a connected compact Lie group K (including the just-mentioned case of SO(3)) is particularly tractable. In that case, every finite-dimensional representation of K decomposes as a direct sum of irreducible representations. The irreducible representations, in turn, were classified by Hermann Weyl. The classification is in terms of the "highest weight" of the representation. The classification is closely related to the classification of representations of a semisimple Lie algebra.
One can also study (in general infinite-dimensional) unitary representations of an arbitrary Lie group (not necessarily compact). For example, it is possible to give a relatively simple explicit description of the representations of the group SL(2, R) and the representations of the Poincaré group.
Classification
Lie groups may be thought of as smoothly varying families of symmetries. Examples of symmetries include rotation about an axis. What must be understood is the nature of 'small' transformations, for example, rotations through tiny angles, that link nearby transformations. The mathematical object capturing this structure is called a Lie algebra (Lie himself called them "infinitesimal groups"). It can be defined because Lie groups are smooth manifolds, so have tangent spaces at each point.
The Lie algebra of any compact Lie group (very roughly: one for which the symmetries form a bounded set) can be decomposed as a direct sum of an abelian Lie algebra and some number of simple ones. The structure of an abelian Lie algebra is mathematically uninteresting (since the Lie bracket is identically zero); the interest is in the simple summands. Hence the question arises: what are the simple Lie algebras of compact groups? It turns out that they mostly fall into four infinite families, the "classical Lie algebras" An, Bn, Cn and Dn, which have simple descriptions in terms of symmetries of Euclidean space. But there are also just five "exceptional Lie algebras" that do not fall into any of these families. E8 is the largest of these.
Lie groups are classified according to their algebraic properties (simple, semisimple, solvable, nilpotent, abelian), their connectedness (connected or simply connected) and their compactness.
A first key result is the Levi decomposition, which says that every simply connected Lie group is the semidirect product of a solvable normal subgroup and a semisimple subgroup.
Connected compact Lie groups are all known: they are finite central quotients of a product of copies of the circle group S1 and simple compact Lie groups (which correspond to connected Dynkin diagrams).
Any simply connected solvable Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Solvable groups are too messy to classify except in a few small dimensions.
Any simply connected nilpotent Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices with 1s on the diagonal of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Like solvable groups, nilpotent groups are too messy to classify except in a few small dimensions.
Simple Lie groups are sometimes defined to be those that are simple as abstract groups, and sometimes defined to be connected Lie groups with a simple Lie algebra. For example, SL(2, R) is simple according to the second definition but not according to the first. They have all been classified (for either definition).
Semisimple Lie groups are Lie groups whose Lie algebra is a product of simple Lie algebras. They are central extensions of products of simple Lie groups.
The identity component of any Lie group is an open normal subgroup, and the quotient group is a discrete group. The universal cover of any connected Lie group is a simply connected Lie group, and conversely any connected Lie group is a quotient of a simply connected Lie group by a discrete normal subgroup of the center. Any Lie group G can be decomposed into discrete, simple, and abelian groups in a canonical way as follows. Write
Gcon for the connected component of the identity
Gsol for the largest connected normal solvable subgroup
Gnil for the largest connected normal nilpotent subgroup
so that we have a sequence of normal subgroups
1 ⊆ Gnil ⊆ Gsol ⊆ Gcon ⊆ G.
Then
G/Gcon is discrete
Gcon/Gsol is a central extension of a product of simple connected Lie groups.
Gsol/Gnil is abelian. A connected abelian Lie group is isomorphic to a product of copies of R and the circle group S1.
Gnil/1 is nilpotent, and therefore its ascending central series has all quotients abelian.
This can be used to reduce some problems about Lie groups (such as finding their unitary representations) to the same problems for connected simple groups and nilpotent and solvable subgroups of smaller dimension.
Properties
The diffeomorphism group of a Lie group acts transitively on the Lie group.
Every Lie group is parallelizable, and hence an orientable manifold (there is a bundle isomorphism between its tangent bundle and the product of itself with the tangent space at the identity).
Infinite-dimensional Lie groups
Lie groups are often defined to be finite-dimensional, but there are many groups that resemble Lie groups, except for being infinite-dimensional. The simplest way to define infinite-dimensional Lie groups is to model them locally on Banach spaces (as opposed to Euclidean space in the finite-dimensional case), and in this case much of the basic theory is similar to that of finite-dimensional Lie groups. However this is inadequate for many applications, because many natural examples of infinite-dimensional Lie groups are not Banach manifolds. Instead one needs to define Lie groups modeled on more general locally convex topological vector spaces. In this case the relation between the Lie algebra and the Lie group becomes rather subtle, and several results about finite-dimensional Lie groups no longer hold.
The literature is not entirely uniform in its terminology as to exactly which properties of infinite-dimensional groups qualify the group for the prefix Lie in Lie group. On the Lie algebra side of affairs, things are simpler since the qualifying criteria for the prefix Lie in Lie algebra are purely algebraic. For example, an infinite-dimensional Lie algebra may or may not have a corresponding Lie group. That is, there may be a group corresponding to the Lie algebra, but it might not be nice enough to be called a Lie group, or the connection between the group and the Lie algebra might not be nice enough (for example, failure of the exponential map to be onto a neighborhood of the identity). It is the "nice enough" that is not universally defined.
Some of the examples that have been studied include:
The group of diffeomorphisms of a manifold. Quite a lot is known about the group of diffeomorphisms of the circle. Its Lie algebra is (more or less) the Witt algebra, whose central extension the Virasoro algebra (see Virasoro algebra from Witt algebra for a derivation of this fact) is the symmetry algebra of two-dimensional conformal field theory. Diffeomorphism groups of compact manifolds of larger dimension are regular Fréchet Lie groups; very little about their structure is known.
The diffeomorphism group of spacetime sometimes appears in attempts to quantize gravity.
The group of smooth maps from a manifold to a finite-dimensional Lie group is an example of a gauge group (with operation of pointwise multiplication), and is used in quantum field theory and Donaldson theory. If the manifold is a circle these are called loop groups, and have central extensions whose Lie algebras are (more or less) Kac–Moody algebras.
There are infinite-dimensional analogues of general linear groups, orthogonal groups, and so on. One important aspect is that these may have simpler topological properties: see for example Kuiper's theorem. In M-theory, for example, a 10-dimensional SU(N) gauge theory becomes an 11-dimensional theory when N becomes infinite.
| Mathematics | Abstract algebra | null |
17961 | https://en.wikipedia.org/wiki/Least%20common%20multiple | Least common multiple | In arithmetic and number theory, the least common multiple, lowest common multiple, or smallest common multiple of two integers a and b, usually denoted by lcm(a, b), is the smallest positive integer that is divisible by both a and b. Since division of integers by zero is undefined, this definition has meaning only if a and b are both different from zero. However, some authors define lcm(a, 0) as 0 for all a, since 0 is the only common multiple of a and 0.
The least common multiple of the denominators of two fractions is the "lowest common denominator" (lcd), and can be used for adding, subtracting or comparing the fractions.
The least common multiple of more than two integers a, b, c, . . . , usually denoted by lcm(a, b, c, . . .), is defined as the smallest positive integer that is divisible by each of a, b, c, . . .
Overview
A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10, so 10 is divisible by 5 and 2. Because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and −2 as well.
Notation
The least common multiple of two integers a and b is denoted as lcm(a, b). Some older textbooks use [a, b].
Example
Multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, ...
Multiples of 6 are: 6, 12, 18, 24, 30, 36, ...
Common multiples of 4 and 6 are the numbers that are in both lists: 12, 24, 36, 48, 60, 72, ...
In this list, the smallest number is 12. Hence, the least common multiple is 12.
Applications
When adding, subtracting, or comparing simple fractions, the least common multiple of the denominators (often called the lowest common denominator) is used, because each of the fractions can be expressed as a fraction with this denominator. For example,
2/21 + 1/6 = 4/42 + 7/42 = 11/42,
where the denominator 42 was used, because it is the least common multiple of 21 and 6.
Gears problem
Suppose there are two meshing gears in a machine, having m and n teeth, respectively, and the gears are marked by a line segment drawn from the center of the first gear to the center of the second gear. When the gears begin rotating, the number of rotations the first gear must complete to realign the line segment can be calculated by using lcm(m, n). The first gear must complete lcm(m, n)/m rotations for the realignment. By that time, the second gear will have made lcm(m, n)/n rotations.
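A small sketch of this computation in Python (the tooth counts 12 and 18 are made up for illustration):

from math import gcd

m, n = 12, 18                 # hypothetical tooth counts
l = m * n // gcd(m, n)        # lcm(m, n) = 36
print(l // m, l // n)         # 3 rotations of the first gear, 2 of the second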
Planetary alignment
Suppose there are three planets revolving around a star which take l, m and n units of time, respectively, to complete their orbits. Assume that l, m and n are integers. Assuming the planets started moving around the star after an initial linear alignment, all the planets attain a linear alignment again after lcm(l, m, n) units of time. At this time, the first, second and third planet will have completed lcm(l, m, n)/l, lcm(l, m, n)/m and lcm(l, m, n)/n orbits, respectively, around the star.
Calculation
There are several ways to compute least common multiples.
Using the greatest common divisor
The least common multiple can be computed from the greatest common divisor (gcd) with the formula
lcm(a, b) = |ab| / gcd(a, b).
To avoid introducing integers that are larger than the result, it is convenient to use the equivalent formulas
lcm(a, b) = |a| × (|b| / gcd(a, b)) = |b| × (|a| / gcd(a, b)),
where the result of the division is always an integer.
These formulas are also valid when exactly one of a and b is 0, since gcd(a, 0) = |a|. However, if both a and b are 0, these formulas would cause division by zero; so, lcm(0, 0) = 0 must be considered as a special case.
To return to the example above,
lcm(21, 6) = 21 × 6 / gcd(21, 6) = 126 / 3 = 42.
There are fast algorithms, such as the Euclidean algorithm for computing the gcd that do not require the numbers to be factored. For very large integers, there are even faster algorithms for the three involved operations (multiplication, gcd, and division); see Fast multiplication. As these algorithms are more efficient with factors of similar size, it is more efficient to divide the largest argument of the lcm by the gcd of the arguments, as in the example above.
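A minimal Python implementation of the gcd-based formulas above (my own sketch; Python's standard library also provides math.lcm from version 3.9 onward). Dividing one argument by the gcd before multiplying keeps the intermediate product small.

from math import gcd

def lcm(a: int, b: int) -> int:
    if a == 0 and b == 0:                          # lcm(0, 0) = 0 as a special case
        return 0
    return abs(a) * (abs(b) // gcd(abs(a), abs(b)))

print(lcm(21, 6))                                  # 42, as in the example above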
Using prime factorization
The unique factorization theorem indicates that every positive integer greater than 1 can be written in only one way as a product of prime numbers. The prime numbers can be considered as the atomic elements which, when combined, make up a composite number.
For example:
90 = 2 × 3 × 3 × 5 = 2 × 3^2 × 5.
Here, the composite number 90 is made up of one atom of the prime number 2, two atoms of the prime number 3, and one atom of the prime number 5.
This fact can be used to find the lcm of a set of numbers.
Example: lcm(8,9,21)
Factor each number and express it as a product of prime number powers:
8 = 2^3, 9 = 3^2, 21 = 3^1 × 7^1.
The lcm will be the product of multiplying the highest power of each prime number together. The highest power of the three prime numbers 2, 3, and 7 is 2^3, 3^2, and 7^1, respectively. Thus,
lcm(8, 9, 21) = 2^3 × 3^2 × 7^1 = 8 × 9 × 7 = 504.
This method is not as efficient as reducing to the greatest common divisor, since there is no known general efficient algorithm for integer factorization.
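The following sketch (my own illustration, using naive trial division, which is exactly the inefficiency noted above) computes the lcm by keeping the highest power of each prime:

from collections import Counter

def factorize(n: int) -> Counter:
    # prime factorization by trial division: {prime: exponent}
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm_by_factorization(*numbers: int) -> int:
    highest = Counter()
    for n in numbers:
        for prime, exp in factorize(n).items():
            highest[prime] = max(highest[prime], exp)   # highest power of each prime
    result = 1
    for prime, exp in highest.items():
        result *= prime ** exp
    return result

print(lcm_by_factorization(8, 9, 21))   # 2^3 * 3^2 * 7 = 504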
The same method can also be illustrated with a Venn diagram as follows, with the prime factorization of each of the two numbers demonstrated in each circle and all factors they share in common in the intersection. The lcm then can be found by multiplying all of the prime numbers in the diagram.
Here is an example:
48 = 2 × 2 × 2 × 2 × 3,
180 = 2 × 2 × 3 × 3 × 5,
sharing two "2"s and a "3" in common:
Least common multiple = 2 × 2 × 2 × 2 × 3 × 3 × 5 = 720
Greatest common divisor = 2 × 2 × 3 = 12
Product = 2 × 2 × 2 × 2 × 3 × 2 × 2 × 3 × 3 × 5 = 8640
This also works for the greatest common divisor (gcd), except that instead of multiplying all of the numbers in the Venn diagram, one multiplies only the prime factors that are in the intersection. Thus the gcd of 48 and 180 is 2 × 2 × 3 = 12.
Formulas
Fundamental theorem of arithmetic
According to the fundamental theorem of arithmetic, every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors:
n = 2^n2 × 3^n3 × 5^n5 × 7^n7 ⋯,
where the exponents n2, n3, ... are non-negative integers; for example, 84 = 2^2 × 3^1 × 5^0 × 7^1 × 11^0 × 13^0 ⋯
Given two positive integers a = 2^a2 × 3^a3 × 5^a5 ⋯ and b = 2^b2 × 3^b3 × 5^b5 ⋯, their least common multiple and greatest common divisor are given by the formulas
lcm(a, b) = 2^max(a2, b2) × 3^max(a3, b3) × 5^max(a5, b5) ⋯
and
gcd(a, b) = 2^min(a2, b2) × 3^min(a3, b3) × 5^min(a5, b5) ⋯.
Since
min(x, y) + max(x, y) = x + y,
this gives
lcm(a, b) × gcd(a, b) = a × b.
In fact, every rational number can be written uniquely as the product of primes, if negative exponents are allowed. When this is done, the above formulas remain valid. For example, 1/6 = 2^−1 × 3^−1 and 4 = 2^2, so the formulas give gcd(1/6, 4) = 2^−1 × 3^−1 = 1/6 and lcm(1/6, 4) = 2^2 = 4, whose product is (1/6) × 4 = 2/3.
Lattice-theoretic
The positive integers may be partially ordered by divisibility: if a divides b (that is, if b is an integer multiple of a) write a ≤ b (or equivalently, b ≥ a). (Note that the usual magnitude-based definition of ≤ is not used here.)
Under this ordering, the positive integers become a lattice, with meet given by the gcd and join given by the lcm. The proof is straightforward, if a bit tedious; it amounts to checking that lcm and gcd satisfy the axioms for meet and join. Putting the lcm and gcd into this more general context establishes a duality between them:
If a formula involving integer variables, gcd, lcm, ≤ and ≥ is true, then the formula obtained by switching gcd with lcm and switching ≥ with ≤ is also true. (Remember ≤ is defined as divides).
The following pairs of dual formulas are special cases of general lattice-theoretic identities.
It can also be shown that this lattice is distributive; that is, lcm distributes over gcd and gcd distributes over lcm:
lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)) and gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)).
This identity is self-dual:
gcd(lcm(a, b), lcm(b, c), lcm(c, a)) = lcm(gcd(a, b), gcd(b, c), gcd(c, a)).
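These identities are easy to spot-check numerically; the following sketch (my own, over a small range of positive integers) verifies the distributive laws and the self-dual identity stated above:

from math import gcd
from functools import reduce
from itertools import product

def lcm(*xs):
    return reduce(lambda a, b: a * b // gcd(a, b), xs)

def gcd_all(*xs):
    return reduce(gcd, xs)

for a, b, c in product(range(1, 25), repeat=3):
    assert lcm(a, gcd(b, c)) == gcd(lcm(a, b), lcm(a, c))    # lcm distributes over gcd
    assert gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))    # gcd distributes over lcm
    assert gcd_all(lcm(a, b), lcm(b, c), lcm(c, a)) == lcm(gcd(a, b), gcd(b, c), gcd(c, a))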
Other
Let D be the product of ω(D) distinct prime numbers (that is, D is squarefree).
Then
|{(x, y) : lcm(x, y) = D}| = 3^ω(D),
where the absolute bars || denote the cardinality of a set.
If none of is zero, then
In commutative rings
The least common multiple can be defined generally over commutative rings as follows:
Let and be elements of a commutative ring . A common multiple of and is an element of such that both and divide (that is, there exist elements and of such that and ). A least common multiple of and is a common multiple that is minimal, in the sense that for any other common multiple of and , divides .
In general, two elements in a commutative ring can have no least common multiple or more than one. However, any two least common multiples of the same pair of elements are associates. In a unique factorization domain, any two elements have a least common multiple. In a principal ideal domain, the least common multiple of and can be characterised as a generator of the intersection of the ideals generated by and (the intersection of a collection of ideals is always an ideal).
| Mathematics | Basics | null |
17973 | https://en.wikipedia.org/wiki/Liquid%20crystal | Liquid crystal | Liquid crystal (LC) is a state of matter whose properties are between those of conventional liquids and those of solid crystals. For example, a liquid crystal can flow like a liquid, but its molecules may be oriented in a common direction as in a solid. There are many types of LC phases, which can be distinguished by their optical properties (such as textures). The contrasting textures arise due to molecules within one area of material ("domain") being oriented in the same direction but different areas having different orientations. An LC material may not always be in an LC state of matter (just as water may be ice or water vapor).
Liquid crystals can be divided into three main types: thermotropic, lyotropic, and metallotropic. Thermotropic and lyotropic liquid crystals consist mostly of organic molecules, although a few minerals are also known. Thermotropic LCs exhibit a phase transition into the LC phase as temperature changes. Lyotropic LCs exhibit phase transitions as a function of both temperature and concentration of molecules in a solvent (typically water). Metallotropic LCs are composed of both organic and inorganic molecules; their LC transition additionally depends on the inorganic-organic composition ratio.
Examples of LCs exist both in the natural world and in technological applications. Lyotropic LCs abound in living systems; many proteins and cell membranes are LCs, as well as the tobacco mosaic virus. LCs in the mineral world include solutions of soap and various related detergents, and some clays. Widespread liquid-crystal displays (LCDs) use liquid crystals.
History
In 1888, Austrian botanical physiologist Friedrich Reinitzer, working at the Karl-Ferdinands-Universität, examined the physico-chemical properties of various derivatives of cholesterol which now belong to the class of materials known as cholesteric liquid crystals. Previously, other researchers had observed distinct color effects when cooling cholesterol derivatives just above the freezing point, but had not associated it with a new phenomenon. Reinitzer perceived that color changes in a derivative cholesteryl benzoate were not the most peculiar feature. He found that cholesteryl benzoate does not melt in the same manner as other compounds, but has two melting points. At 145.5 °C it melts into a cloudy liquid, and at 178.5 °C it melts again and the cloudy liquid becomes clear. The phenomenon is reversible. Seeking help from a physicist, on March 14, 1888, he wrote to Otto Lehmann, at that time a Privatdozent in Aachen. They exchanged letters and samples. Lehmann examined the intermediate cloudy fluid, and reported seeing crystallites. Reinitzer's Viennese colleague von Zepharovich also indicated that the intermediate "fluid" was crystalline. The exchange of letters with Lehmann ended on April 24, with many questions unanswered. Reinitzer presented his results, with credits to Lehmann and von Zepharovich, at a meeting of the Vienna Chemical Society on May 3, 1888.
By that time, Reinitzer had discovered and described three important features of cholesteric liquid crystals (the name coined by Otto Lehmann in 1904): the existence of two melting points, the reflection of circularly polarized light, and the ability to rotate the polarization direction of light.
After his accidental discovery, Reinitzer did not pursue studying liquid crystals further. The research was continued by Lehmann, who realized that he had encountered a new phenomenon and was in a position to investigate it: In his postdoctoral years he had acquired expertise in crystallography and microscopy. Lehmann started a systematic study, first of cholesteryl benzoate, and then of related compounds which exhibited the double-melting phenomenon. He was able to make observations in polarized light, and his microscope was equipped with a hot stage (sample holder equipped with a heater) enabling high temperature observations. The intermediate cloudy phase clearly sustained flow, but other features, particularly the signature under a microscope, convinced Lehmann that he was dealing with a solid. By the end of August 1889 he had published his results in the Zeitschrift für Physikalische Chemie.
Lehmann's work was continued and significantly expanded by the German chemist Daniel Vorländer, who from the beginning of the 20th century until he retired in 1935, had synthesized most of the liquid crystals known. However, liquid crystals were not popular among scientists and the material remained a pure scientific curiosity for about 80 years.
After World War II, work on the synthesis of liquid crystals was restarted at university research laboratories in Europe. George William Gray, a prominent researcher of liquid crystals, began investigating these materials in England in the late 1940s. His group synthesized many new materials that exhibited the liquid crystalline state and developed a better understanding of how to design molecules that exhibit the state. His book Molecular Structure and the Properties of Liquid Crystals became a guidebook on the subject. One of the first U.S. chemists to study liquid crystals was Glenn H. Brown, starting in 1953 at the University of Cincinnati and later at Kent State University. In 1965, he organized the first international conference on liquid crystals, in Kent, Ohio, with about 100 of the world's top liquid crystal scientists in attendance. This conference marked the beginning of a worldwide effort to perform research in this field, which soon led to the development of practical applications for these unique materials.
Liquid crystal materials became a focus of research in the development of flat panel electronic displays beginning in 1962 at RCA Laboratories. When physical chemist Richard Williams applied an electric field to a thin layer of a nematic liquid crystal at 125 °C, he observed the formation of a regular pattern that he called domains (now known as Williams Domains). This led his colleague George H. Heilmeier to perform research on a liquid crystal-based flat panel display to replace the cathode ray vacuum tube used in televisions. But the para-azoxyanisole that Williams and Heilmeier used exhibits the nematic liquid crystal state only above 116 °C, which made it impractical to use in a commercial display product. A material that could be operated at room temperature was clearly needed.
In 1966, Joel E. Goldmacher and Joseph A. Castellano, research chemists in Heilmeier's group at RCA, discovered that mixtures made exclusively of nematic compounds that differed only in the number of carbon atoms in the terminal side chains could yield room-temperature nematic liquid crystals. A ternary mixture of Schiff base compounds resulted in a material that had a nematic range of 22–105 °C. Operation at room temperature enabled the first practical display device to be made. The team then proceeded to prepare numerous mixtures of nematic compounds, many of which had much lower melting points. This technique of mixing nematic compounds to obtain a wide operating temperature range eventually became the industry standard and is still used to tailor materials to meet specific applications.
In 1969, Hans Kelker succeeded in synthesizing a substance that had a nematic phase at room temperature, N-(4-methoxybenzylidene)-4-butylaniline (MBBA), which is one of the most popular subjects of liquid crystal research. The next step to commercialization of liquid-crystal displays was the synthesis of further chemically stable substances (cyanobiphenyls) with low melting temperatures by George Gray. That work with Ken Harrison and the UK MOD (RRE Malvern), in 1973, led to design of new materials resulting in rapid adoption of small area LCDs within electronic products.
These molecules are rod-shaped, some created in the laboratory and some appearing spontaneously in nature. Since then, two new types of LC molecules have been synthesized: disc-shaped (by Sivaramakrishna Chandrasekhar in India in 1977) and cone or bowl shaped (predicted by Lui Lam in China in 1982 and synthesized in Europe in 1985).
In 1991, when liquid crystal displays were already well established, Pierre-Gilles de Gennes working at the Université Paris-Sud received the Nobel Prize in physics "for discovering that methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers".
Design of liquid crystalline materials
A large number of chemical compounds are known to exhibit one or several liquid crystalline phases. Despite significant differences in chemical composition, these molecules have some common features in chemical and physical properties. There are three types of thermotropic liquid crystals: discotic, conic (bowlic), and rod-shaped molecules. Discotics are disc-like molecules consisting of a flat core of adjacent aromatic rings, whereas the core in a conic LC is not flat, but is shaped like a rice bowl (a three-dimensional object). This allows for two dimensional columnar ordering, for both discotic and conic LCs. Rod-shaped molecules have an elongated, anisotropic geometry which allows for preferential alignment along one spatial direction.
The molecular shape should be relatively thin, flat or conic, especially within rigid molecular frameworks.
The molecular length should be at least 1.3 nm, consistent with the presence of long alkyl groups on many room-temperature liquid crystals.
The structure should not be branched or angular, except for the conic LC.
A low melting point is preferable in order to avoid metastable, monotropic liquid crystalline phases. Low-temperature mesomorphic behavior in general is technologically more useful, and alkyl terminal groups promote this.
An extended, structurally rigid, highly anisotropic shape seems to be the main criterion for liquid crystalline behavior, and as a result many liquid crystalline materials are based on benzene rings.
Liquid-crystal phases
The various liquid-crystal phases (called mesophases together with plastic crystal phases) can be characterized by the type of ordering. One can distinguish positional order (whether molecules are arranged in any sort of ordered lattice) and orientational order (whether molecules are mostly pointing in the same direction). Liquid crystals are characterized by orientational order, but only partial or completely absent positional order. In contrast, materials with positional order but no orientational order are known as plastic crystals. Most thermotropic LCs will have an isotropic phase at high temperature: heating will eventually drive them into a conventional liquid phase characterized by random and isotropic molecular ordering and fluid-like flow behavior. Under other conditions (for instance, lower temperature), a LC might inhabit one or more phases with significant anisotropic orientational structure and short-range orientational order while still having an ability to flow.
The ordering of liquid crystals extends up to the entire domain size, which may be on the order of micrometers, but usually not to the macroscopic scale as often occurs in classical crystalline solids. However some techniques, such as the use of boundaries or an applied electric field, can be used to enforce a single ordered domain in a macroscopic liquid crystal sample. The orientational ordering in a liquid crystal might extend along only one dimension, with the material being essentially disordered in the other two directions.
Thermotropic liquid crystals
Thermotropic phases are those that occur in a certain temperature range. If the temperature rise is too high, thermal motion will destroy the delicate cooperative ordering of the LC phase, pushing the material into a conventional isotropic liquid phase. At too low temperature, most LC materials will form a conventional crystal. Many thermotropic LCs exhibit a variety of phases as temperature is changed. For instance, a particular type of LC molecule (called a mesogen) may exhibit various smectic phases followed by the nematic phase and finally the isotropic phase as temperature is increased. An example of a compound displaying thermotropic LC behavior is para-azoxyanisole.
Nematic phase
The simplest liquid crystal phase is the nematic. In a nematic phase, organic molecules lack a crystalline positional order, but do self-align with their long axes roughly parallel. The molecules are free to flow and their center of mass positions are randomly distributed as in a liquid, but their orientation is constrained to form a long-range directional order.
The word nematic comes from the Greek νῆμα (nema), which means "thread". This term originates from the disclinations: thread-like topological defects observed in nematic phases.
Nematics also exhibit so-called "hedgehog" topological defects. In two dimensions, there are topological defects with topological charges +1/2 and −1/2. Due to hydrodynamics, the +1/2 defect moves considerably faster than the −1/2 defect. When placed close to each other, the defects attract; upon collision, they annihilate.
Most nematic phases are uniaxial: they have one axis (called a directrix) that is longer and preferred, with the other two being equivalent (can be approximated as cylinders or rods). However, some liquid crystals are biaxial nematic, meaning that in addition to orienting their long axis, they also orient along a secondary axis. Nematic crystals have fluidity similar to that of ordinary (isotropic) liquids but they can be easily aligned by an external magnetic or electric field. Aligned nematics have the optical properties of uniaxial crystals and this makes them extremely useful in liquid-crystal displays (LCD).
Nematic phases are also known in non-molecular systems: at high magnetic fields, electrons flow in bundles or stripes to create an "electronic nematic" form of matter.
Smectic phases
The smectic phases, which are found at lower temperatures than the nematic, form well-defined layers that can slide over one another in a manner similar to that of soap. The word "smectic" originates from the Latin word "smecticus", meaning cleaning, or having soap-like properties.
The smectics are thus positionally ordered along one direction. In the Smectic A phase, the molecules are oriented along the layer normal, while in the Smectic C phase they are tilted away from it. These phases are liquid-like within the layers. There are many different smectic phases, all characterized by different types and degrees of positional and orientational order. Beyond organic molecules, smectic ordering has also been reported to occur within colloidal suspensions of 2-D materials or nanosheets. One example of smectic LCs is p,p'-dinonylazobenzene.
Chiral phases or twisted nematics
The chiral nematic phase exhibits chirality (handedness). This phase is often called the cholesteric phase because it was first observed for cholesterol derivatives. Only chiral molecules can give rise to such a phase. This phase exhibits a twisting of the molecules perpendicular to the director, with the molecular axis parallel to the director. The finite twist angle between adjacent molecules is due to their asymmetric packing, which results in longer-range chiral order. In the smectic C* phase (an asterisk denotes a chiral phase), the molecules have positional ordering in a layered structure (as in the other smectic phases), with the molecules tilted by a finite angle with respect to the layer normal. The chirality induces a finite azimuthal twist from one layer to the next, producing a spiral twisting of the molecular axis along the layer normal, hence they are also called twisted nematics.
The chiral pitch, p, refers to the distance over which the LC molecules undergo a full 360° twist (but note that the structure of the chiral nematic phase repeats itself every half-pitch, since in this phase directors at 0° and ±180° are equivalent). The pitch, p, typically changes when the temperature is altered or when other molecules are added to the LC host (an achiral LC host material will form a chiral phase if doped with a chiral material), allowing the pitch of a given material to be tuned accordingly. In some liquid crystal systems, the pitch is of the same order as the wavelength of visible light. This causes these systems to exhibit unique optical properties, such as Bragg reflection and low-threshold laser emission, and these properties are exploited in a number of optical applications. For the case of Bragg reflection only the lowest-order reflection is allowed if the light is incident along the helical axis, whereas for oblique incidence higher-order reflections become permitted. Cholesteric liquid crystals also exhibit the unique property that they reflect circularly polarized light when it is incident along the helical axis and elliptically polarized if it comes in obliquely.
Blue phases
Blue phases are liquid crystal phases that appear in the temperature range between a chiral nematic phase and an isotropic liquid phase. Blue phases have a regular three-dimensional cubic structure of defects with lattice periods of several hundred nanometers, and thus they exhibit selective Bragg reflections in the wavelength range of visible light corresponding to the cubic lattice. It was theoretically predicted in 1981 that these phases can possess icosahedral symmetry similar to quasicrystals.
Although blue phases are of interest for fast light modulators or tunable photonic crystals, they exist in a very narrow temperature range, usually less than a few kelvins. Recently the stabilization of blue phases over a temperature range of more than 60 K including room temperature (260–326 K) has been demonstrated. Blue phases stabilized at room temperature allow electro-optical switching with response times of the order of 10^−4 s. In May 2008, the first blue phase mode LCD panel was developed.
Blue phase crystals, being a periodic cubic structure with a bandgap in the visible wavelength range, can be considered as 3D photonic crystals. Producing ideal blue phase crystals in large volumes is still problematic, since the produced crystals are usually polycrystalline (platelet structure) or the single crystal size is limited (in the micrometer range). Recently, blue phases obtained as ideal 3D photonic crystals in large volumes have been stabilized and produced with different controlled crystal lattice orientations.
Discotic phases
Disk-shaped LC molecules can orient themselves in a layer-like fashion known as the discotic nematic phase. If the disks pack into stacks, the phase is called a discotic columnar. The columns themselves may be organized into rectangular or hexagonal arrays. Chiral discotic phases, similar to the chiral nematic phase, are also known.
Conic phases
Conic LC molecules, like in discotics, can form columnar phases. Other phases, such as nonpolar nematic, polar nematic, stringbean, donut and onion phases, have been predicted. Conic phases, except nonpolar nematic, are polar phases.
Lyotropic liquid crystals
A lyotropic liquid crystal consists of two or more components that exhibit liquid-crystalline properties in certain concentration ranges. In the lyotropic phases, solvent molecules fill the space around the compounds to provide fluidity to the system. In contrast to thermotropic liquid crystals, these lyotropics have another degree of freedom of concentration that enables them to induce a variety of different phases.
A compound that has two immiscible hydrophilic and hydrophobic parts within the same molecule is called an amphiphilic molecule. Many amphiphilic molecules show lyotropic liquid-crystalline phase sequences depending on the volume balances between the hydrophilic part and hydrophobic part. These structures are formed through the micro-phase segregation of two incompatible components on a nanometer scale. Soap is an everyday example of a lyotropic liquid crystal.
The content of water or other solvent molecules changes the self-assembled structures. At very low amphiphile concentration, the molecules will be dispersed randomly without any ordering. At slightly higher (but still low) concentration, amphiphilic molecules will spontaneously assemble into micelles or vesicles. This is done so as to 'hide' the hydrophobic tail of the amphiphile inside the micelle core, exposing a hydrophilic (water-soluble) surface to aqueous solution. These spherical objects do not order themselves in solution, however. At higher concentration, the assemblies will become ordered. A typical phase is a hexagonal columnar phase, where the amphiphiles form long cylinders (again with a hydrophilic surface) that arrange themselves into a roughly hexagonal lattice. This is called the middle soap phase. At still higher concentration, a lamellar phase (neat soap phase) may form, wherein extended sheets of amphiphiles are separated by thin layers of water. For some systems, a cubic (also called viscous isotropic) phase may exist between the hexagonal and lamellar phases, wherein spheres are formed that create a dense cubic lattice. These spheres may also be connected to one another, forming a bicontinuous cubic phase.
The objects created by amphiphiles are usually spherical (as in the case of micelles), but may also be disc-like (bicelles), rod-like, or biaxial (all three micelle axes are distinct). These anisotropic self-assembled nano-structures can then order themselves in much the same way as thermotropic liquid crystals do, forming large-scale versions of all the thermotropic phases (such as a nematic phase of rod-shaped micelles).
For some systems, at high concentrations, inverse phases are observed. That is, one may generate an inverse hexagonal columnar phase (columns of water encapsulated by amphiphiles) or an inverse micellar phase (a bulk liquid crystal sample with spherical water cavities).
A generic progression of phases, going from low to high amphiphile concentration, is:
Discontinuous cubic phase (micellar cubic phase)
Hexagonal phase (hexagonal columnar phase) (middle phase)
Lamellar phase
Bicontinuous cubic phase
Reverse hexagonal columnar phase
Inverse cubic phase (Inverse micellar phase)
Even within the same phases, their self-assembled structures are tunable by the concentration: for example, in lamellar phases, the layer distances increase with the solvent volume. Since lyotropic liquid crystals rely on a subtle balance of intermolecular interactions, it is more difficult to analyze their structures and properties than those of thermotropic liquid crystals.
Similar phases and characteristics can be observed in immiscible diblock copolymers.
Metallotropic liquid crystals
Liquid crystal phases can also be based on low-melting inorganic phases like ZnCl2 that have a structure formed of linked tetrahedra and easily form glasses. The addition of long chain soap-like molecules leads to a series of new phases that show a variety of liquid crystalline behavior both as a function of the inorganic-organic composition ratio and of temperature. This class of materials has been named metallotropic.
Laboratory analysis of mesophases
Thermotropic mesophases are detected and characterized by two major methods. The original method was the use of thermal optical microscopy, in which a small sample of the material is placed between two crossed polarizers and then heated and cooled. As the isotropic phase does not significantly affect the polarization of the light, it appears very dark, whereas the crystal and liquid crystal phases both polarize the light in a uniform way, leading to brightness and color gradients. This method allows for the characterization of the particular phase, as the different phases are defined by their particular order, which must be observed. The second method, differential scanning calorimetry (DSC), allows for more precise determination of phase transitions and transition enthalpies. In DSC, a small sample is heated in a way that generates a very precise change in temperature with respect to time. During phase transitions, the heat flow required to maintain this heating or cooling rate will change. These changes can be observed and attributed to various phase transitions, such as key liquid crystal transitions.
Lyotropic mesophases are analyzed in a similar fashion, though these experiments are somewhat more complex, as the concentration of mesogen is a key factor. These experiments are run at various concentrations of mesogen in order to analyze that impact.
Biological liquid crystals
Lyotropic liquid-crystalline phases are abundant in living systems, the study of which is referred to as lipid polymorphism. Accordingly, lyotropic liquid crystals attract particular attention in the field of biomimetic chemistry. In particular, biological membranes and cell membranes are a form of liquid crystal. Their constituent molecules (e.g. phospholipids) are perpendicular to the membrane surface, yet the membrane is flexible. These lipids vary in shape (see page on lipid polymorphism). The constituent molecules can inter-mingle easily, but tend not to leave the membrane due to the high energy requirement of this process. Lipid molecules can flip from one side of the membrane to the other, this process being catalyzed by flippases and floppases (depending on the direction of movement). These liquid crystal membrane phases can also host important proteins such as receptors freely "floating" inside, or partly outside, the membrane, e.g. CTP:phosphocholine cytidylyltransferase (CCT).
Many other biological structures exhibit liquid-crystal behavior. For instance, the concentrated protein solution that is extruded by a spider to generate silk is, in fact, a liquid crystal phase. The precise ordering of molecules in silk is critical to its renowned strength. DNA and many polypeptides, including actively-driven cytoskeletal filaments, can also form liquid crystal phases. Monolayers of elongated cells have also been described to exhibit liquid-crystal behavior, and the associated topological defects have been associated with biological consequences, including cell death and extrusion. Together, these biological applications of liquid crystals form an important part of current academic research.
Mineral liquid crystals
Examples of liquid crystals can also be found in the mineral world, most of them being lyotropic. The first discovered was vanadium(V) oxide, by Zocher in 1925. Since then, few others have been discovered and studied in detail. The existence of a true nematic phase in the case of the smectite clays family was raised by Langmuir in 1938, but remained an open question for a very long time and was only confirmed recently.
With the rapid development of nanosciences, and the synthesis of many new anisotropic nanoparticles, the number of such mineral liquid crystals is increasing quickly, with, for example, carbon nanotubes and graphene. A lamellar phase was even discovered, H3Sb3P2O14, which exhibits hyperswelling up to ~250 nm for the interlamellar distance.
Pattern formation in liquid crystals
Anisotropy of liquid crystals is a property not observed in other fluids. This anisotropy makes flows of liquid crystals behave quite differently from those of ordinary fluids. For example, injection of a flux of a liquid crystal between two close parallel plates (viscous fingering) causes the orientation of the molecules to couple with the flow, with the resulting emergence of dendritic patterns. This anisotropy is also manifested in the interfacial energy (surface tension) between different liquid crystal phases. This anisotropy determines the equilibrium shape at the coexistence temperature, and is so strong that usually facets appear. When temperature is changed one of the phases grows, forming different morphologies depending on the temperature change. Since growth is controlled by heat diffusion, anisotropy in thermal conductivity favors growth in specific directions, which also has an effect on the final shape.
Theoretical treatment of liquid crystals
Microscopic theoretical treatment of fluid phases can become quite complicated, owing to the high material density, meaning that strong interactions, hard-core repulsions, and many-body correlations cannot be ignored. In the case of liquid crystals, anisotropy in all of these interactions further complicates analysis. There are a number of fairly simple theories, however, that can at least predict the general behavior of the phase transitions in liquid crystal systems.
Director
As described above, nematic liquid crystals are composed of rod-like molecules with the long axes of neighboring molecules aligned approximately parallel to one another. To describe this anisotropic structure, a dimensionless unit vector n, called the director, is introduced to represent the direction of preferred orientation of the molecules in the neighborhood of any point. Because there is no physical polarity along the director axis, n and −n are fully equivalent.
Order parameter
The description of liquid crystals involves an analysis of order. A second-rank symmetric traceless tensor order parameter, the Q tensor, is used to describe the orientational order of the most general biaxial nematic liquid crystal. However, to describe the more common case of uniaxial nematic liquid crystals, a scalar order parameter is sufficient. To make this quantitative, an orientational order parameter is usually defined based on the average of the second Legendre polynomial:

S = \langle P_2(\cos\theta) \rangle = \left\langle \frac{3\cos^2\theta - 1}{2} \right\rangle

where θ is the angle between the liquid-crystal molecular axis and the local director (which is the 'preferred direction' in a volume element of a liquid crystal sample, also representing its local optical axis). The brackets denote both a temporal and spatial average. This definition is convenient, since for a completely random and isotropic sample S = 0, whereas for a perfectly aligned sample S = 1. For a typical liquid crystal sample, S is on the order of 0.3 to 0.8, and generally decreases as the temperature is raised. In particular, a sharp drop of the order parameter to 0 is observed when the system undergoes a phase transition from an LC phase into the isotropic phase. The order parameter can be measured experimentally in a number of ways; for instance, diamagnetism, birefringence, Raman scattering, NMR and EPR can be used to determine S.
The order of a liquid crystal could also be characterized by using other even Legendre polynomials (all the odd polynomials average to zero since the director can point in either of two antiparallel directions). These higher-order averages are more difficult to measure, but can yield additional information about molecular ordering.
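As an illustration of how these orientational order parameters can be computed in practice, the following sketch estimates ⟨P2⟩ and ⟨P4⟩ from a set of molecular-axis unit vectors, assuming the director is known and taken along the z-axis; the data are synthetic and the function and variable names are illustrative, not from any particular simulation package.

```python
# Minimal sketch: estimating the orientational order parameters <P2> and <P4>
# from molecular-axis unit vectors, assuming a known director along z.
# The sample data are synthetic (axes scattered around the z-axis).
import numpy as np

rng = np.random.default_rng(0)

def legendre_order_parameters(axes, director):
    """Return (<P2>, <P4>) for unit vectors `axes` relative to `director`."""
    director = director / np.linalg.norm(director)
    cos_t = axes @ director                       # cos(theta) for each molecule
    p2 = 0.5 * (3.0 * cos_t**2 - 1.0)
    p4 = 0.125 * (35.0 * cos_t**4 - 30.0 * cos_t**2 + 3.0)
    return p2.mean(), p4.mean()

# Synthetic sample: molecular axes scattered around the z-axis.
n_molecules = 10_000
tilt = rng.normal(scale=0.4, size=n_molecules)          # polar angle spread (rad)
phi = rng.uniform(0.0, 2.0 * np.pi, size=n_molecules)   # azimuth
axes = np.column_stack([np.sin(tilt) * np.cos(phi),
                        np.sin(tilt) * np.sin(phi),
                        np.cos(tilt)])

S, P4 = legendre_order_parameters(axes, np.array([0.0, 0.0, 1.0]))
print(f"<P2> = {S:.3f}, <P4> = {P4:.3f}")   # <P2> near 1 = well aligned, near 0 = isotropic
```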
A positional order parameter is also used to describe the ordering of a liquid crystal. It is characterized by the variation of the density of the center of mass of the liquid crystal molecules along a given vector. In the case of positional variation along the z-axis, the density is often given by:

\rho(z) = \rho_0 + \rho_1 \cos(q_s z - \varphi)

The complex positional order parameter is defined as \psi(\mathbf{r}) = \rho_1(\mathbf{r})\, e^{i\varphi(\mathbf{r})}, with \rho_0 the average density. Typically only the first two terms are kept and higher-order terms are ignored, since most phases can be described adequately using sinusoidal functions. For a perfect nematic \psi = 0, and for a smectic phase \psi takes on complex values. The complex nature of this order parameter allows for many parallels between nematic-to-smectic phase transitions and conductor-to-superconductor transitions.
Onsager hard-rod model
A simple model which predicts lyotropic phase transitions is the hard-rod model proposed by Lars Onsager. This theory considers the volume excluded from the center-of-mass of one idealized cylinder as it approaches another. Specifically, if the cylinders are oriented parallel to one another, there is very little volume that is excluded from the center-of-mass of the approaching cylinder (it can come quite close to the other cylinder). If, however, the cylinders are at some angle to one another, then there is a large volume surrounding the cylinder which the approaching cylinder's center-of-mass cannot enter (due to the hard-rod repulsion between the two idealized objects). Thus, this angular arrangement sees a decrease in the net positional entropy of the approaching cylinder (there are fewer states available to it).
The fundamental insight here is that, whilst parallel arrangements of anisotropic objects lead to a decrease in orientational entropy, there is an increase in positional entropy. Thus, in some cases, greater positional order will be entropically favorable. This theory thus predicts that a solution of rod-shaped objects will undergo a phase transition, at sufficient concentration, into a nematic phase. Although this model is conceptually helpful, its mathematical formulation makes several assumptions that limit its applicability to real systems. An extension of Onsager's theory was proposed by Flory to account for non-entropic effects.
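To make the excluded-volume argument concrete, the rough sketch below compares the approximate volume excluded to one long hard rod by another at several mutual angles, using the standard thin-rod scaling of roughly 2L²D|sin γ|; the dimensions are arbitrary illustrative units.

```python
# Illustrative sketch of the excluded-volume argument behind Onsager's
# hard-rod model: for two long thin cylinders (length L >> diameter D),
# the volume excluded to one rod's center of mass by the other scales as
# roughly 2 * L**2 * D * |sin(gamma)|, where gamma is the angle between them.
import numpy as np

def excluded_volume(L, D, gamma):
    """Approximate excluded volume for two hard rods crossing at angle gamma (rad)."""
    return 2.0 * L**2 * D * abs(np.sin(gamma))

L, D = 100.0, 1.0   # arbitrary units, long thin rods
for gamma_deg in (0, 10, 45, 90):
    v = excluded_volume(L, D, np.radians(gamma_deg))
    print(f"gamma = {gamma_deg:3d} deg -> excluded volume ~ {v:10.1f}")
# Parallel rods (gamma -> 0) exclude far less volume, which is why alignment
# increases the positional entropy available at high rod concentration.
```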
Maier–Saupe mean field theory
This statistical theory, proposed by Alfred Saupe and Wilhelm Maier, includes contributions from an attractive intermolecular potential arising from an induced dipole moment between adjacent rod-like liquid crystal molecules. The anisotropic attraction stabilizes parallel alignment of neighboring molecules, and the theory then considers a mean-field average of the interaction. Solved self-consistently, this theory predicts thermotropic nematic-isotropic phase transitions, consistent with experiment. Maier–Saupe mean field theory has been extended to high-molecular-weight liquid crystals by incorporating the bending stiffness of the molecules and using the method of path integrals in polymer science.
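A minimal numerical sketch of the Maier–Saupe self-consistency condition is shown below, assuming the standard mean-field form in which the order parameter S equals the Boltzmann-weighted average of P2(cos θ) with reduced coupling u = U0/kBT; the grid sizes, seed value, and quoted critical coupling are illustrative.

```python
# Minimal sketch of the Maier-Saupe self-consistency condition
#   S = <P2(cos theta)>,
# where the average uses the mean-field Boltzmann weight
# exp(u * S * P2(cos theta)) with reduced coupling u = U0 / (kB * T).
import numpy as np

def p2(x):
    return 0.5 * (3.0 * x**2 - 1.0)

def maier_saupe_order(u, n_iter=300, n_grid=2001):
    """Fixed-point iteration for the nematic order parameter at coupling u."""
    x = np.linspace(0.0, 1.0, n_grid)      # x = cos(theta)
    S = 0.8                                 # seed in the ordered branch
    for _ in range(n_iter):
        w = np.exp(u * S * p2(x))           # mean-field Boltzmann weight
        S = (p2(x) * w).sum() / w.sum()     # uniform grid, dx cancels in the ratio
    return S

for u in (3.0, 4.0, 4.55, 5.0, 6.0):
    print(f"u = U0/kT = {u:4.2f} -> S = {maier_saupe_order(u):.3f}")
# Above a critical coupling (around u ~ 4.5, i.e. low enough temperature) a
# nonzero ordered solution appears, mimicking the nematic-isotropic transition.
```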
McMillan's model
McMillan's model, proposed by William McMillan, is an extension of the Maier–Saupe mean field theory used to describe the phase transition of a liquid crystal from a nematic to a smectic A phase. It predicts that the phase transition can be either continuous or discontinuous depending on the strength of the short-range interaction between the molecules. As a result, it allows for a triple critical point where the nematic, isotropic, and smectic A phases meet. Although it predicts the existence of a triple critical point, it does not successfully predict its value. The model utilizes two order parameters that describe the orientational and positional order of the liquid crystal. The first is simply the average of the second Legendre polynomial, and the second order parameter is given by:

\sigma = \left\langle \cos\!\left(\frac{2\pi z_i}{d}\right) \left( \frac{3}{2}\cos^2\theta_i - \frac{1}{2} \right) \right\rangle

The values z_i, θ_i, and d are the position of the molecule, the angle between the molecular axis and the director, and the layer spacing. The postulated potential energy of a single molecule is given by:

U_i(\theta_i, z_i) = -U_0 \left( S + \alpha\sigma \cos\!\left(\frac{2\pi z_i}{d}\right) \right) \left( \frac{3}{2}\cos^2\theta_i - \frac{1}{2} \right)

Here the constant α quantifies the strength of the interaction between adjacent molecules. The potential is then used to derive the thermodynamic properties of the system, assuming thermal equilibrium. It results in two self-consistency equations that must be solved numerically, the solutions of which are the three stable phases of the liquid crystal.
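The following rough sketch illustrates how the two self-consistency equations can be iterated numerically for the pair (S, σ), assuming the single-molecule potential written above; the reduced temperature, coupling α, grid sizes, and seeds are illustrative choices rather than values from McMillan's original work.

```python
# Rough numerical sketch of the two coupled self-consistency equations of
# McMillan's model, assuming the single-molecule potential
#   U(theta, z) = -U0 * (S + alpha * sigma * cos(2*pi*z/d)) * P2(cos theta).
# Averages over z and cos(theta) use the Boltzmann weight exp(-U / (kB*T)).
import numpy as np

def p2(x):
    return 0.5 * (3.0 * x**2 - 1.0)

def mcmillan(alpha, t, n_iter=400, n_grid=201):
    """Return (S, sigma) at reduced temperature t = kB*T/U0 and coupling alpha."""
    x = np.linspace(0.0, 1.0, n_grid)                    # x = cos(theta)
    z = np.linspace(0.0, 1.0, n_grid, endpoint=False)    # z/d over one layer
    X, Z = np.meshgrid(x, z, indexing="ij")
    cos_z = np.cos(2.0 * np.pi * Z)
    S, sigma = 0.8, 0.8                                  # seeds in the ordered branch
    for _ in range(n_iter):
        w = np.exp((S + alpha * sigma * cos_z) * p2(X) / t)   # exp(-U/kT) with t = kT/U0
        S = (p2(X) * w).sum() / w.sum()
        sigma = (cos_z * p2(X) * w).sum() / w.sum()
    return S, sigma

for t in (0.18, 0.21, 0.23):
    S, sigma = mcmillan(alpha=0.85, t=t)
    print(f"t = {t:.2f}: S = {S:.3f}, sigma = {sigma:.3f}")
# sigma > 0 together with S > 0 signals the smectic A phase; sigma ~ 0 with
# S > 0 signals the nematic; both ~ 0 corresponds to the isotropic phase.
```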
Elastic continuum theory
In this formalism, a liquid crystal material is treated as a continuum; molecular details are entirely ignored. Rather, this theory considers perturbations to a presumed oriented sample. The distortions of the liquid crystal are commonly described by the Frank free energy density. One can identify three types of distortions that could occur in an oriented sample: (1) twists of the material, where neighboring molecules are forced to be angled with respect to one another, rather than aligned; (2) splay of the material, where bending occurs perpendicular to the director; and (3) bend of the material, where the distortion is parallel to the director and molecular axis. All three of these types of distortions incur an energy penalty. They are distortions that are induced by the boundary conditions at domain walls or the enclosing container. The response of the material can then be decomposed into terms based on the elastic constants corresponding to the three types of distortions. Elastic continuum theory is an effective tool for modeling liquid crystal devices and lipid bilayers.
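For reference, the Frank free energy density mentioned above is commonly written in the following standard form (not given explicitly in the text), with K1, K2, and K3 the elastic constants for splay, twist, and bend respectively:

```latex
% Standard form of the Frank free energy density for a nematic with
% director field n(r); K1, K2, K3 are the splay, twist, and bend
% elastic constants corresponding to the three distortion types above.
\mathcal{F}_{\text{elastic}} =
  \tfrac{1}{2} K_1 \,(\nabla \cdot \mathbf{n})^2
+ \tfrac{1}{2} K_2 \,\bigl(\mathbf{n} \cdot (\nabla \times \mathbf{n})\bigr)^2
+ \tfrac{1}{2} K_3 \,\bigl|\mathbf{n} \times (\nabla \times \mathbf{n})\bigr|^2
```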
External influences on liquid crystals
Scientists and engineers are able to use liquid crystals in a variety of applications because external perturbations can cause significant changes in the macroscopic properties of the liquid crystal system. Both electric and magnetic fields can be used to induce these changes. The magnitude of the fields, as well as the speed at which the molecules align, are important characteristics for industrial applications. Special surface treatments can be used in liquid crystal devices to force specific orientations of the director.
Electric and magnetic field effects
The ability of the director to align along an external field is caused by the electric nature of the molecules. Permanent electric dipoles result when one end of a molecule has a net positive charge while the other end has a net negative charge. When an external electric field is applied to the liquid crystal, the dipole molecules tend to orient themselves along the direction of the field.
Even if a molecule does not form a permanent dipole, it can still be influenced by an electric field. In some cases, the field produces slight re-arrangement of electrons and protons in molecules such that an induced electric dipole results. While not as strong as permanent dipoles, orientation with the external field still occurs.
The response of any system to an external electric field is

D_i = \varepsilon_0 E_i + P_i

where E_i, D_i and P_i are the components of the electric field, electric displacement field and polarization density. The electric energy per volume stored in the system is

E_{elec} = -\tfrac{1}{2} D_i E_i

(summation over the doubly appearing index i). In nematic liquid crystals, the polarization and the electric displacement both depend linearly on the direction of the electric field. The polarization should be even in the director n, since liquid crystals are invariant under reflections of n. The most general form to express D is

D_i = \varepsilon_\perp E_i + (\varepsilon_\parallel - \varepsilon_\perp)(\mathbf{n}\cdot\mathbf{E})\, n_i

(summation over the index i) with ε∥ and ε⊥ the electric permittivity parallel and perpendicular to the director n. Then the density of energy is (ignoring the constant terms that do not contribute to the dynamics of the system)

E_{elec} = -\tfrac{1}{2}(\varepsilon_\parallel - \varepsilon_\perp)(\mathbf{n}\cdot\mathbf{E})^2

(summation over i). If ε∥ − ε⊥ is positive, then the minimum of the energy is achieved when E and n are parallel. This means that the system will favor aligning the liquid crystal with the externally applied electric field. If ε∥ − ε⊥ is negative, then the minimum of the energy is achieved when E and n are perpendicular (in nematics the perpendicular orientation is degenerate, making possible the emergence of vortices).
The difference Δε = ε∥ − ε⊥ is called the dielectric anisotropy and is an important parameter in liquid crystal applications. There are both Δε > 0 and Δε < 0 commercial liquid crystals; 5CB and the E7 liquid crystal mixture are two commonly used Δε > 0 liquid crystals, while MBBA is a common Δε < 0 liquid crystal.
The effects of magnetic fields on liquid crystal molecules are analogous to those of electric fields. Because magnetic fields are generated by moving electric charges, permanent magnetic dipoles are produced by electrons moving about atoms. When a magnetic field is applied, the molecules will tend to align with or against the field. Electromagnetic radiation, e.g. UV-visible light, can influence light-responsive liquid crystals, which typically carry at least one photo-switchable unit.
Surface preparations
In the absence of an external field, the director of a liquid crystal is free to point in any direction. It is possible, however, to force the director to point in a specific direction by introducing an outside agent to the system. For example, when a thin polymer coating (usually a polyimide) is spread on a glass substrate and rubbed in a single direction with a cloth, it is observed that liquid crystal molecules in contact with that surface align with the rubbing direction. The currently accepted mechanism for this is believed to be an epitaxial growth of the liquid crystal layers on the partially aligned polymer chains in the near surface layers of the polyimide.
Several liquid crystal chemicals also align to a 'command surface', which is in turn aligned by the electric field of polarized light. This process is called photoalignment.
Fréedericksz transition
The competition between orientation produced by surface anchoring and by electric field effects is often exploited in liquid crystal devices. Consider the case in which liquid crystal molecules are aligned parallel to the surface and an electric field is applied perpendicular to the cell. At first, as the electric field increases in magnitude, no change in alignment occurs. However, at a threshold magnitude of the electric field, deformation occurs, in which the director changes its orientation from one molecule to the next. The occurrence of such a change from an aligned to a deformed state is called a Fréedericksz transition and can also be produced by the application of a magnetic field of sufficient strength.
The Fréedericksz transition is fundamental to the operation of many liquid crystal displays because the director orientation (and thus the properties) can be controlled easily by the application of a field.
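As a rough quantitative illustration, the splay-geometry Fréedericksz threshold is commonly written as V_th = π√(K1/(ε0Δε)); the sketch below evaluates it with approximate literature constants for 5CB, which are assumptions for illustration rather than values taken from this article.

```python
# Minimal sketch: the Freedericksz threshold voltage for the splay geometry
# (planar-aligned cell, field applied across the cell) is commonly written as
#   V_th = pi * sqrt(K1 / (eps0 * delta_eps)).
# The 5CB material constants below are approximate literature values.
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def freedericksz_threshold(K1, delta_eps):
    """Threshold voltage (volts) for the splay Freedericksz transition."""
    return math.pi * math.sqrt(K1 / (EPS0 * delta_eps))

K1_5CB = 6.2e-12          # splay elastic constant, N (approximate)
delta_eps_5CB = 11.0      # dielectric anisotropy (dimensionless, approximate)

print(f"V_th for 5CB ~ {freedericksz_threshold(K1_5CB, delta_eps_5CB):.2f} V")
# Note the threshold is a voltage, not a field: in this idealized
# strong-anchoring limit it does not depend on the cell thickness.
```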
Effect of chirality
As already described, chiral liquid-crystal molecules usually give rise to chiral mesophases. This means that the molecule must possess some form of asymmetry, usually a stereogenic center. An additional requirement is that the system not be racemic: a mixture of right- and left-handed molecules will cancel the chiral effect. Due to the cooperative nature of liquid crystal ordering, however, a small amount of chiral dopant in an otherwise achiral mesophase is often enough to select out one domain handedness, making the system overall chiral.
Chiral phases usually have a helical twisting of the molecules. If the pitch of this twist is on the order of the wavelength of visible light, then interesting optical interference effects can be observed. The chiral twisting that occurs in chiral LC phases also makes the system respond differently from right- and left-handed circularly polarized light. These materials can thus be used as polarization filters.
It is possible for chiral LC molecules to produce essentially achiral mesophases. For instance, in certain ranges of concentration and molecular weight, DNA will form an achiral line hexatic phase. An interesting recent observation is of the formation of chiral mesophases from achiral LC molecules. Specifically, bent-core molecules (sometimes called banana liquid crystals) have been shown to form liquid crystal phases that are chiral. In any particular sample, various domains will have opposite handedness, but within any given domain, strong chiral ordering will be present. The appearance mechanism of this macroscopic chirality is not yet entirely clear. It appears that the molecules stack in layers and orient themselves in a tilted fashion inside the layers. These liquid crystal phases may be ferroelectric or anti-ferroelectric, both of which are of interest for applications.
Chirality can also be incorporated into a phase by adding a chiral dopant, which may not form LCs itself. Twisted-nematic or super-twisted nematic mixtures often contain a small amount of such dopants.
Applications of liquid crystals
Liquid crystals find wide use in liquid crystal displays, which rely on the optical properties of certain liquid crystalline substances in the presence or absence of an electric field. In a typical device, a liquid crystal layer (typically 4 μm thick) sits between two polarizers that are crossed (oriented at 90° to one another). The liquid crystal alignment is chosen so that its relaxed phase is a twisted one (see Twisted nematic field effect). This twisted phase reorients light that has passed through the first polarizer, allowing its transmission through the second polarizer (and reflected back to the observer if a reflector is provided). The device thus appears transparent. When an electric field is applied to the LC layer, the long molecular axes tend to align parallel to the electric field thus gradually untwisting in the center of the liquid crystal layer. In this state, the LC molecules do not reorient light, so the light polarized at the first polarizer is absorbed at the second polarizer, and the device loses transparency with increasing voltage. In this way, the electric field can be used to make a pixel switch between transparent or opaque on command. Color LCD systems use the same technique, with color filters used to generate red, green, and blue pixels. Chiral smectic liquid crystals are used in ferroelectric LCDs which are fast-switching binary light modulators. Similar principles can be used to make other liquid crystal based optical devices.
Liquid crystal tunable filters are used as electro-optical devices, e.g., in hyperspectral imaging.
Thermotropic chiral LCs whose pitch varies strongly with temperature can be used as crude liquid crystal thermometers, since the color of the material will change as the pitch is changed. Liquid crystal color transitions are used on many aquarium and pool thermometers as well as on thermometers for infants or baths. Other liquid crystal materials change color when stretched or stressed. Thus, liquid crystal sheets are often used in industry to look for hot spots, map heat flow, measure stress distribution patterns, and so on. Liquid crystal in fluid form is used to detect electrically generated hot spots for failure analysis in the semiconductor industry.
Liquid crystal lenses converge or diverge incident light by adjusting the refractive index of the liquid crystal layer with an applied voltage or temperature. Generally, liquid crystal lenses generate a parabolic refractive index distribution by arranging molecular orientations; a plane wave is thereby reshaped into a parabolic wavefront by the lens. The focal length of liquid crystal lenses can be continuously tuned when the external electric field is properly adjusted. Liquid crystal lenses are a kind of adaptive optics: imaging systems can benefit from focusing correction, image plane adjustment, or changes in the range of depth-of-field or depth of focus. The liquid crystal lens is one of the candidates for vision correction devices for myopia and presbyopia (e.g., tunable eyeglasses and smart contact lenses). Being an optical phase modulator, a liquid crystal lens features a space-variant optical path length (i.e., optical path length as a function of its pupil coordinate). The required function of optical path length varies from one imaging system to another. For example, to converge a plane wave into a diffraction-limited spot with a physically planar liquid crystal structure, the refractive index distribution of the liquid crystal layer should be spherical or paraboloidal under the paraxial approximation. For projecting images or sensing objects, the liquid crystal lens may instead be expected to have an aspheric distribution of optical path length across its aperture of interest. Liquid crystal lenses with an electrically tunable refractive index (obtained by addressing different magnitudes of electric field across the liquid crystal layer) have the potential to achieve an arbitrary function of optical path length for modulating an incoming wavefront; current liquid crystal freeform optical elements are extensions of the liquid crystal lens based on the same optical mechanisms. Applications of liquid crystal lenses include pico-projectors, prescription lenses (eyeglasses or contact lenses), smartphone cameras, augmented reality, and virtual reality.
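As a back-of-the-envelope illustration of the parabolic-profile lens described above, the following sketch uses a simple thin gradient-index estimate f ≈ r²/(2dΔn); the formula and all numbers are illustrative assumptions, not specifications of any actual device.

```python
# Back-of-the-envelope sketch for a liquid crystal lens, assuming a parabolic
# optical-path-length profile across the aperture. For a layer of thickness d
# whose refractive index differs by delta_n between the center and the edge of
# an aperture of radius r, a thin-lens estimate of the focal length is
#   f ~ r**2 / (2 * d * delta_n).
# All numbers below are illustrative, not data for a specific device.
def lc_lens_focal_length(radius_m, thickness_m, delta_n):
    """Approximate focal length (m) of a gradient-index liquid crystal lens."""
    return radius_m**2 / (2.0 * thickness_m * delta_n)

r = 1.5e-3        # 1.5 mm aperture radius
d = 50e-6         # 50 micrometre liquid crystal layer
for dn in (0.05, 0.1, 0.2):
    f = lc_lens_focal_length(r, d, dn)
    print(f"delta_n = {dn:.2f} -> f ~ {f * 100:.0f} cm")
# Tuning the voltage changes delta_n and hence sweeps the focal length, which
# is the basis of the electrically tunable lenses described above.
```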
Liquid crystal lasers use a liquid crystal in the lasing medium as a distributed feedback mechanism instead of external mirrors. Emission at a photonic bandgap created by the periodic dielectric structure of the liquid crystal gives a low-threshold high-output device with stable monochromatic emission.
Polymer dispersed liquid crystal (PDLC) sheets and rolls are available as adhesive backed Smart film which can be applied to windows and electrically switched between transparent and opaque to provide privacy.
Many common fluids, such as soapy water, are in fact liquid crystals. Soap forms a variety of LC phases depending on its concentration in water.
Liquid crystal films have revolutionized the world of technology. They are currently used in a wide variety of devices, such as digital clocks, mobile phones, calculators and televisions. The use of liquid crystal films in optical memory devices, with a process similar to the recording and reading of CDs and DVDs, may be possible.
Liquid crystals have also been used as a basic technology for imitating quantum computers, using electric fields to manipulate the orientation of the liquid crystal molecules in order to store data, encoding a different value for each degree of misalignment with other molecules.
| Physical sciences | States of matter | Physics |
17974 | https://en.wikipedia.org/wiki/Long%20gun | Long gun | A long gun is a category of firearms with long barrels. In small arms, a long gun or longarm is generally designed to be held by both hands and braced against the shoulder, in contrast to a handgun, which can be fired being held with a single hand. In the context of cannons and mounted firearms, an artillery long gun would be contrasted with a field gun or howitzer.
Small arms
The actual length of the barrels of a long gun is subject to various laws in many jurisdictions, mainly concerning minimum length, sometimes as measured in a specific position or configuration. The National Firearms Act in the United States sets a minimum length of 16 inches (41 cm) for rifle barrels and 18 inches (46 cm) for shotgun barrels. Canada sets a minimum of 470 mm (18.5 in) for either. In addition, Canada sets a minimum fireable length of 660 mm (26 in) for long guns with detachable or folding stocks. In the United States, the minimum overall length for long guns with detachable or folding stocks is 26 inches (66 cm) with the stock in the extended position.
Examples of various classes of small arms generally considered long arms include, but are not limited to, shotguns, personal defense weapons, submachine guns, carbines, assault rifles, designated marksman rifles, sniper rifles, anti-materiel rifles, light machine guns, medium machine guns, and heavy machine guns.
Advantages and disadvantages
Almost all long arms have front grips (forearms) and shoulder stocks, which provide the user the ability to hold the firearm more steadily than a handgun. In addition, the long barrel of a long gun usually provides a longer distance between the front and rear sights, providing the user with more precision when aiming. The presence of a stock makes the use of a telescopic sight or red dot sight easier than with a handgun.
The mass of a long gun is usually greater than that of a handgun, making the long gun more expensive to transport, and more difficult and tiring to carry. The increased moment of inertia makes the long gun slower and more difficult to traverse and elevate, and it is thus slower and more difficult to adjust the aim. However, this also results in greater stability in aiming. The greater amount of material in a long gun tends to make it more expensive to manufacture, other factors being equal. The greater size makes it more difficult to conceal, and more inconvenient to use in confined quarters, as well as requiring larger storage space.
As long guns include a stock that is braced against the shoulder, the recoil when firing is transferred directly into the body of the user. This allows better control of aim than handguns, which do not include a stock, and thus all their recoil must be transferred to the arms of the user. It also makes it possible to manage larger amounts of recoil without damage or loss of control; in combination with the higher mass of long guns, this means more propellant (such as gunpowder) can be used and thus larger projectiles can be fired at higher velocities. This is one of the main reasons for the use of long guns over handguns: faster or heavier projectiles help with penetration and accuracy over longer distances.
Shotguns are long guns that are designed to fire many small projectiles at once. This makes them very effective at close range but of diminished usefulness at long range; even with shotgun slugs, they are mostly effective only to about 100 yards (91 m).
Naval long guns
In historical navy usage, a long gun was the standard type of cannon mounted by a sailing vessel, called such to distinguish it from the much shorter carronades. In informal usage, the length was combined with the weight of the shot, yielding terms like "long 9s", referring to full-length cannons firing a 9-pound round shot.
| Technology | Firearms | null |
17981 | https://en.wikipedia.org/wiki/Law%20of%20definite%20proportions | Law of definite proportions | In chemistry, the law of definite proportions, sometimes called Proust's law or the law of constant composition, states that a given
chemical compound contains its constituent elements in a fixed ratio (by mass) and does not depend on its source or method of preparation. For example, oxygen makes up about 8/9 of the mass of any sample of pure water, while hydrogen makes up the remaining 1/9 of the mass: the masses of the two elements in a compound are always in the same ratio. Along with the law of multiple proportions, the law of definite proportions forms the basis of stoichiometry.
History
The law of definite proportion was given by Joseph Proust in 1797.
At the end of the 18th century, when the concept of a chemical compound had not yet been fully developed, the law was novel. In fact, when first proposed, it was a controversial statement and was opposed by other chemists, most notably Proust's fellow Frenchman Claude Louis Berthollet, who argued that the elements could combine in any proportion. The existence of this debate demonstrates that, at the time, the distinction between pure chemical compounds and mixtures had not yet been fully developed.
The law of definite proportions contributed to the atomic theory that John Dalton promoted beginning in 1805, which explained matter as consisting of discrete atoms, that there was one type of atom for each element, and that the compounds were made of combinations of different types of atoms in fixed proportions.
A related early idea was Prout's hypothesis, formulated by English chemist William Prout, who proposed that the hydrogen atom was the fundamental atomic unit. From this hypothesis was derived the whole number rule, which was the rule of thumb that atomic masses were whole number multiples of the mass of hydrogen. This was later rejected in the 1820s and 30s following more refined measurements of atomic mass, notably by Jöns Jacob Berzelius, which revealed in particular that the atomic mass of chlorine was 35.45, which was incompatible with the hypothesis. Since the 1920s this discrepancy has been explained by the presence of isotopes; the atomic mass of any isotope is very close to satisfying the whole number rule, with the mass defect caused by differing binding energies being significantly smaller.
Non-stoichiometric compounds and isotopes
Although very useful in the foundation of modern chemistry, the law of definite proportions is not universally true. There exist non-stoichiometric compounds whose elemental composition can vary from sample to sample. Such compounds follow the law of multiple proportion. An example is the iron oxide wüstite, which can contain between 0.83 and 0.95 iron atoms for every oxygen atom, and thus contain anywhere between 23% and 25% oxygen by mass. The ideal formula is FeO, but it is about Fe0.95O due to crystallographic vacancies. In general, Proust's measurements were not precise enough to detect such variations.
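The following quick calculation, using standard atomic masses, checks both the fixed oxygen mass fraction of water quoted earlier (about 8/9) and the variable oxygen fraction of wüstite over the stated composition range; it is purely illustrative arithmetic.

```python
# Quick check of the mass ratios discussed above, using standard atomic
# masses: the fixed ~8/9 oxygen mass fraction of water, and the variable
# oxygen mass fraction of non-stoichiometric wustite Fe_x O.
M_H, M_O, M_FE = 1.008, 15.999, 55.845   # g/mol

# Water, H2O: a definite proportion regardless of the sample.
water_o_fraction = M_O / (2 * M_H + M_O)
print(f"O mass fraction in water: {water_o_fraction:.4f} (8/9 = {8 / 9:.4f})")

# Wustite, Fe_x O, with x between about 0.83 and 0.95 Fe atoms per O atom.
for x in (0.83, 0.95):
    o_fraction = M_O / (x * M_FE + M_O)
    print(f"Fe_{x}O: O mass fraction = {o_fraction:.3f}")
# The wustite fractions span roughly 0.23-0.26, consistent with the 23-25%
# range quoted above and illustrating how the law breaks down for such solids.
```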
In addition, the isotopic composition of an element can vary depending on its source, hence its contribution to the mass of even a pure stoichiometric compound may vary. This variation is used in radiometric dating since astronomical, atmospheric, oceanic, crustal and deep Earth processes may concentrate some environmental isotopes preferentially. With the exception of hydrogen and its isotopes, the effect is usually small, but is measurable with modern-day instrumentation.
Many natural polymers vary in composition (for instance DNA, proteins, carbohydrates) even when "pure". Polymers are generally not considered "pure chemical compounds" except when their molecular weight is uniform (mono-disperse) and their stoichiometry is constant.
| Physical sciences | Reaction | Chemistry |
18009 | https://en.wikipedia.org/wiki/Lift%20%28force%29 | Lift (force) | When a fluid flows around an object, the fluid exerts a force on the object. Lift is the component of this force that is perpendicular to the oncoming flow direction. It contrasts with the drag force, which is the component of the force parallel to the flow direction. Lift conventionally acts in an upward direction in order to counter the force of gravity, but it is defined to act perpendicular to the flow and therefore can act in any direction.
If the surrounding fluid is air, the force is called an aerodynamic force. In water or any other liquid, it is called a hydrodynamic force.
Dynamic lift is distinguished from other kinds of lift in fluids. Aerostatic lift or buoyancy, in which an internal fluid is lighter than the surrounding fluid, does not require movement and is used by balloons, blimps, dirigibles, boats, and submarines. Planing lift, in which only the lower portion of the body is immersed in a liquid flow, is used by motorboats, surfboards, windsurfers, sailboats, and water-skis.
Overview
A fluid flowing around the surface of a solid object applies a force on it. It does not matter whether the object is moving through a stationary fluid (e.g. an aircraft flying through the air) or whether the object is stationary and the fluid is moving (e.g. a wing in a wind tunnel) or whether both are moving (e.g. a sailboat using the wind to move forward). Lift is the component of this force that is perpendicular to the oncoming flow direction. Lift is always accompanied by a drag force, which is the component of the surface force parallel to the flow direction.
Lift is mostly associated with the wings of fixed-wing aircraft, although it is more widely generated by many other streamlined bodies such as propellers, kites, helicopter rotors, racing car wings, maritime sails, wind turbines, and by sailboat keels, ship's rudders, and hydrofoils in water. Lift is also used by flying and gliding animals, especially by birds, bats, and insects, and even in the plant world by the seeds of certain trees.
While the common meaning of the word "lift" assumes that lift opposes weight, lift can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is cruising in straight and level flight, the lift opposes gravity. However, when an aircraft is climbing, descending, or banking in a turn the lift is tilted with respect to the vertical. Lift may also act as downforce on the wing of a fixed-wing aircraft at the top of an aerobatic loop, and on the horizontal stabiliser of an aircraft. Lift may also be largely horizontal, for instance on a sailing ship.
The lift discussed in this article is mainly in relation to airfoils; marine hydrofoils and propellers share the same physical principles and work in the same way, despite differences between air and water such as density, compressibility, and viscosity.
The flow around a lifting airfoil is a fluid mechanics phenomenon that can be understood on essentially two levels: There are mathematical theories, which are based on established laws of physics and represent the flow accurately, but which require solving equations. And there are physical explanations without math, which are less rigorous. Correctly explaining lift in these qualitative terms is difficult because the cause-and-effect relationships involved are subtle. A comprehensive explanation that captures all of the essential aspects is necessarily complex. There are also many simplified explanations, but all leave significant parts of the phenomenon unexplained, while some also have elements that are simply incorrect.
Simplified physical explanations of lift on an airfoil
An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag.
Most simplified explanations follow one of two basic approaches, based either on Newton's laws of motion or on Bernoulli's principle.
Explanation based on flow deflection and Newton's laws
An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law, the air must exert an equal and opposite (upward) force on the airfoil, which is lift.
As the airflow approaches the airfoil it is curving upward, but as it passes the airfoil it changes direction and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then Newton's third law requires the air to exert an upward force on the airfoil; thus a reaction force, lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing.
The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the airfoil accounts for much of the downward-turning action.
This explanation is correct but it is incomplete. It does not explain how the airfoil can impart downward turning to a much deeper swath of the flow than it actually touches. Furthermore, it does not mention that the lift force is exerted by pressure differences, and does not explain how those pressure differences are sustained.
Controversy regarding the Coandă effect
Some versions of the flow-deflection explanation of lift cite the Coandă effect as the reason the flow is able to follow the convex upper surface of the airfoil. The conventional definition in the aerodynamics field is that the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow.
More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some popular references to explain why airflow remains attached to the top side of an airfoil. This is a controversial use of the term "Coandă effect"; the flow following the upper surface simply reflects an absence of boundary-layer separation, thus it is not an example of the Coandă effect. Regardless of whether this broader definition of the "Coandă effect" is applicable, calling it the "Coandă effect" does not provide an explanation, it just gives the phenomenon a name.
The ability of a fluid flow to follow a curved path is not dependent on shear forces, viscosity of the fluid, or the presence of a boundary layer. Air flowing around an airfoil, adhering to both upper and lower surfaces, and generating lift, is accepted as a phenomenon in inviscid flow.
Explanations based on an increase in flow speed and Bernoulli's principle
There are two common versions of this explanation, one based on "equal transit time", and one based on "obstruction" of the airflow.
False explanation based on equal transit-time
The "equal transit time" explanation starts by arguing that the flow over the upper surface is faster than the flow over the lower surface because the path length over the upper surface is longer and must be traversed in equal transit time. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure. It is concluded that the reduced pressure over the upper surface results in upward lift.
While it is true that the flow speeds up, a serious flaw in this explanation is that it does not correctly explain what causes the flow to speed up. The longer-path-length explanation is incorrect. No difference in path length is needed, and even when there is a difference, it is typically much too small to explain the observed speed difference. This is because the assumption of equal transit time is wrong when applied to a body generating lift. There is no physical principle that requires equal transit time in all situations and experimental results confirm that for a body generating lift the transit times are not equal. In fact, the air moving past the top of an airfoil generating lift moves much faster than equal transit time predicts. The much higher flow speed over the upper surface can be clearly seen in this animated flow visualization.
Obstruction of the airflow
Like the equal transit time explanation, the "obstruction" or "streamtube pinching" explanation argues that the flow over the upper surface is faster than the flow over the lower surface, but gives a different reason for the difference in speed. It argues that the curved upper surface acts as more of an obstacle to the flow, forcing the streamlines to pinch closer together, making the streamtubes narrower. When streamtubes become narrower, conservation of mass requires that flow speed must increase. Reduced upper-surface pressure and upward lift follow from the higher speed by Bernoulli's principle, just as in the equal transit time explanation. Sometimes an analogy is made to a venturi nozzle, claiming the upper surface of the wing acts like a venturi nozzle to constrict the flow.
One serious flaw in the obstruction explanation is that it does not explain how streamtube pinching comes about, or why it is greater over the upper surface than the lower surface. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense, but it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction or obstruction do not predict experimental results. Another flaw is that conservation of mass is not a satisfying physical reason why the flow would speed up. Effectively explaining the acceleration of an object requires identifying the force that accelerates it.
Issues common to both versions of the Bernoulli-based explanation
A serious flaw common to all the Bernoulli-based explanations is that they imply that a speed difference can arise from causes other than a pressure difference, and that the speed difference then leads to a pressure difference, by Bernoulli's principle. This implied one-way causation is a misconception. The real relationship between pressure and flow speed is a mutual interaction. As explained below under a more comprehensive physical explanation, producing a lift force requires maintaining pressure differences in both the vertical and horizontal directions. The Bernoulli-only explanations do not explain how the pressure differences in the vertical direction are sustained. That is, they leave out the flow-deflection part of the interaction.
Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift.
Basic attributes of lift
Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed.
Pressure differences
Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside.
These pressure differences arise in conjunction with the curved airflow. When a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. This direct relationship between curved streamlines and pressure differences, sometimes called the streamline curvature theorem, was derived from Newton's second law by Leonhard Euler in 1754:

\frac{\partial p}{\partial R} = \frac{\rho v^2}{R}

The left side of this equation represents the pressure difference perpendicular to the fluid flow. On the right side of the equation, ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞), the pressure difference is zero.
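A small numerical illustration of this relation, with assumed values for air, is given below.

```python
# Tiny numerical illustration of the streamline-curvature relation
#   dp/dR = rho * v**2 / R
# reconstructed above: tighter curvature and faster flow give a steeper
# pressure gradient across the streamlines. The values are illustrative.
def pressure_gradient(rho, v, R):
    """Pressure gradient (Pa/m) across curved streamlines."""
    return rho * v**2 / R

rho_air = 1.225    # kg/m^3, sea-level air density
for v, R in [(50.0, 10.0), (100.0, 10.0), (100.0, 2.0)]:
    dpdR = pressure_gradient(rho_air, v, R)
    print(f"v = {v:5.1f} m/s, R = {R:4.1f} m -> dp/dR = {dpdR:8.1f} Pa/m")
```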
Angle of attack
The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil generates zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles, a symmetrical airfoil generates a lift force roughly proportional to the angle of attack.
As the angle of attack increases, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled.
Airfoil shape
The maximum lift force that can be generated by an airfoil at a given airspeed depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases the maximum lift at a given airspeed.
Cambered airfoils generate lift at zero angle of attack. When the chord line is horizontal, the trailing edge has a downward direction and since the air follows the trailing edge it is deflected downward. When a cambered airfoil is upside down, the angle of attack can be adjusted so that the lift force is upward. This explains how a plane can fly upside down.
Flow conditions
The ambient flow conditions which affect lift include the fluid density, viscosity and speed of flow. Density is affected by temperature, and by the medium's acoustic velocity – i.e. by compressibility effects.
Air speed and density
Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In calculations it is convenient to quantify lift in terms of a lift coefficient based on these factors.
Boundary layer and profile drag
No matter how smooth the surface of an airfoil seems, any surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions relative to their original velocities. The result is that when the air is viewed as a continuous material, it is seen to be unable to slide along the surface, and the air's velocity relative to the airfoil decreases to nearly zero at the surface (i.e., the air molecules "stick" to the surface instead of sliding along it), something known as the no-slip condition. Because the air at the surface has near-zero velocity but the air away from the surface is moving, there is a thin boundary layer in which air close to the surface is subjected to a shearing motion. The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin friction drag. Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin friction drag.
Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin friction drag. The total of the skin friction drag and the viscosity-related pressure drag is usually called the profile drag.
Stalling
An airfoil's maximum lift at a given airspeed is limited by boundary-layer separation. As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall, or stalling. At angles of attack above the stall, lift is significantly reduced, though it does not drop to zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 1.5 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps and leading-edge devices deployed.
Bluff bodies
The flow around bluff bodies – i.e. without a streamlined shape, or stalling airfoils – may also generate lift, in addition to a strong drag force. This lift may be steady, or it may oscillate due to vortex shedding. Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations. For instance, the flow around a circular cylinder generates a Kármán vortex street: vortices being shed in an alternating fashion from the cylinder's sides. The oscillatory nature of the flow produces a fluctuating lift force on the cylinder, even though the net (mean) force is negligible. The lift force frequency is characterised by the dimensionless Strouhal number, which depends on the Reynolds number of the flow.
For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys.
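As a rough illustration, the shedding frequency can be estimated from the Strouhal relation f = St·U/D; the sketch below assumes St ≈ 0.2 (a typical value for circular cylinders over a wide range of Reynolds numbers) and arbitrary chimney dimensions.

```python
# Rough estimate of the vortex-shedding frequency behind a circular cylinder
# using the Strouhal relation f = St * U / D, assuming St ~ 0.2.
def shedding_frequency(U, D, St=0.2):
    """Vortex-shedding frequency (Hz) for flow speed U (m/s) and diameter D (m)."""
    return St * U / D

# Example: wind past an industrial chimney (illustrative numbers).
U = 15.0     # m/s wind speed
D = 2.0      # m chimney diameter
print(f"Shedding frequency ~ {shedding_frequency(U, D):.2f} Hz")
# If this frequency is close to a natural frequency of the structure,
# resonant vortex-induced vibrations can occur, as described above.
```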
In the Magnus effect, a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow.
A more comprehensive physical explanation
As described above under "Simplified physical explanations of lift on an airfoil", there are two main popular explanations: one based on downward deflection of the flow (Newton's laws), and one based on pressure differences accompanied by changes in flow speed (Bernoulli's principle). Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and pressure differences (including changes in flow speed associated with the pressure differences), and requires looking at the flow in more detail.
Lift at the airfoil surface
The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift.
The net force exerted by the air occurs as a pressure difference over the airfoil's surfaces. Pressure in a fluid is always positive in an absolute sense, so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift.
The pressure difference which results in lift acts directly on the airfoil surfaces; however, understanding how the pressure difference is produced requires understanding what the flow does over a wider area.
The wider flow around the airfoil
An airfoil affects the speed and direction of the flow over a wide area, producing a pattern called a velocity field. When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, the flow above and below the airfoil is deflected downward leaving the air far behind the airfoil in the same state as the oncoming flow far ahead. The flow above the upper surface is sped up, while the flow below the airfoil is slowed down. Together with the upward deflection of air in front and the downward deflection of the air immediately behind, this establishes a net circulatory component of the flow. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows.
The pressure is also affected over a wide area, in a pattern of non-uniform pressure called a pressure field. When an airfoil produces lift, there is a diffuse region of low pressure above the airfoil, and usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this pressure field.
Mutual interaction of pressure differences and changes in flow velocity
The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure field around an airfoil figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region.
According to Newton's second law, a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the accompanying pressure field diagram indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have sufficient positive camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it. Some explanations that refer to the "Coandă effect" suggest that viscosity plays a key role in the downward turning, but this is false. (see above under "Controversy regarding the Coandă effect").
The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation.
The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil is slowed down as it enters and then sped back up as it leaves. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. The changes in flow speed are consistent with Bernoulli's principle, which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed.
Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction. A pressure difference can exist only if something is there for it to push against. In aerodynamic flow, the pressure difference pushes against the air's inertia, as the air is accelerated by the pressure difference. This is why the air's mass is part of the calculation, and why lift depends on air density.
Sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure in a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that flow along the surface follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction.
How simpler explanations fall short
Producing a lift force requires both downward turning of the flow and changes in flow speed consistent with Bernoulli's principle. Each of the simplified explanations given above in Simplified physical explanations of lift on an airfoil falls short by trying to explain lift in terms of only one or the other, thus explaining only part of the phenomenon and leaving other parts unexplained.
Quantifying lift
Pressure integration
When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the airfoil surface:

L = \oint_S p\, \mathbf{n} \cdot \mathbf{k} \; dA

where:
S is the projected (planform) area of the airfoil, measured normal to the mean airflow;
n is the normal unit vector pointing into the wing;
k is the vertical unit vector, normal to the freestream direction.
The above lift equation neglects the skin friction forces, which are small compared to the pressure forces.
By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag Dp (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j, we obtain the side force Y.
The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth.
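The sketch below illustrates the pressure-integration idea on a case with a known analytic answer, two-dimensional potential flow past a circular cylinder with circulation, and checks the result against the Kutta–Joukowski value ρU|Γ|. The flow parameters are arbitrary test values, and the surface-pressure formula is the standard Bernoulli result for that flow rather than something given in this article.

```python
# Sketch of the pressure-integration idea above, applied to a case with a
# known analytic answer: 2-D potential flow past a circular cylinder with
# circulation Gamma. The surface pressure follows from Bernoulli, and
# integrating p * (n . k) over the surface (n pointing into the body, k
# vertical) reproduces the Kutta-Joukowski lift per unit span, rho*U*|Gamma|,
# up to a sign set by the circulation convention. All numbers are test values.
import numpy as np

rho, U, a, Gamma = 1.225, 30.0, 0.5, 20.0    # density, speed, radius, circulation
p_inf = 101_325.0                            # freestream static pressure, Pa

theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
ds = a * (theta[1] - theta[0])               # arc-length element

# Surface speed and Bernoulli pressure for cylinder-with-circulation flow.
v_surf = -2.0 * U * np.sin(theta) + Gamma / (2.0 * np.pi * a)
p = p_inf + 0.5 * rho * (U**2 - v_surf**2)

# The inward normal has vertical component -sin(theta); lift per unit span is
# the integral of p * (n_in . k) over the surface.
lift_per_span = np.sum(p * (-np.sin(theta)) * ds)

print(f"integrated lift per span  : {abs(lift_per_span):10.3f} N/m (magnitude)")
print(f"Kutta-Joukowski rho*U*|G| : {rho * U * abs(Gamma):10.3f} N/m")
```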
Lift coefficient
Lift depends on the size of the wing, being approximately proportional to the wing area. It is often convenient to quantify the lift of a given airfoil by its lift coefficient C_L, which defines its overall lift in terms of a unit area of the wing.
If the value of C_L for a wing at a specified angle of attack is given, then the lift produced for specific flow conditions can be determined:

L = \tfrac{1}{2}\rho v^2 S C_L

where
L is the lift force
ρ is the air density
v is the velocity or true airspeed
S is the planform (projected) wing area
C_L is the lift coefficient at the desired angle of attack, Mach number, and Reynolds number
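A short worked example of this lift equation, with illustrative numbers for a small wing, is given below.

```python
# Worked example of the lift equation reconstructed above,
#   L = 0.5 * rho * v**2 * S * C_L,
# with illustrative numbers roughly representative of a small aircraft wing.
def lift_force(rho, v, S, C_L):
    """Lift in newtons for air density rho (kg/m^3), speed v (m/s),
    wing area S (m^2) and lift coefficient C_L."""
    return 0.5 * rho * v**2 * S * C_L

rho = 1.225   # sea-level air density, kg/m^3
v = 55.0      # airspeed, m/s (about 107 knots)
S = 16.0      # wing planform area, m^2
C_L = 0.5     # lift coefficient at the assumed angle of attack

L = lift_force(rho, v, S, C_L)
print(f"Lift ~ {L:.0f} N (~{L / 9.81:.0f} kgf)")
```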
Mathematical theories of lift
Mathematical theories of lift are based on continuum fluid mechanics, assuming that air flows as a continuous fluid. Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles:
Conservation of momentum, which is a consequence of Newton's laws of motion, especially Newton's second law which relates the net force on an element of air to its rate of momentum change,
Conservation of mass, including the assumption that the airfoil's surface is impermeable for the air flowing around, and
Conservation of energy, which says that energy is neither created nor destroyed.
Because an airfoil affects the flow in a wide area around it, the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoil.
To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of computational fluid dynamics (CFD). Determining the net aerodynamic force from a CFD solution requires "adding up" (integrating) the forces due to pressure and shear determined by the CFD over every surface element of the airfoil as described under "pressure integration".
The Navier–Stokes equations (NS) provide the potentially most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy, and requires use of the Reynolds-averaged Navier–Stokes equations (RANS). Simpler but less accurate theories have also been developed.
Navier–Stokes (NS) equations
These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity, the Fourier heat conduction law, an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid.
In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift with high accuracy in any situation in ordinary atmospheric flight. However, airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful computer. So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly.
Reynolds-averaged Navier–Stokes (RANS) equations
These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged sense). A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil.
The amount of computation required is a minuscule fraction (billionths) of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is adequate for practical aircraft design. Lift predicted by RANS is usually within a few percent of the actual lift.
Inviscid-flow equations (Euler or potential)
The Euler equations are the NS equations without the viscosity, heat conduction, and turbulence effects. As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they do not lend themselves to exact analytic solutions.
Further simplification is available through potential flow theory, which reduces the number of unknowns to be determined, and makes analytic solutions possible in some cases, as described below.
Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10–20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift.
In potential-flow theory, the flow is assumed to be irrotational, i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential. A flow represented in this way is called potential flow.
In potential-flow theory, the flow is assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation (Laplace's equation) to be solved for the potential is linear, which allows solutions to be constructed by superposition of other known solutions. The incompressible-potential-flow equation can also be solved by conformal mapping, a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil.
A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation.
Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See "Circulation and the Kutta–Joukowski theorem" below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut, a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition, which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition.
Linearized potential flow
This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small. The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC.
Circulation and the Kutta–Joukowski theorem
When an airfoil generates lift, several components of the overall velocity field contribute to a net circulation of air around it: the upward flow ahead of the airfoil, the accelerated flow above, the decelerated flow below, and the downward flow behind.
The circulation can be understood as the total amount of "spinning" (or vorticity) of an inviscid fluid around the airfoil.
The Kutta–Joukowski theorem relates the lift per unit width of span of a two-dimensional airfoil to this circulation component of the flow. It is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil. Lift is then inferred from the Kutta-Joukowski theorem. This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect.
The Kutta–Joukowski model does not predict how much circulation or lift a two-dimensional airfoil produces. Calculating the lift per unit span using Kutta–Joukowski requires a known value for the circulation. In particular, if the Kutta condition is met, in which the rear stagnation point moves to the airfoil trailing edge and attaches there for the duration of flight, the lift can be calculated theoretically through the conformal mapping method.
The lift generated by a conventional airfoil is dictated by both its design and the flight conditions, such as forward velocity, angle of attack and air density. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps. In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation.
Three-dimensional flow
The flow around a three-dimensional wing involves significant additional issues, especially relating to the wing tips. For a wing of low aspect ratio, such as a typical delta wing, two-dimensional theories may provide a poor model and three-dimensional flow effects can dominate. Even for wings of high aspect ratio, the three-dimensional effects associated with finite span can affect the whole span, not just close to the tips.
Wing tips and spanwise distribution
The vertical pressure gradient at the wing tips causes air to flow sideways, out from under the wing then up and back over the upper surface. This reduces the pressure gradient at the wing tip, therefore also reducing lift. The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right. This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right.
There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow. The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag.
The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet.
Horseshoe vortex system
The wingtip flow leaving the wing creates a tip vortex. As the main vortex sheet passes downstream from the trailing edge, it rolls up at its outer edges, merging with the tip vortices. The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake.
In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, called 'bound vorticity', which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907.
Given the distribution of bound vorticity and the vorticity in the wake, the Biot–Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system. In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing.
Because the velocity is deduced from the vorticity in such theories, some authors describe the situation to imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex", for example. But attributing mechanical cause-and-effect between the vorticity and the velocity in this way is not consistent with the physics. The velocity perturbations in the flow around a wing are in fact produced by the pressure field.
Manifestations of lift in the farfield
Integrated force/momentum balance in lifting flows
The flow around a lifting airfoil must satisfy Newton's second law regarding conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume, where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral), is equal to the integrated time rate of change (material derivative) of the momentum of fluid parcels passing through the interior of the control volume. For a steady flow, this can be expressed in the form of the net surface integral of the flux of momentum through the boundary.
The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure. The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest.
For the free-air case (no ground plane), the force exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom. For a square or circle, the momentum fluxes and pressure differences account for half the lift each. For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half.
The results of all of the control-volume analyses described above are consistent with the Kutta–Joukowski theorem described above. Both the tall rectangle and circle control volumes have been used in derivations of the theorem.
Lift reacted by overpressure on the ground under an airplane
An airfoil produces a pressure field in the surrounding air, as explained under "The wider flow around the airfoil" above. The pressure differences associated with this field die off gradually, becoming very small at large distances, but never disappearing altogether. Below the airplane, the pressure field persists as a positive pressure disturbance that reaches the ground, forming a pattern of slightly-higher-than-ambient pressure on the ground, as shown on the right. Although the pressure differences are very small far below the airplane, they are spread over a wide area and add up to a substantial force. For steady, level flight, the integrated force due to the pressure differences is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and thus there is no integrated accumulation of vertical momentum in the atmosphere, as was noted by Lanchester early in the development of modern aerodynamics.
| Physical sciences | Fluid mechanics | null |
18016 | https://en.wikipedia.org/wiki/Lisp%20%28programming%20language%29 | Lisp (programming language) | Lisp (historically LISP, an abbreviation of "list processing") is a family of programming languages with a long history and a distinctive, fully parenthesized prefix notation.
Originally specified in the late 1950s, it is the second-oldest high-level programming language still in common use, after Fortran. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best-known general-purpose Lisp dialects are Common Lisp, Scheme, Racket, and Clojure.
Lisp was originally created as a practical mathematical notation for computer programs, influenced by (though not originally derived from) the notation of Alonzo Church's lambda calculus. It quickly became a favored programming language for artificial intelligence (AI) research. As one of the earliest programming languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, the self-hosting compiler, and the read–eval–print loop.
The name LISP derives from "LISt Processor". Linked lists are one of Lisp's major data structures, and Lisp source code is made of lists. Thus, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or new domain-specific languages embedded in Lisp.
The interchangeability of code and data gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. A function call or syntactic form is written as a list with the function or operator's name first, and the arguments following; for instance, a function f that takes three arguments would be called as (f arg1 arg2 arg3).
History
John McCarthy began developing Lisp in 1958 while he was at the Massachusetts Institute of Technology (MIT). He was motivated by a desire to create an AI programming language that would work on the IBM 704, as he believed that "IBM looked like a good bet to pursue Artificial Intelligence research vigorously." He was inspired by Information Processing Language, which was also based on list processing, but did not use it because it was designed for different hardware and he found an algebraic language more appealing. Due to these factors, he consulted on the design of the Fortran List Processing Language, which was implemented as a Fortran library. However, he was dissatisfied with it because it did not support recursion or a modern if-then-else statement (which was a new concept when Lisp was first introduced).
McCarthy's original notation used bracketed "M-expressions" that would be translated into S-expressions. As an example, the M-expression car[cons[A,B]] is equivalent to the S-expression (car (cons A B)). Once Lisp was implemented, programmers rapidly chose to use S-expressions, and M-expressions were abandoned. M-expressions surfaced again with short-lived attempts of MLisp by Horace Enea and CGOL by Vaughan Pratt.
Lisp was first implemented by Steve Russell on an IBM 704 computer using punched cards. Russell was working for McCarthy at the time and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.
According to McCarthy, the result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".
Two assembly language macros for the IBM 704 became the primitive operations for decomposing lists: car (Contents of the Address part of Register number) and cdr (Contents of the Decrement part of Register number), where "register" refers to registers of the computer's central processing unit (CPU). Lisp dialects still use car and cdr for the operations that return the first item in a list and the rest of the list, respectively.
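For instance, applied to the list (1 2 3) (a value chosen here purely for illustration), these operations behave as follows:
(car '(1 2 3))
;Output: 1
(cdr '(1 2 3))
;Output: (2 3)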
McCarthy published Lisp's design in a paper in Communications of the ACM in April 1960, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". He showed that with a few simple operators and a notation for anonymous functions borrowed from Church, one can build a Turing-complete language for algorithms.
The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT, and could be compiled by simply having an existing LISP interpreter interpret the compiler code, producing machine code output able to be executed at a 40-fold improvement in speed over that of the interpreter. This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The language used in Hart and Levin's memo is much closer to modern Lisp style than McCarthy's earlier code.
Garbage collection routines were developed by MIT graduate student Daniel Edwards, prior to 1962.
During the 1980s and 1990s, a great effort was made to unify the work on new Lisp dialects (mostly successors to Maclisp, such as ZetaLisp and NIL (New Implementation of Lisp)) into a single language. The new language, Common Lisp, was somewhat compatible with the dialects it replaced (the book Common Lisp the Language notes the compatibility of various constructs). In 1994, ANSI published the Common Lisp standard, "ANSI X3.226-1994 Information Technology Programming Language Common Lisp".
Timeline
Connection to artificial intelligence
Since inception, Lisp was closely connected with the artificial intelligence research community, especially on PDP-10 systems. Lisp was used as the implementation of the language Micro Planner, which was used in the famous AI system SHRDLU. In the 1970s, as AI research spawned commercial offshoots, the performance of existing Lisp systems became a growing issue, as programmers needed to be familiar with the performance ramifications of the various techniques and choices involved in the implementation of Lisp.
Genealogy and variants
Over its sixty-year history, Lisp has spawned many variations on the core theme of an S-expression language. Some of these variations have been standardized and implemented by different groups with different priorities (for example, both Common Lisp and Scheme have multiple implementations). However, in other cases a software project defines a lisp without a standard and there is no clear distinction between the dialect and the implementation (for example, Clojure and Emacs Lisp fall into this category).
Differences between dialects (and/or implementations) may be quite visible; for instance, Common Lisp uses the keyword defun to name a function, but Scheme uses define. Within a dialect that is standardized, conforming implementations support the same core language but with different extensions and libraries. This sometimes also creates quite visible changes from the base language; for instance, Guile (an implementation of Scheme) uses define* to create functions which can have default arguments and/or keyword arguments, neither of which are standardized.
Historically significant dialects
LISP 1 – First implementation.
LISP 1.5 – First widely distributed version, developed by McCarthy and others at MIT. So named because it contained several improvements on the original "LISP 1" interpreter, but was not a major restructuring as the planned LISP 2 would be.
Stanford LISP 1.6 – A successor to LISP 1.5 developed at the Stanford AI Lab, and widely distributed to PDP-10 systems running the TOPS-10 operating system. It was rendered obsolete by Maclisp and InterLisp.
Maclisp – developed for MIT's Project MAC, MACLISP is a direct descendant of LISP 1.5. It ran on the PDP-10 and Multics systems. MACLISP would later come to be called Maclisp, and is often referred to as MacLisp. The "MAC" in MACLISP is unrelated to Apple's Macintosh or McCarthy.
Interlisp – developed at BBN Technologies for PDP-10 systems running the TENEX operating system, later adopted as a "West coast" Lisp for the Xerox Lisp machines as InterLisp-D. A small version called "InterLISP 65" was published for the MOS Technology 6502-based Atari 8-bit computers. Maclisp and InterLisp were strong competitors.
Franz Lisp – originally a University of California, Berkeley project; later developed by Franz Inc. The name is a humorous deformation of the name "Franz Liszt", and does not refer to Allegro Common Lisp, the dialect of Common Lisp sold by Franz Inc., in more recent years.
XLISP, which AutoLISP was based on.
Standard Lisp and Portable Standard Lisp were widely used and ported, especially with the Computer Algebra System REDUCE.
ZetaLisp, also termed Lisp Machine Lisp – used on the Lisp machines, direct descendant of Maclisp. ZetaLisp had a big influence on Common Lisp.
LeLisp is a French Lisp dialect. One of the first Interface Builders (called SOS Interface) was written in LeLisp.
Scheme (1975).
Common Lisp (1984), as described by Common Lisp the Language – a consolidation of several divergent attempts (ZetaLisp, Spice Lisp, NIL, and S-1 Lisp) to create successor dialects to Maclisp, with substantive influences from the Scheme dialect as well. This version of Common Lisp was available for wide-ranging platforms and was accepted by many as a de facto standard until the publication of ANSI Common Lisp (ANSI X3.226-1994). Among the most widespread sub-dialects of Common Lisp are Steel Bank Common Lisp (SBCL), CMU Common Lisp (CMU-CL), Clozure OpenMCL (not to be confused with Clojure!), GNU CLisp, and later versions of Franz Lisp; all of them adhere to the later ANSI CL standard (see below).
Dylan was in its first version a mix of Scheme with the Common Lisp Object System.
EuLisp – attempt to develop a new efficient and cleaned-up Lisp.
ISLISP – attempt to develop a new efficient and cleaned-up Lisp. Standardized as ISO/IEC 13816:1997 and later revised as ISO/IEC 13816:2007: Information technology – Programming languages, their environments and system software interfaces – Programming language ISLISP.
IEEE Scheme – IEEE standard, 1178–1990 (R1995).
ANSI Common Lisp – an American National Standards Institute (ANSI) standard for Common Lisp, created by subcommittee X3J13, chartered to begin with Common Lisp: The Language as a base document and to work through a public consensus process to find solutions to shared issues of portability of programs and compatibility of Common Lisp implementations. Although formally a United States (ANSI) standard, ANSI Common Lisp has been, and continues to be, implemented, sold, used, and influential worldwide.
ACL2 or "A Computational Logic for Applicative Common Lisp", an applicative (side-effect free) variant of Common LISP. ACL2 is both a programming language which can model computer systems, and a tool to help proving properties of those models.
Clojure, a recent dialect of Lisp which compiles to the Java virtual machine and has a particular focus on concurrency.
Game Oriented Assembly Lisp (or GOAL) is a video game programming language developed by Andy Gavin at Naughty Dog. It was written using Allegro Common Lisp and used in the development of the entire Jak and Daxter series of games developed by Naughty Dog.
2000 to present
After having declined somewhat in the 1990s, Lisp has experienced a resurgence of interest after 2000. Most new activity has been focused around implementations of Common Lisp, Scheme, Emacs Lisp, Clojure, and Racket, and includes development of new portable libraries and applications.
Many new Lisp programmers were inspired by writers such as Paul Graham and Eric S. Raymond to pursue a language others considered antiquated. New Lisp programmers often describe the language as an eye-opening experience and claim to be substantially more productive than in other languages. This increase in awareness may be contrasted to the "AI winter" and Lisp's brief gain in the mid-1990s.
At the time of writing, there were eleven actively maintained Common Lisp implementations.
The open source community has created new supporting infrastructure: CLiki is a wiki that collects Common Lisp related information; the Common Lisp directory lists resources; #lisp is a popular IRC channel that allows the sharing and commenting of code snippets (with support by lisppaste, an IRC bot written in Lisp); Planet Lisp collects the contents of various Lisp-related blogs; on LispForum users discuss Lisp topics; Lispjobs is a service for announcing job offers; and there is a weekly news service, Weekly Lisp News. Common-lisp.net is a hosting site for open source Common Lisp projects. Quicklisp is a library manager for Common Lisp.
Fifty years of Lisp (1958–2008) was celebrated at LISP50@OOPSLA. There are regular local user meetings in Boston, Vancouver, and Hamburg. Other events include the European Common Lisp Meeting, the European Lisp Symposium and an International Lisp Conference.
The Scheme community actively maintains over twenty implementations. Several significant new implementations (Chicken, Gambit, Gauche, Ikarus, Larceny, Ypsilon) have been developed in the 2000s (decade). The Revised5 Report on the Algorithmic Language Scheme (R5RS) standard was widely accepted in the Scheme community. The Scheme Requests for Implementation process has created a lot of quasi-standard libraries and extensions for Scheme. User communities of individual Scheme implementations continue to grow. A new language standardization process was started in 2003 and led to the R6RS Scheme standard in 2007. Academic use of Scheme for teaching computer science seems to have declined somewhat. Some universities are no longer using Scheme in their computer science introductory courses; MIT now uses Python instead of Scheme for its undergraduate computer science program and MITx massive open online course.
There are several new dialects of Lisp: Arc, Hy, Nu, Liskell, and LFE (Lisp Flavored Erlang). The parser for Julia is implemented in Femtolisp, a dialect of Scheme (Julia is inspired by Scheme, which in turn is a Lisp dialect).
In October 2019, Paul Graham released a specification for Bel, "a new dialect of Lisp."
Major dialects
Common Lisp and Scheme represent two major streams of Lisp development. These languages embody significantly different design choices.
Common Lisp is a successor to Maclisp. The primary influences were Lisp Machine Lisp, Maclisp, NIL, S-1 Lisp, Spice Lisp, and Scheme. It has many of the features of Lisp Machine Lisp (a large Lisp dialect used to program Lisp Machines), but was designed to be efficiently implementable on any personal computer or workstation. Common Lisp is a general-purpose programming language and thus has a large language standard including many built-in data types, functions, macros and other language elements, and an object system (Common Lisp Object System). Common Lisp also borrowed certain features from Scheme such as lexical scoping and lexical closures. Common Lisp implementations are available for targeting different platforms such as the LLVM, the Java virtual machine,
x86-64, PowerPC, Alpha, ARM, Motorola 68000, and MIPS, and operating systems such as Windows, macOS, Linux, Solaris, FreeBSD, NetBSD, OpenBSD, Dragonfly BSD, and Heroku.
Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy L. Steele, Jr. and Gerald Jay Sussman. It was designed to have exceptionally clear and simple semantics and few different ways to form expressions. Designed about a decade earlier than Common Lisp, Scheme is a more minimalist design. It has a much smaller set of standard features but with certain implementation features (such as tail-call optimization and full continuations) not specified in Common Lisp. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. Scheme continues to evolve with a series of standards (Revisedn Report on the Algorithmic Language Scheme) and a series of Scheme Requests for Implementation.
Clojure is a dialect of Lisp that targets mainly the Java virtual machine, and the Common Language Runtime (CLR), the Python VM, the Ruby VM YARV, and compiling to JavaScript. It is designed to be a pragmatic general-purpose language. Clojure draws considerable influences from Haskell and places a very strong emphasis on immutability. Clojure provides access to Java frameworks and libraries, with optional type hints and type inference, so that calls to Java can avoid reflection and enable fast primitive operations. Clojure is not designed to be backwards compatible with other Lisp dialects.
Further, Lisp dialects are used as scripting languages in many applications, with the best-known being Emacs Lisp in the Emacs editor, AutoLISP and later Visual Lisp in AutoCAD, Nyquist in Audacity, and Scheme in LilyPond. The potential small size of a useful Scheme interpreter makes it particularly popular for embedded scripting. Examples include SIOD and TinyScheme, both of which have been successfully embedded in the GIMP image processor under the generic name "Script-fu". LIBREP, a Lisp interpreter by John Harper originally based on the Emacs Lisp language, has been embedded in the Sawfish window manager.
Standardized dialects
Lisp has officially standardized dialects: R6RS Scheme, R7RS Scheme, IEEE Scheme, ANSI Common Lisp and ISO ISLISP.
Language innovations
Paul Graham identifies nine important aspects of Lisp that distinguished it from existing languages like Fortran:
Conditionals not limited to goto
First-class functions
Recursion
Treating variables uniformly as pointers, leaving types to values
Garbage collection
Programs made entirely of expressions with no statements
The symbol data type, distinct from the string data type
Notation for code made of trees of symbols (using many parentheses)
Full language available at load time, compile time, and run time
Lisp was the first language where the structure of program code is represented faithfully and directly in a standard data structure—a quality much later dubbed "homoiconicity". Thus, Lisp functions can be manipulated, altered or even created within a Lisp program without lower-level manipulations. This is generally considered one of the main advantages of the language with regard to its expressive power, and makes the language suitable for syntactic macros and meta-circular evaluation.
A conditional using an if–then–else syntax was invented by McCarthy for a chess program written in Fortran. He proposed its inclusion in ALGOL, but it was not made part of the Algol 58 specification. For Lisp, McCarthy used the more general cond-structure. Algol 60 took up if–then–else and popularized it.
Lisp deeply influenced Alan Kay, the leader of the research team that developed Smalltalk at Xerox PARC; and in turn Lisp was influenced by Smalltalk, with later dialects adopting object-oriented programming features (inheritance classes, encapsulating instances, message passing, etc.) in the 1970s. The Flavors object system introduced the concept of multiple inheritance and the mixin. The Common Lisp Object System provides multiple inheritance, multimethods with multiple dispatch, and first-class generic functions, yielding a flexible and powerful form of dynamic dispatch. It has served as the template for many subsequent Lisp (including Scheme) object systems, which are often implemented via a metaobject protocol, a reflective meta-circular design in which the object system is defined in terms of itself: Lisp was only the second language after Smalltalk (and is still one of the very few languages) to possess such a metaobject system. Many years later, Alan Kay suggested that as a result of the confluence of these features, only Smalltalk and Lisp could be regarded as properly conceived object-oriented programming systems.
Lisp introduced the concept of automatic garbage collection, in which the system walks the heap looking for unused memory. Progress in modern sophisticated garbage collection algorithms such as generational garbage collection was stimulated by its use in Lisp.
Edsger W. Dijkstra in his 1972 Turing Award lecture said,
Largely because of its resource requirements with respect to early computing hardware (including early microprocessors), Lisp did not become as popular outside of the AI community as Fortran and the ALGOL-descended C language. Because of its suitability to complex and dynamic applications, Lisp enjoyed some resurgence of popular interest in the 2010s.
Syntax and semantics
This article's examples are written in Common Lisp (though most are also valid in Scheme).
Symbolic expressions (S-expressions)
Lisp is an expression oriented language. Unlike most other languages, no distinction is made between "expressions" and "statements"; all code and data are written as expressions. When an expression is evaluated, it produces a value (possibly multiple values), which can then be embedded into other expressions. Each value can be any data type.
McCarthy's 1958 paper introduced two types of syntax: Symbolic expressions (S-expressions, sexps), which mirror the internal representation of code and data; and Meta expressions (M-expressions), which express functions of S-expressions. M-expressions never found favor, and almost all Lisps today use S-expressions to manipulate both code and data.
The use of parentheses is Lisp's most immediately obvious difference from other programming language families. As a result, students have long given Lisp nicknames such as Lost In Stupid Parentheses, or Lots of Irritating Superfluous Parentheses. However, the S-expression syntax is also responsible for much of Lisp's power: the syntax is simple and consistent, which facilitates manipulation by computer. However, the syntax of Lisp is not limited to traditional parentheses notation. It can be extended to include alternative notations. For example, XMLisp is a Common Lisp extension that employs the metaobject protocol to integrate S-expressions with the Extensible Markup Language (XML).
The reliance on expressions gives the language great flexibility. Because Lisp functions are written as lists, they can be processed exactly like data. This allows easy writing of programs which manipulate other programs (metaprogramming). Many Lisp dialects exploit this feature using macro systems, which enables extension of the language almost without limit.
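As a brief illustration (the particular list is invented for the example), a program can build a list that is itself Lisp code and then evaluate or rewrite it:
(setf form (list '+ 2 3 4))
;form is the list (+ 2 3 4), ordinary data built with list
(eval form)
;Output: 9, the same list evaluated as code
(setf (first form) '*)
;rewriting the "program" by altering the list
(eval form)
;Output: 24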
Lists
A Lisp list is written with its elements separated by whitespace, and surrounded by parentheses. For example, (1 2 foo) is a list whose elements are the three atoms 1, 2, and foo. These values are implicitly typed: they are respectively two integers and a Lisp-specific data type called a "symbol", and do not have to be declared as such.
The empty list () is also represented as the special atom nil. This is the only entity in Lisp which is both an atom and a list.
Expressions are written as lists, using prefix notation. The first element in the list is the name of a function, the name of a macro, a lambda expression or the name of a "special operator" (see below). The remainder of the list are the arguments. For example, the function list returns its arguments as a list, so the expression
(list 1 2 (quote foo))
evaluates to the list (1 2 foo). The "quote" before the foo in the preceding example is a "special operator" which returns its argument without evaluating it. Any unquoted expressions are recursively evaluated before the enclosing expression is evaluated. For example,
(list 1 2 (list 3 4))
evaluates to the list (1 2 (3 4)). The third argument is a list; lists can be nested.
Operators
Arithmetic operators are treated similarly. The expression
(+ 1 2 3 4)
evaluates to 10. The equivalent under infix notation would be "1 + 2 + 3 + 4".
Lisp has no notion of operators as implemented in ALGOL-derived languages. Arithmetic operators in Lisp are variadic functions (or n-ary), able to take any number of arguments. A C-style '++' increment operator is sometimes implemented under the name incf giving syntax
(incf x)
equivalent to (setq x (+ x 1)), returning the new value of x.
"Special operators" (sometimes called "special forms") provide Lisp's control structure. For example, the special operator takes three arguments. If the first argument is non-nil, it evaluates to the second argument; otherwise, it evaluates to the third argument. Thus, the expression
(if nil
(list 1 2 "foo")
(list 3 4 "bar"))
evaluates to (3 4 "bar"). Of course, this would be more useful if a non-trivial expression had been substituted in place of nil.
Lisp also provides logical operators and, or and not. The and and or operators do short-circuit evaluation and will return their first nil and non-nil argument respectively.
(or (and "zero" nil "never") "James" 'task 'time)
will evaluate to "James".
Lambda expressions and function definition
Another special operator, lambda, is used to bind variables to values which are then evaluated within an expression. This operator is also used to create functions: the arguments to lambda are a list of arguments, and the expression or expressions to which the function evaluates (the returned value is the value of the last expression that is evaluated). The expression
(lambda (arg) (+ arg 1))
evaluates to a function that, when applied, takes one argument, binds it to arg and returns the number one greater than that argument. Lambda expressions are treated no differently from named functions; they are invoked the same way. Therefore, the expression
((lambda (arg) (+ arg 1)) 5)
evaluates to 6. Here, we're doing a function application: we execute the anonymous function by passing to it the value 5.
Named functions are created by storing a lambda expression in a symbol using the defun macro.
(defun foo (a b c d) (+ a b c d))
defines a new function named foo in the global environment. It is conceptually similar to the expression:
(setf (fdefinition 'f) #'(lambda (a) (block f b...)))
where setf is a macro used to set the value of its first argument, (fdefinition 'f), to a new function object. (fdefinition 'f) is the global function definition for the function named f. #' is an abbreviation for the function special operator, returning a function object.
Atoms
In the original LISP there were two fundamental data types: atoms and lists. A list was a finite ordered sequence of elements, where each element is either an atom or a list, and an atom was a number or a symbol. A symbol was essentially a unique named item, written as an alphanumeric string in source code, and used either as a variable name or as a data item in symbolic processing. For example, the list (foo (bar 1) 2) contains three elements: the symbol foo, the list (bar 1), and the number 2.
The essential difference between atoms and lists was that atoms were immutable and unique. Two atoms that appeared in different places in source code but were written in exactly the same way represented the same object, whereas each list was a separate object that could be altered independently of other lists and could be distinguished from other lists by comparison operators.
As more data types were introduced in later Lisp dialects, and programming styles evolved, the concept of an atom lost importance. Many dialects still retained the predicate atom for legacy compatibility, defining it true for any object which is not a cons.
Conses and lists
A Lisp list is implemented as a singly linked list. Each cell of this list is called a cons (in Scheme, a pair) and is composed of two pointers, called the car and cdr. These are respectively equivalent to the data and next fields discussed in the article linked list.
Of the many data structures that can be built out of cons cells, one of the most basic is called a proper list. A proper list is either the special nil (empty list) symbol, or a cons in which the car points to a datum (which may be another cons structure, such as a list), and the cdr points to another proper list.
If a given cons is taken to be the head of a linked list, then its car points to the first element of the list, and its cdr points to the rest of the list. For this reason, the car and cdr functions are also called first and rest when referring to conses which are part of a linked list (rather than, say, a tree).
Thus, a Lisp list is not an atomic object, as an instance of a container class in C++ or Java would be. A list is nothing more than an aggregate of linked conses. A variable that refers to a given list is simply a pointer to the first cons in the list. Traversal of a list can be done by cdring down the list; that is, taking successive cdrs to visit each cons of the list; or by using any of several higher-order functions to map a function over a list.
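For instance (a small invented example), a list can be traversed either by taking successive cdrs or with a higher-order function such as mapcar:
(defun count-items (lst)
  (if (null lst)
      0
      (+ 1 (count-items (cdr lst)))))  ;"cdring down" the list one cons at a time
(mapcar #'(lambda (x) (* x x)) '(1 2 3))
;Output: (1 4 9)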
Because conses and lists are so universal in Lisp systems, it is a common misconception that they are Lisp's only data structures. In fact, all but the most simplistic Lisps have other data structures, such as vectors (arrays), hash tables, structures, and so forth.
S-expressions represent lists
Parenthesized S-expressions represent linked list structures. There are several ways to represent the same list as an S-expression. A cons can be written in dotted-pair notation as (a . b), where a is the car and b the cdr. A longer proper list might be written (a . (b . (c . (d . nil)))) in dotted-pair notation. This is conventionally abbreviated as (a b c d) in list notation. An improper list may be written in a combination of the two – as (a b c . d) for the list of three conses whose last cdr is d (i.e., the list (a . (b . (c . d))) in fully specified form).
List-processing procedures
Lisp provides many built-in procedures for accessing and controlling lists. Lists can be created directly with the list procedure, which takes any number of arguments, and returns the list of these arguments.
(list 1 2 'a 3)
;Output: (1 2 a 3)
(list 1 '(2 3) 4)
;Output: (1 (2 3) 4)
Because of the way that lists are constructed from cons pairs, the cons procedure can be used to add an element to the front of a list. The cons procedure is asymmetric in how it handles list arguments, because of how lists are constructed.
(cons 1 '(2 3))
;Output: (1 2 3)
(cons '(1 2) '(3 4))
;Output: ((1 2) 3 4)
The append procedure appends two (or more) lists to one another. Because Lisp lists are linked lists, appending two lists has asymptotic time complexity O(n).
(append '(1 2) '(3 4))
;Output: (1 2 3 4)
(append '(1 2 3) '() '(a) '(5 6))
;Output: (1 2 3 a 5 6)
Shared structure
Lisp lists, being simple linked lists, can share structure with one another. That is to say, two lists can have the same tail, or final sequence of conses. For instance, after the execution of the following Common Lisp code:
(setf foo (list 'a 'b 'c))
(setf bar (cons 'x (cdr foo)))
the lists foo and bar are (a b c) and (x b c) respectively. However, the tail (b c) is the same structure in both lists. It is not a copy; the cons cells pointing to b and c are in the same memory locations for both lists.
Sharing structure rather than copying can give a dramatic performance improvement. However, this technique can interact in undesired ways with functions that alter lists passed to them as arguments. Altering one list, such as by replacing the c with a goose, will affect the other:
(setf (third foo) 'goose)
This changes foo to (a b goose), but thereby also changes bar to (x b goose) – a possibly unexpected result. This can be a source of bugs, and functions which alter their arguments are documented as destructive for this very reason.
Aficionados of functional programming avoid destructive functions. In the Scheme dialect, which favors the functional style, the names of destructive functions are marked with a cautionary exclamation point, or "bang", such as set-car! (read set car bang), which replaces the car of a cons. In the Common Lisp dialect, destructive functions are commonplace; the equivalent of set-car! is named rplaca for "replace car". This function is rarely seen, however, as Common Lisp includes a special facility, setf, to make it easier to define and use destructive functions. A frequent style in Common Lisp is to write code functionally (without destructive calls) when prototyping, then to add destructive calls as an optimization where it is safe to do so.
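As an illustrative sketch of that style (the variable and data values are invented for the example; reverse, nreverse, and setf are standard Common Lisp), one might prototype with the non-destructive reverse and switch to the destructive nreverse only where the list is known not to be shared:
(setf scores (list 3 1 2))
(reverse scores)
;Output: (2 1 3), and scores itself is unchanged
(setf scores (nreverse scores))
;destructive counterpart: reuses the original conses instead of copying
(setf (car scores) 99)
;setf with an accessor performs an in-place, destructive update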
Self-evaluating forms and quoting
Lisp evaluates expressions which are entered by the user. Symbols and lists evaluate to some other (usually, simpler) expression – for instance, a symbol evaluates to the value of the variable it names; (+ 2 3) evaluates to 5. However, most other forms evaluate to themselves: if entering 5 into Lisp, it returns 5.
Any expression can also be marked to prevent it from being evaluated (as is necessary for symbols and lists). This is the role of the quote special operator, or its abbreviation ' (one quotation mark). For instance, usually if entering the symbol foo, it returns the value of the corresponding variable (or an error, if there is no such variable). To refer to the literal symbol, enter (quote foo) or, usually, 'foo.
Both Common Lisp and Scheme also support the backquote operator (termed quasiquote in Scheme), entered with the ` character (backtick). This is almost the same as the plain quote, except it allows expressions to be evaluated and their values interpolated into a quoted list with the comma , (unquote) and comma-at ,@ (splice) operators. If the variable snue has the value (bar baz) then `(foo ,snue) evaluates to (foo (bar baz)), while `(foo ,@snue) evaluates to (foo bar baz). The backquote is most often used in defining macro expansions.
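For instance, a small macro written with backquote (the name my-unless is invented for this sketch; it mirrors the behavior of the standard unless operator):
(defmacro my-unless (test &body body)
  `(if ,test
       nil
       (progn ,@body)))
;(my-unless (> 3 4) (print "smaller")) expands to
;(if (> 3 4) nil (progn (print "smaller")))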
Self-evaluating forms and quoted forms are Lisp's equivalent of literals. It may be possible to modify the values of (mutable) literals in program code. For instance, if a function returns a quoted form, and the code that calls the function modifies the form, this may alter the behavior of the function on subsequent invocations.
(defun should-be-constant ()
'(one two three))
(let ((stuff (should-be-constant)))
(setf (third stuff) 'bizarre)) ; bad!
(should-be-constant) ; returns (one two bizarre)
Modifying a quoted form like this is generally considered bad style, and is defined by ANSI Common Lisp as erroneous (resulting in "undefined" behavior in compiled files, because the file-compiler can coalesce similar constants, put them in write-protected memory, etc.).
Lisp's formalization of quotation has been noted by Douglas Hofstadter (in Gödel, Escher, Bach) and others as an example of the philosophical idea of self-reference.
Scope and closure
The Lisp family splits over the use of dynamic or static (a.k.a. lexical) scope. Clojure, Common Lisp and Scheme make use of static scoping by default, while newLISP, Picolisp and the embedded languages in Emacs and AutoCAD use dynamic scoping. Since version 24.1, Emacs uses both dynamic and lexical scoping.
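A short sketch of what lexical (static) scoping enables in Common Lisp (the function name is invented for the example): a function returned from make-counter closes over the variable bound in its defining environment.
(defun make-counter ()
  (let ((count 0))
    (lambda () (incf count))))   ;the lambda captures count lexically
(setf counter (make-counter))
(funcall counter)
;Output: 1
(funcall counter)
;Output: 2, the closure retains its own private count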
List structure of program code; exploitation by macros and compilers
A fundamental distinction between Lisp and other languages is that in Lisp, the textual representation of a program is simply a human-readable description of the same internal data structures (linked lists, symbols, numbers, characters, etc.) as would be used by the underlying Lisp system.
Lisp uses this to implement a very powerful macro system. Like other macro languages such as the one defined by the C preprocessor (the macro preprocessor for the C, Objective-C and C++ programming languages), a macro returns code that can then be compiled. However, unlike C preprocessor macros, the macros are Lisp functions and so can exploit the full power of Lisp.
Further, because Lisp code has the same structure as lists, macros can be built with any of the list-processing functions in the language. In short, anything that Lisp can do to a data structure, Lisp macros can do to code. In contrast, in most other languages, the parser's output is purely internal to the language implementation and cannot be manipulated by the programmer.
This feature makes it easy to develop efficient languages within languages. For example, the Common Lisp Object System can be implemented cleanly as a language extension using macros. This means that if an application needs a different inheritance mechanism, it can use a different object system. This is in stark contrast to most other languages; for example, Java does not support multiple inheritance and there is no reasonable way to add it.
In simplistic Lisp implementations, this list structure is directly interpreted to run the program; a function is literally a piece of list structure which is traversed by the interpreter in executing it. However, most substantial Lisp systems also include a compiler. The compiler translates list structure into machine code or bytecode for execution. This code can run as fast as code compiled in conventional languages such as C.
Macros expand before the compilation step, and thus offer some interesting options. If a program needs a precomputed table, then a macro might create the table at compile time, so the compiler need only output the table and need not call code to create the table at run time. Some Lisp implementations even have a mechanism, eval-when, that allows code to be present during compile time (when a macro would need it), but not present in the emitted module.
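A minimal sketch of the precomputed-table idea (the macro name and contents are invented for the example): the macro body runs while the compiler expands it, so the emitted code contains only the finished literal.
(defmacro squares-table (n)
  (let ((table (make-array n)))     ;built at macro-expansion (compile) time
    (dotimes (i n)
      (setf (aref table i) (* i i)))
    `(quote ,table)))
;(squares-table 5) expands to a literal vector equal to #(0 1 4 9 16),
;so no squaring is performed at run time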
Evaluation and the read–eval–print loop
Lisp languages are often used with an interactive command line, which may be combined with an integrated development environment (IDE). The user types in expressions at the command line, or directs the IDE to transmit them to the Lisp system. Lisp reads the entered expressions, evaluates them, and prints the result. For this reason, the Lisp command line is called a read–eval–print loop (REPL).
The basic operation of the REPL is as follows. This is a simplistic description which omits many elements of a real Lisp, such as quoting and macros.
The read function accepts textual S-expressions as input, and parses them into an internal data structure. For instance, if you type the text (+ 1 2) at the prompt, read translates this into a linked list with three elements: the symbol +, the number 1, and the number 2. It so happens that this list is also a valid piece of Lisp code; that is, it can be evaluated. This is because the car of the list names a function, the addition operation.
A foo will be read as a single symbol. 123 will be read as the number one hundred and twenty-three. "123" will be read as the string "123".
The eval function evaluates the data, returning zero or more other Lisp data as a result. Evaluation does not have to mean interpretation; some Lisp systems compile every expression to native machine code. It is simple, however, to describe evaluation as interpretation: To evaluate a list whose car names a function, eval first evaluates each of the arguments given in its cdr, then applies the function to the arguments. In this case, the function is addition, and applying it to the argument list (1 2) yields the answer 3. This is the result of the evaluation.
The symbol foo evaluates to the value of the symbol foo. Data like the string "123" evaluates to the same string. The list (quote (1 2 3)) evaluates to the list (1 2 3).
It is the job of the print function to represent output to the user. For a simple result such as 3 this is trivial. An expression which evaluated to a piece of list structure would require that print traverse the list and print it out as an S-expression.
To implement a Lisp REPL, it is necessary only to implement these three functions and an infinite-loop function. (Naturally, the implementation of eval will be complex, since it must also implement all special operators like if or lambda.) This done, a basic REPL is one line of code: (loop (print (eval (read)))).
The Lisp REPL typically also provides input editing, an input history, error handling and an interface to the debugger.
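As a rough illustration of how such conveniences fit around the core loop, here is a minimal sketch (illustrative only; the function name simple-repl and the prompt string are invented for this example) that wraps the one-line REPL with a prompt and basic error handling using Common Lisp's standard handler-case:
(defun simple-repl ()
  (loop
    (format t "~&my-repl> ")    ; print a prompt
    (finish-output)
    (handler-case
        (print (eval (read)))   ; the read-eval-print core
      (error (e)
        (format t "~&Error: ~a" e)))))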
Lisp is usually evaluated eagerly. In Common Lisp, arguments are evaluated in applicative order ('leftmost innermost'), while in Scheme order of arguments is undefined, leaving room for optimization by a compiler.
Control structures
Lisp originally had very few control structures, but many more were added during the language's evolution. (Lisp's original conditional operator, cond, is the precursor to later if-then-else structures.)
Programmers in the Scheme dialect often express loops using tail recursion. Scheme's prevalence in academic computer science has led some students to believe that tail recursion is the only, or the most common, way to write iterations in Lisp, but this is incorrect. All commonly seen Lisp dialects have imperative-style iteration constructs, from Scheme's do loop to Common Lisp's complex loop expressions. What makes this an objective rather than a subjective matter is that Scheme makes specific requirements for the handling of tail calls: the use of tail recursion is generally encouraged in Scheme because the practice is expressly supported by the language definition. By contrast, ANSI Common Lisp does not require the optimization commonly termed tail-call elimination. Thus, using tail-recursive style as a casual replacement for more traditional iteration constructs (such as do, dolist, or loop) is discouraged in Common Lisp not just as a matter of stylistic preference, but potentially for reasons of efficiency (since an apparent tail call in Common Lisp may not compile as a simple jump) and program correctness (since tail recursion may increase stack use in Common Lisp, risking stack overflow).
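To make the contrast concrete, here is a small hedged sketch in Common Lisp (the function names sum-recursive and sum-iterative are invented for this example): the first definition is written in tail-recursive style, which the standard does not guarantee will compile to a simple jump, while the second uses an explicit iteration construct and never grows the stack.
(defun sum-recursive (n &optional (acc 0))
  ;; Tail call: may or may not be eliminated, depending on the implementation.
  (if (zerop n)
      acc
      (sum-recursive (1- n) (+ n acc))))

(defun sum-iterative (n)
  (do ((i 1 (1+ i))          ; loop counter
       (acc 0 (+ acc i)))    ; running total (step uses the previous i)
      ((> i n) acc)))        ; exit test and result form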
Some Lisp control structures are special operators, equivalent to other languages' syntactic keywords. Expressions using these operators have the same surface appearance as function calls, but differ in that the arguments are not necessarily evaluated—or, in the case of an iteration expression, may be evaluated more than once.
In contrast to most other major programming languages, Lisp allows implementing control structures using the language. Several control structures are implemented as Lisp macros, and can even be macro-expanded by the programmer who wants to know how they work.
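As a minimal sketch of this idea (the while operator shown here is hypothetical, not part of standard Common Lisp), a looping construct can be written as an ordinary macro on top of do and then inspected with macroexpand-1:
(defmacro while (test &body body)
  `(do ()               ; no loop variables
       ((not ,test))    ; stop when the test becomes false
     ,@body))

;; (macroexpand-1 '(while (< x 10) (incf x)))
;; => (DO () ((NOT (< X 10))) (INCF X))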
Both Common Lisp and Scheme have operators for non-local control flow. The differences in these operators are some of the deepest differences between the two dialects. Scheme supports re-entrant continuations using the call-with-current-continuation (call/cc) procedure, which allows a program to save (and later restore) a particular place in execution. Common Lisp does not support re-entrant continuations, but does support several ways of handling escape continuations.
Often, the same algorithm can be expressed in Lisp in either an imperative or a functional style. As noted above, Scheme tends to favor the functional style, using tail recursion and continuations to express control flow. However, imperative style is still quite possible. The style preferred by many Common Lisp programmers may seem more familiar to programmers used to structured languages such as C, while that preferred by Schemers more closely resembles pure-functional languages such as Haskell.
Because of Lisp's early heritage in list processing, it has a wide array of higher-order functions relating to iteration over sequences. In many cases where an explicit loop would be needed in other languages (like a for loop in C), in Lisp the same task can be accomplished with a higher-order function. (The same is true of many functional programming languages.)
A good example is a function which in Scheme is called map and in Common Lisp is called mapcar. Given a function and one or more lists, mapcar applies the function successively to the lists' elements in order, collecting the results in a new list:
(mapcar #'+ '(1 2 3 4 5) '(10 20 30 40 50))
This applies the + function to each corresponding pair of list elements, yielding the result (11 22 33 44 55).
Examples
Here are examples of Common Lisp code.
The basic "Hello, World!" program:
(print "Hello, World!")
Lisp syntax lends itself naturally to recursion. Mathematical problems such as the enumeration of recursively defined sets are simple to express in this notation. For example, to evaluate a number's factorial:
(defun factorial (n)
(if (zerop n) 1
(* n (factorial (1- n)))))
An alternative implementation takes less stack space than the previous version if the underlying Lisp system optimizes tail recursion:
(defun factorial (n &optional (acc 1))
(if (zerop n) acc
(factorial (1- n) (* acc n))))
Contrast the examples above with an iterative version which uses Common Lisp's loop macro:
(defun factorial (n)
(loop for i from 1 to n
for fac = 1 then (* fac i)
finally (return fac)))
The following function reverses a list. (Lisp's built-in reverse function does the same thing.)
(defun -reverse (list)
(let ((return-value))
(dolist (e list) (push e return-value))
return-value))
Object systems
Various object systems and models have been built on top of, alongside, or into Lisp, including
The Common Lisp Object System, CLOS, is an integral part of ANSI Common Lisp. CLOS descended from New Flavors and CommonLOOPS. ANSI Common Lisp was the first standardized object-oriented programming language (1994, ANSI X3J13). (A brief illustrative CLOS sketch appears after this list.)
ObjectLisp or Object Lisp, used by Lisp Machines Incorporated and early versions of Macintosh Common Lisp
LOOPS (Lisp Object-Oriented Programming System) and the later CommonLoops
Flavors, built at MIT, and its descendant New Flavors (developed by Symbolics).
KR (short for Knowledge Representation), a constraints-based object system developed to aid the writing of Garnet, a GUI library for Common Lisp.
Knowledge Engineering Environment (KEE) used an object system named UNITS and integrated it with an inference engine and a truth maintenance system (ATMS).
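The following is a brief illustrative CLOS sketch (the shape and circle classes are invented for this example, not drawn from any of the systems above), showing class definition, inheritance, and a generic function with a method:
(defclass shape ()
  ((name :initarg :name :reader shape-name)))

(defclass circle (shape)                     ; CIRCLE inherits the NAME slot
  ((radius :initarg :radius :reader radius)))

(defgeneric area (s)
  (:documentation "Return the area of a shape."))

(defmethod area ((c circle))
  (* pi (radius c) (radius c)))

;; (area (make-instance 'circle :name "unit" :radius 1.0)) => approximately 3.1416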
Operating systems
Several operating systems, including language-based systems, are based on Lisp (use Lisp features, conventions, methods, data structures, etc.), or are written in Lisp, including:
Genera, renamed Open Genera, by Symbolics; Medley, written in Interlisp, originally a family of graphical operating systems that ran on Xerox's later Star workstations; Mezzano; Interim; ChrysaLisp, by developers of Tao Systems' TAOS; and also the Guix System for GNU/Linux.
Leather
Leather is a strong, flexible and durable material obtained from the tanning, or chemical treatment, of animal skins and hides to prevent decay. The most common leathers come from cattle, sheep, goats, equine animals, buffalo, pigs and hogs, and aquatic animals such as seals and alligators.
Leather can be used to make a variety of items, including clothing, footwear, handbags, furniture, tools and sports equipment, and lasts for decades. Leather making has been practiced for more than 7,000 years and the leading producers of leather today are China and India.
Critics of tanneries claim that they engage in unsustainable practices that pose health hazards to the people and the environment near them.
Production processes
The leather manufacturing process is divided into three fundamental subprocesses: preparatory stages, tanning, and crusting. A further subprocess, finishing, can be added into the leather process sequence, but not all leathers receive finishing.
The preparatory stages are when the hide is prepared for tanning. Preparatory stages may include soaking, hair removal, liming, deliming, bating, bleaching, and pickling.
Tanning is a process that stabilizes the proteins, particularly collagen, of the raw hide to increase the thermal, chemical and microbiological stability of the hides and skins, making it suitable for a wide variety of end applications. The principal difference between raw and tanned hides is that raw hides dry out to form a hard, inflexible material that, when rewetted, will putrefy, while tanned material dries to a flexible form that does not become putrid when rewetted.
Many tanning methods and materials exist. The typical process sees tanners load the hides into a drum and immerse them in a tank that contains the tanning "liquor". The hides soak while the drum slowly rotates about its axis, and the tanning liquor slowly penetrates through the full thickness of the hide. Once the process achieves even penetration, workers slowly raise the liquor's pH in a process called basification, which fixes the tanning material to the leather. The more tanning material fixed, the higher the leather's hydrothermal stability and shrinkage temperature resistance.
Crusting is a process that thins and lubricates leather. It often includes a coloring operation. Chemicals added during crusting must be fixed in place. Crusting culminates with a drying and softening operation, and may include splitting, shaving, dyeing, whitening or other methods.
For some leathers, tanners apply a surface coating, called "finishing". Finishing operations can include oiling, brushing, buffing, coating, polishing, embossing, glazing, or tumbling, among others.
Leather can be oiled to improve its water resistance. This currying process after tanning supplements the natural oils remaining in the leather itself, which can be washed out through repeated exposure to water. Frequent oiling of leather, with mink oil, neatsfoot oil, or a similar material keeps it supple and improves its lifespan dramatically.
Tanning methods
Tanning processes largely differ in which chemicals are used in the tanning liquor. Some common types include:
Vegetable-tanned leather is tanned using tannins extracted from vegetable matter, such as tree bark prepared in bark mills. It is the oldest known method. It is supple and light brown in color, with the exact shade depending on the mix of materials and the color of the skin. The color tan derives its name from the appearance of undyed vegetable-tanned leather. Vegetable-tanned leather is not stable in water; it tends to discolor, and if left to soak and then dry, it shrinks and becomes harder, a feature of vegetable-tanned leather that is exploited in traditional shoemaking. In hot water, it shrinks drastically and partly congeals, becoming rigid and eventually brittle. Boiled leather is an example of this, where the leather has been hardened by being immersed in boiling water, or in wax or similar substances. Historically, it was occasionally used as armor after hardening, and it has also been used for book binding.
Chrome-tanned leather is tanned using chromium sulfate and other chromium salts. It is also known as "wet blue" for the pale blue color of the undyed leather. The chrome tanning method usually takes approximately one day to complete, making it best suited for large-scale industrial use. This is the most common method in modern use. It is more supple and pliable than vegetable-tanned leather and does not discolor or lose shape as drastically in water as vegetable-tanned. However, there are environmental concerns with this tanning method, as chromium is a heavy metal; while the trivalent chromium used for tanning is harmless, other byproducts can contain toxic variants. The method was developed in the latter half of the 19th century as tanneries wanted to find ways to speed up the process and to make leather more waterproof.
Aldehyde-tanned leather is tanned using glutaraldehyde or oxazolidine compounds. It is referred to as "wet white" due to its pale cream color. It is the main type of "chrome-free" leather, often seen in shoes for infants and automobiles. Formaldehyde has been used for tanning in the past; it is being phased out due to danger to workers and sensitivity of many people to formaldehyde. Chamois leather is a form of aldehyde-tanned leather that is porous and highly water-absorbent. Chamois leather is made using oil (traditionally cod oil) that oxidizes to produce the aldehydes that tan the leather.
Brain tanned leathers are made by a labor-intensive process that uses emulsified oils, often those of animal brains such as deer, cattle, and buffalo. An example of this kind is buckskin. Leather products made in this manner are known for their exceptional softness and washability.
Alum leather is transformed using aluminium salts mixed with a variety of binders and protein sources, such as flour and egg yolk. Alum leather is not actually tanned; rather the process is called "tawing", and the resulting material reverts to rawhide if soaked in water long enough to remove the alum salts.
Grades
In general, leather is produced in the following grades:
Top-grain leather includes the outer layer of the hide, known as the grain, which features finer, more densely packed fibers, resulting in strength and durability. Depending on thickness, it may also contain some of the more fibrous under layer, known as the corium. Types of top-grain leather include:
Full-grain leather contains the entire grain layer, without any removal of the surface. Rather than wearing out, it develops a patina during its useful lifetime. It is usually considered the highest quality leather. Furniture and footwear are often made from full-grain leather. Full-grain leather is typically finished with a soluble aniline dye. Russia leather is a form of full-grain leather.
Corrected grain leather has the surface subjected to finishing treatments to create a more uniform appearance. This usually involves buffing or sanding away flaws in the grain, then dyeing and embossing the surface.
Nubuck is top-grain leather that has been sanded or buffed on the grain side to give a slight nap of short protein fibers, producing a velvet-like surface.
Split leather is created from the corium left once the top-grain has been separated from the hide, known as the drop split. In thicker hides, the drop split can be further split into a middle split and a flesh split.
Bicast leather is split leather that is coated with a layer of polyurethane or vinyl with an embossed texture. This gives it the appearance of a grain. It is slightly stiffer than top-grain leather but has a more consistent texture.
Patent leather is leather that has been given a high-gloss finish by the addition of a coating. Dating to the late 1700s, it became widely popular after inventor Seth Boyden developed the first mass-production process, using a linseed-oil-based lacquer, in 1818. Modern versions are usually a form of bicast leather.
Suede is made from the underside of a split to create a soft, napped finish. It is often made from younger or smaller animals, as the skins of adults often result in a coarse, shaggy nap.
Bonded leather, also called reconstituted leather, is a material that uses leather scraps that are shredded and bonded together with polyurethane or latex onto a fiber mesh. The amount of leather fibers in the mix varies from 10% to 90%, affecting the properties of the product.
The term "genuine leather" does not describe a specific grade. The term often indicates split leather that has been extensively processed, and some sources describe it as synonymous with bicast leather, or made from multiple splits glued together and coated. In some countries, when it is the description on a product label the term means nothing more than "contains leather"; depending on jurisdiction, regulations limit the term's use in product labelling.
Animals used
Today, most leather is made of cattle (cow) hides, which constitute about 65% of all leather produced. Other animals that are used include sheep (about 13%), goats (about 11%), and pigs (about 10%). Obtaining accurate figures from around the world is difficult, especially for areas where the skin may be eaten. There are significant regional differences in leather production: e.g. goat leather was historically called "Turkey" or "Morocco" due to its association with the Middle East, while pig skin had historically been used the most in Germany. Other animals mentioned below only constitute a fraction of a percent of total leather production.
Horse hides are used to make particularly durable leathers. Shell cordovan is a horse leather made not from the outer skin but from an under layer, found only in equine species, called the shell. It is prized for its mirror-like finish and anti-creasing properties.
Lamb and deerskin are used for soft leather in more expensive apparel. Deerskin is widely used in work gloves and indoor shoes.
Reptilian skins, such as alligator, crocodile, and snake, are noted for their distinct patterns that reflect the scales of their species. This has led to hunting and farming of these species in part for their skins. The Argentine black and white tegu is one of the most exploited reptile species in the world in the leather trade. However, it is not endangered and while monitored, trade is legal in most South American countries.
Kangaroo leather is used to make items that must be strong and flexible. It is the material most commonly used in bullwhips. Some motorcyclists favor kangaroo leather for motorcycle leathers because of its light weight and abrasion resistance. Kangaroo leather is also used for falconry jesses, soccer footwear, (e.g. Adidas Copa Mundial) and boxing speed bags.
Although originally raised for their feathers in the 19th century, ostriches are now more popular for both meat and leather. Ostrich leather has a characteristic "goose bump" look because of the large follicles where the feathers grew. Different processes produce different finishes for many applications, including upholstery, footwear, automotive products, accessories, and clothing.
In Thailand, stingray leather is used in wallets and belts. Stingray leather is tough and durable. The leather is often dyed black and covered with tiny round bumps in the natural pattern of the back ridge of an animal. These bumps are then usually dyed white to highlight the decoration. Stingray rawhide is also used as grips on Chinese swords, Scottish basket hilted swords, and Japanese katanas. Stingray leather is also used for high abrasion areas in motorcycle racing leathers (especially in gloves, where its high abrasion resistance helps prevent wear through in the event of an accident).
For a given thickness, fish leather is typically much stronger due to its criss-crossed fibers.
Environmental impact
Leather produces some environmental impact, most notably due to:
The carbon footprint of cattle rearing (see environmental impact of meat production)
Use of chemicals in the tanning process (e.g., chromium, phthalate esters, nonyl phenol ethoxylate soaps, pentachlorophenol and solvents)
Air pollution due to the transformation process (hydrogen sulfide is formed during mixing with acids and ammonia liberated during deliming, solvent vapors)
Carbon footprint
Estimates of the carbon footprint of bovine leather range from 65 to 150 kg of CO2 equivalent per square meter of production.
Water footprint
One ton of hide or skin generally produces 20 to 80 m3 of waste water, including chromium levels of 100–400 mg/L, sulfide levels of 200–800 mg/L, high levels of fat and other solid wastes, and notable pathogen contamination. Producers often add pesticides to protect hides during transport. With solid wastes representing up to 70% of the wet weight of the original hides, the tanning process represents a considerable strain on water treatment installations.
Disposal
Leather biodegrades slowly, taking 25 to 40 years to decompose. By comparison, vinyl and petrochemical-derived materials take 500 or more years to decompose.
Chemical waste disposal
Tanning is especially polluting in countries where environmental regulations are lax, such as in India, the world's third-largest producer and exporter of leather. To give an example of an efficient pollution prevention system, chromium loads per produced tonne are generally abated from 8 kg to 1.5 kg. VOC emissions are typically reduced from 30 kg/t to 2 kg/t in a properly managed facility. A review of the total pollution load decrease achievable according to the United Nations Industrial Development Organization posts precise data on the abatement achievable through industrially proven low-waste advanced methods, while noting, "even though the chrome pollution load can be decreased by 94% on introducing advanced technologies, the minimum residual load 0.15 kg/t raw hide can still cause difficulties when using landfills and composting sludge from wastewater treatment on account of the regulations currently in force in some countries."
In Kanpur, the self-proclaimed "Leather City of the World", with 10,000 tanneries as of 2011 and a population of three million on the banks of the Ganges, pollution levels were so high that, despite an industry crisis, the pollution control board decided to shut down 49 high-polluting tanneries out of 404 in July 2009. In 2003, for instance, the main tanneries' effluent disposal unit was dumping 22 tonnes of chromium-laden solid waste per day in the open.
In the Hazaribagh neighborhood of Dhaka in Bangladesh, chemicals from tanneries end up in Dhaka's main river. Besides the environmental damage, the health of both local factory workers and the end consumer is also negatively affected. After approximately 15 years of ignoring high court rulings, the government shut down more than 100 tanneries the weekend of 8 April 2017 in the neighborhood.
Because treating effluents costs more than discharging them untreated, illegal dumping occurs to save on costs. For instance, in Croatia in 2001, proper pollution abatement cost US$70–100 per ton of raw hides processed, against $43 per ton for improper disposal. In November 2009, one of Uganda's main leather making companies was caught directly dumping waste water into a wetland adjacent to Lake Victoria.
Role of enzymes
Enzymes like proteases, lipases, and amylases have an important role in the soaking, dehairing, degreasing, and bating operations of leather manufacturing. Proteases are the most commonly used enzymes in leather production. The enzyme must not damage or dissolve collagen or keratin, but should hydrolyze casein, elastin, albumin, globulin-like proteins, and nonstructural proteins that are not essential for leather making. This process is called bating.
Lipases are used in the degreasing operation to hydrolyze fat particles embedded in the skin.
Amylases are used to soften skin, to bring out the grain, and to impart strength and flexibility to the skin. These enzymes are rarely used.
Preservation and conditioning
The natural fibers of leather break down with the passage of time. Acidic leathers are particularly vulnerable to red rot, which causes powdering of the surface and a change in consistency. Damage from red rot is aggravated by high temperatures and relative humidities. Although it is chemically irreversible, treatments can add handling strength and prevent disintegration of red rotted leather.
Exposure to long periods of low relative humidities (below 40%) can cause leather to become desiccated, irreversibly changing the fibrous structure of the leather. Chemical damage can also occur from exposure to environmental factors, including ultraviolet light, ozone, acid from sulfurous and nitrous pollutants in the air, or through a chemical action following any treatment with tallow or oil compounds. Both oxidation and chemical damage occur faster at higher temperatures.
There are a few methods to maintain and clean leather goods properly, such as wiping with a damp cloth while avoiding a wet cloth or soaking the leather in water. Various treatments are available, such as conditioners. Saddle soap is used for cleaning, conditioning, and softening leather. Leather shoes are widely conditioned with shoe polish.
In modern culture
Due to its high resistance to abrasion and wind, leather found a use in rugged occupations. The enduring image of a cowboy in leather chaps gave way to the leather-jacketed and leather-helmeted aviator. When motorcycles were invented, some riders took to wearing heavy leather jackets to protect from road rash and wind blast; some also wear chaps or full leather pants to protect the lower body.
Leather's flexibility allows it to be formed and shaped into balls and protective gear. Subsequently, many sports use equipment made with leather, such as baseball gloves and the ball used in cricket and gridiron football.
Leather fetishism is the name popularly used to describe a fetishistic attraction to people wearing leather, or in certain cases, to the garments themselves.
Many rock groups (particularly heavy metal and punk groups in the 1970s and 80s) are well known for wearing leather clothing. Extreme metal bands (especially black metal bands) and Goth rock groups have extensive black leather clothing. Leather has become less common in the punk community over the last three decades, as there is opposition to the use of leather from punks who support animal rights.
Many cars and trucks come with optional or standard leather or "leather faced" seating.
Religious sensitivities
In countries with significant populations of individuals observing religions which place restrictions on material choices, vendors typically clarify the source of leather in their products. Such labeling helps facilitate religious observance, so, for example, a Muslim will not accidentally purchase pigskin or a Hindu can avoid cattleskin. Such taboos increase the demand for religiously neutral leathers such as ostrich and deer.
Judaism forbids the comfort of wearing leather shoes on Yom Kippur, Tisha B'Av, and during mourning. Also, see Leather in Judaism, Tefillin, and Torah scroll.
Jainism prohibits the use of leather, since it is obtained by killing animals.
Alternatives
Many forms of artificial leather have been developed, usually involving polyurethane or vinyl coatings applied to a cloth backing. Many names and brands for such artificial leathers exist, including "pleather", a portmanteau of "plastic leather", and the brand name Naugahyde.
Other alternatives include cultured leather, which is lab-grown using cell-culture methods; mushroom-based materials; and gelatin-based textiles made by upcycling meat industry waste. Leather made of fungi or mushroom-based materials is completely biodegradable.
Lubricant
A lubricant (sometimes shortened to lube) is a substance that helps to reduce friction between surfaces in mutual contact, which ultimately reduces the heat generated when the surfaces move. It may also have the function of transmitting forces, transporting foreign particles, or heating or cooling the surfaces. The property of reducing friction is known as lubricity.
In addition to industrial applications, lubricants are used for many other purposes. Other uses include cooking (oils and fats in use in frying pans and baking to prevent food sticking), to reduce rusting and friction in machinery, through the use of motor oil and grease, bioapplications on humans (e.g., lubricants for artificial joints), ultrasound examination, medical examination, and sexual intercourse. It is mainly used to reduce friction and to contribute to a better, more efficient functioning of a mechanism.
History
Lubricants have been in some use for thousands of years. Calcium soaps have been identified on the axles of chariots dated to 1400 BC. Building stones were slid on oil-impregnated lumber in the time of the pyramids. In the Roman era, lubricants were based on olive oil and rapeseed oil, as well as animal fats. The growth of lubrication accelerated in the Industrial Revolution with the accompanying use of metal-based machinery. Relying initially on natural oils, needs for such machinery shifted toward petroleum-based materials early in the 1900s. A breakthrough came with the development of vacuum distillation of petroleum, as described by the Vacuum Oil Company. This technology allowed the purification of very non-volatile substances, which are common in many lubricants.
Properties
A good lubricant generally possesses the following characteristics:
A high boiling point and low freezing point (in order to stay liquid within a wide range of temperature)
A high viscosity index
Thermal stability
Hydraulic stability
Demulsibility
Corrosion prevention
A high resistance to oxidation
Pour Point (the minimum temperature at which oil will flow under prescribed test conditions)
Formulation
Typically lubricants contain 90% base oil (most often petroleum fractions, called mineral oils) and less than 10% additives. Vegetable oils or synthetic liquids such as hydrogenated polyolefins, esters, silicones, fluorocarbons and many others are sometimes used as base oils. Additives deliver reduced friction and wear, increased viscosity, improved viscosity index, resistance to corrosion and oxidation, aging or contamination, etc.
Non-liquid lubricants include powders (dry graphite, PTFE, molybdenum disulphide, tungsten disulphide, etc.), PTFE tape used in plumbing, air cushions, and others. Dry lubricants such as graphite, molybdenum disulphide and tungsten disulphide also offer lubrication at temperatures (up to 350 °C) higher than those at which liquid and oil-based lubricants can operate. Limited interest has been shown in the low-friction properties of compacted oxide glaze layers formed at several hundred degrees Celsius in metallic sliding systems; however, practical use is still many years away due to their physically unstable nature.
Additives
A large number of additives are used to impart performance characteristics to lubricants. Modern automotive lubricants contain as many as ten additives, comprising up to 20% of the lubricant. The main families of additives are:
Pour point depressants are compounds that prevent crystallization of waxes. Long chain alkylbenzenes adhere to small crystallites of wax, preventing crystal growth.
Anti-foaming agents are typically silicone compounds which lower surface tension in order to discourage foam formation.
Viscosity index improvers (VIIs) are compounds that allow lubricants to remain viscous at higher temperatures. Typical VIIs are polyacrylates and butadiene.
Antioxidants suppress the rate of oxidative degradation of the hydrocarbon molecules within the lubricant. At low temperatures, free radical inhibitors such as hindered phenols are used, e.g. butylated hydroxytoluene. At temperatures >90 °C, where the metals catalyze the oxidation process, dithiophosphates are more useful. In the latter application the additives are called metal deactivators.
Detergents ensure the cleanliness of engine components by preventing the formation of deposits on contact surfaces at high temperatures.
Corrosion inhibitors (rust inhibitors) are usually alkaline materials, such as alkylsulfonate salts, that absorb acids that would corrode metal parts.
Anti-wear additives form protective 'tribofilms' on metal parts, suppressing wear. They come in two classes depending on the strength with which they bind to the surface. Popular examples include phosphate esters and zinc dithiophosphates.
Extreme pressure (anti-scuffing) additives form protective films on sliding metal parts. These agents are often sulfur compounds, such as dithiophosphates.
Friction modifiers reduce friction and wear, particularly in the boundary lubrication regime where surfaces come into direct contact.
In 1999, an estimated 37,300,000 tons of lubricants were consumed worldwide. Automotive applications, including electric vehicles, dominate, but other industrial, marine, and metalworking applications are also big consumers of lubricants. Although air and other gas-based lubricants are known (e.g., in fluid bearings), liquid lubricants dominate the market, followed by solid lubricants.
Lubricants are generally composed of a majority of base oil plus a variety of additives to impart desirable characteristics. Although generally lubricants are based on one type of base oil, mixtures of the base oils also are used to meet performance requirements.
Mineral oil
The term "mineral oil" is used to refer to lubricating base oils derived from crude oil. The American Petroleum Institute (API) designates several types of lubricant base oil:
Group I – Saturates < 90% and/or sulfur > 0.03%, and Society of Automotive Engineers (SAE) viscosity index (VI) of 80 to 120
Manufactured by solvent extraction, solvent or catalytic dewaxing, and hydro-finishing processes. Common Group I base oil are 150SN (solvent neutral), 500SN, and 150BS (brightstock)
Group II – Saturates > 90% and sulfur < 0.03%, and SAE viscosity index of 80 to 120
Manufactured by hydrocracking and solvent or catalytic dewaxing processes. Group II base oil has superior anti-oxidation properties since virtually all hydrocarbon molecules are saturated. It has a water-white color.
Group III – Saturates > 90%, sulfur < 0.03%, and SAE viscosity index over 120
Manufactured by special processes such as isohydromerization. Can be manufactured from base oil or slack wax from the dewaxing process.
Group IV – Polyalphaolefins (PAO)
Group V – All others not included above, such as naphthenics, polyalkylene glycols (PAG), and polyesters.
The lubricant industry commonly extends this group terminology to include:
Group I+ with a viscosity index of 103–108
Group II+ with a viscosity index of 113–119
Group III+ with a viscosity index of at least 140
Base oils can also be classified into three categories depending on the prevailing composition:
Paraffinic
Naphthenic
Aromatic
Synthetic oils
Petroleum-derived lubricant can also be produced using synthetic hydrocarbons (derived ultimately from petroleum), "synthetic oils".
These include:
Polyalpha-olefin (PAO)
Synthetic esters
Polyalkylene glycols (PAG)
Phosphate esters
Perfluoropolyether (PFPE)
Alkylated naphthalenes (AN)
Silicate esters
Ionic fluids
Multiply alkylated cyclopentanes (MAC)
Solid lubricants
PTFE: polytetrafluoroethylene (PTFE) is typically used as a coating layer on, for example, cooking utensils to provide a non-stick surface. Its usable temperature range of up to 350 °C and chemical inertness make it a useful additive in special greases, where it can function both as a thickener and a lubricant. Under extreme pressure, PTFE powder or solids are of little value, as the material is soft and flows away from the area of contact; ceramic, metal, or alloy lubricants must be used instead.
Inorganic solids: Graphite, hexagonal boron nitride, molybdenum disulfide and tungsten disulfide are examples of solid lubricants. Some retain their lubricity to very high temperatures. The use of some such materials is sometimes restricted by their poor resistance to oxidation (e.g., molybdenum disulfide degrades above 350 °C in air, but above 1100 °C in reducing environments).
Metal/alloy: Metal alloys, composites and pure metals can be used as grease additives or as the sole constituents of sliding surfaces and bearings. Cadmium and gold are used for plating surfaces, which gives them good corrosion resistance and sliding properties. Lead, tin, zinc alloys and various bronze alloys are used as sliding bearings, or their powder can be used to lubricate sliding surfaces alone.
Aqueous lubrication
Aqueous lubrication is of interest in a number of technological applications. Strongly hydrated brush polymers such as PEG can serve as lubricants at liquid solid interfaces. By continuous rapid exchange of bound water with other free water molecules, these polymer films keep the surfaces separated while maintaining a high fluidity at the brush–brush interface at high compressions, thus leading to a very low coefficient of friction.
Biolubricant
Biolubricants are derived from vegetable oils and other renewable sources. They usually are triglyceride esters (fats obtained from plants and animals). For lubricant base oil use, the vegetable derived materials are preferred. Common ones include high oleic canola oil, castor oil, palm oil, sunflower seed oil and rapeseed oil from vegetable, and tall oil from tree sources. Many vegetable oils are often hydrolyzed to yield the acids which are subsequently combined selectively to form specialist synthetic esters. Other naturally derived lubricants include lanolin (wool grease, a natural water repellent).
Whale oil was a historically important lubricant, with some uses up to the latter part of the 20th century as a friction modifier additive for automatic transmission fluid.
In 2008, the biolubricant market was around 1% of UK lubricant sales in a total lubricant market of 840,000 tonnes/year.
Researchers at Australia's CSIRO have been studying safflower oil as an engine lubricant, finding superior performance and lower emissions than petroleum-based lubricants in applications such as engine-driven lawn mowers, chainsaws and other agricultural equipment. Grain-growers trialling the product have welcomed the innovation, with one describing it as needing very little refining, being biodegradable, and serving as a source of bioenergy and biofuel. The scientists have re-engineered the plant using gene silencing, creating a variety that produces up to 93% oil, the highest currently available from any plant. Researchers at Montana State University's Advanced Fuel Centre in the US, studying the oil's performance in a large diesel engine and comparing it with conventional oil, have described the results as a "game-changer".
Greases
Greases are solid or semi-solid lubricants produced by blending thickening agents into a liquid lubricant. Greases are typically composed of about 80% lubricating oil, around 5% to 10% thickener, and approximately 10% to 15% additives. In most common greases, the thickener is a light or alkali metal soap, forming a sponge-like structure that encapsulates the oil droplets. Beyond lubrication, greases are generally expected to provide corrosion protection, typically achieved through additives. To prevent drying out at higher temperatures, dry lubricants are also added. By selecting appropriate oils, thickeners, and additives, the properties of greases can be optimized for a wide range of applications. There are greases suited for high or extremely low temperatures, vacuum applications, water-resistant and weatherproof greases, highly pressure-resistant or creeping types, food-grade, or exceptionally adhesive greases.
Functions of lubricants
One of the largest applications for lubricants, in the form of motor oil, is protecting the internal combustion engines in motor vehicles and powered equipment.
Lubricant vs. anti-tack coating
Anti-tack or anti-stick coatings are designed to reduce the adhesive condition (stickiness) of a given material. The rubber, hose, and wire and cable industries are the largest consumers of anti-tack products but virtually every industry uses some form of anti-sticking agent. Anti-sticking agents differ from lubricants in that they are designed to reduce the inherently adhesive qualities of a given compound while lubricants are designed to reduce friction between any two surfaces.
Keep moving parts apart
Lubricants are typically used to separate moving parts in a system. This separation has the benefit of reducing friction, wear and surface fatigue, together with reduced heat generation, operating noise and vibrations. Lubricants achieve this in several ways. The most common is by forming a physical barrier i.e., a thin layer of lubricant separates the moving parts. This is analogous to hydroplaning, the loss of friction observed when a car tire is separated from the road surface by moving through standing water. This is termed hydrodynamic lubrication. In cases of high surface pressures or temperatures, the fluid film is much thinner and some of the forces are transmitted between the surfaces through the lubricant.
Reduce friction
Typically the lubricant-to-surface friction is much less than surface-to-surface friction in a system without any lubrication. Thus use of a lubricant reduces the overall system friction. Reduced friction has the benefit of reducing heat generation and reduced formation of wear particles as well as improved efficiency. Lubricants may contain polar additives known as friction modifiers that chemically bind to metal surfaces to reduce surface friction even when there is insufficient bulk lubricant present for hydrodynamic lubrication, e.g. protecting the valve train in a car engine at startup. The base oil itself might also be polar in nature and as a result inherently able to bind to metal surfaces, as with polyolester oils.
Transfer heat
Both gas and liquid lubricants can transfer heat. However, liquid lubricants are much more effective on account of their high specific heat capacity. Typically the liquid lubricant is constantly circulated to and from a cooler part of the system, although lubricants may be used to warm as well as to cool when a regulated temperature is required. This circulating flow also determines the amount of heat that is carried away in any given unit of time. High flow systems can carry away a lot of heat and have the additional benefit of reducing the thermal stress on the lubricant. Thus lower cost liquid lubricants may be used. The primary drawback is that high flows typically require larger sumps and bigger cooling units. A secondary drawback is that a high flow system that relies on the flow rate to protect the lubricant from thermal stress is susceptible to catastrophic failure during sudden system shut downs. An automotive oil-cooled turbocharger is a typical example. Turbochargers get red hot during operation and the oil that is cooling them only survives as its residence time in the system is very short (i.e. high flow rate). If the system is shut down suddenly (pulling into a service area after a high-speed drive and stopping the engine) the oil that is in the turbo charger immediately oxidizes and will clog the oil ways with deposits. Over time these deposits can completely block the oil ways, reducing the cooling with the result that the turbo charger experiences total failure, typically with seized bearings. Non-flowing lubricants such as greases and pastes are not effective at heat transfer although they do contribute by reducing the generation of heat in the first place.
Carry away contaminants and debris
Lubricant circulation systems have the benefit of carrying away internally generated debris and external contaminants that get introduced into the system to a filter where they can be removed. Lubricants for machines that regularly generate debris or contaminants such as automotive engines typically contain detergent and dispersant additives to assist in debris and contaminant transport to the filter and removal. Over time the filter will get clogged and require cleaning or replacement, hence the recommendation to change a car's oil filter at the same time as changing the oil. In closed systems such as gear boxes the filter may be supplemented by a magnet to attract any iron fines that get created.
It is apparent that in a circulatory system the oil will only be as clean as the filter can make it. It is thus unfortunate that there are no industry standards by which consumers can readily assess the filtering ability of various automotive filters. Poor automotive filters significantly reduce the life of the machine (engine) as well as make the system inefficient.
Transmit power
Lubricants known as hydraulic fluid are used as the working fluid in hydrostatic power transmission. Hydraulic fluids comprise a large portion of all lubricants produced in the world. The automatic transmission's torque converter is another important application for power transmission with lubricants.
Protect against wear
Lubricants prevent wear by reducing friction between two parts. Lubricants may also contain anti-wear or extreme pressure additives to boost their performance against wear and fatigue.
Prevent corrosion and rusting
Many lubricants are formulated with additives that form chemical bonds with surfaces or that exclude moisture, to prevent corrosion and rust. Such formulations reduce corrosion between two metallic surfaces and keep those surfaces from contacting each other, preventing immersed corrosion.
Seal for gases
Lubricants occupy the clearance between moving parts through capillary force, thus sealing the clearance. This effect can be used to seal pistons and shafts.
Fluid types
Automotive
Motor oils
Petrol (Gasolines) engine oils
Diesel engine oils
Automatic transmission fluid
Gearbox fluids
Brake fluids
Hydraulic fluids
Air conditioning compressor oils
Tractor (one lubricant for all systems)
Universal Tractor Transmission Oil – UTTO
Super Tractor Oil Universal – STOU – includes engine
Other motors
2-stroke engine oils
Industrial
Hydraulic oils
Air compressor oils
Food-grade lubricant
Gas Compressor oils
Gear oils
Bearing and circulating system oils
Refrigerator compressor oils
Steam and gas turbine oils
Aviation
Gas turbine engine oils
Piston engine oils
Marine
Crosshead cylinder oils
Crosshead Crankcase oils
Trunk piston engine oils
Stern tube lubricants
"Glaze" formation (high-temperature wear)
A further phenomenon that has undergone investigation in relation to high-temperature wear prevention and lubrication is that of a compacted oxide layer glaze formation. Such glazes are generated by sintering a compacted oxide layer. Such glazes are crystalline, in contrast to the amorphous glazes seen in pottery. The required high temperatures arise from metallic surfaces sliding against each other (or a metallic surface against a ceramic surface). Due to the elimination of metallic contact and adhesion by the generation of oxide, friction and wear is reduced. Effectively, such a surface is self-lubricating.
As the "glaze" is already an oxide, it can survive to very high temperatures in air or oxidising environments. However, it is disadvantaged by it being necessary for the base metal (or ceramic) having to undergo some wear first to generate sufficient oxide debris.
Disposal and environmental impact
It is estimated that about 50% of all lubricants are released into the environment. Common disposal methods include recycling, burning, landfill and discharge into water, though disposal in landfill and discharge into water are typically strictly regulated in most countries, as even a small amount of lubricant can contaminate a large amount of water. Most regulations permit a threshold level of lubricant that may be present in waste streams, and companies spend hundreds of millions of dollars annually in treating their waste waters to get to acceptable levels.
Burning the lubricant as fuel, typically to generate electricity, is also governed by regulations mainly on account of the relatively high level of additives present. Burning generates both airborne pollutants and ash rich in toxic materials, mainly heavy metal compounds. Thus lubricant burning takes place in specialized facilities that have incorporated special scrubbers to remove airborne pollutants and have access to landfill sites with permits to handle the toxic ash.
Unfortunately, most lubricant that ends up directly in the environment is due to the general public discharging it onto the ground, into drains, and directly into landfills as trash. Other direct contamination sources include runoff from roadways, accidental spillages, natural or man-made disasters, and pipeline leakages.
Improvement in filtration technologies and processes has now made recycling a viable option (with the rising price of base stock and crude oil). Typically various filtration systems remove particulates, additives, and oxidation products and recover the base oil. The oil may get refined during the process. This base oil is then treated much the same as virgin base oil; however, there is considerable reluctance to use recycled oils, as they are generally considered inferior. Basestock fractionally vacuum distilled from used lubricants has superior properties to all-natural oils, but cost-effectiveness depends on many factors. Used lubricant may also be used as refinery feedstock to become part of crude oil. Again, there is considerable reluctance to this use, as the additives, soot, and wear metals will seriously poison/deactivate the critical catalysts in the process. Cost prohibits carrying out both filtration (soot and additive removal) and re-refining (distilling, isomerization, hydrocracking, etc.); however, the primary hindrance to recycling remains the collection of fluids, as refineries need a continuous supply in amounts measured in cisterns or rail tanks.
Occasionally, unused lubricant requires disposal. The best course of action in such situations is to return it to the manufacturer where it can be processed as a part of fresh batches.
Environment: Lubricants, both fresh and used, can cause considerable damage to the environment, mainly due to their high potential for serious water pollution. Further, the additives typically contained in lubricant can be toxic to flora and fauna. In used fluids, the oxidation products can be toxic as well. Lubricant persistence in the environment largely depends upon the base fluid; however, if very toxic additives are used, they may negatively affect its persistence. Lanolin lubricants are non-toxic, making them an environmental alternative that is safe for both users and the environment.
Societies and industry bodies
American Petroleum Institute (API)
Society of Tribologists and Lubrication Engineers (STLE)
National Lubricating Grease Institute (NLGI)
Society of Automotive Engineers (SAE)
Independent Lubricant Manufacturer Association (ILMA)
European Automobile Manufacturers Association (ACEA)
Japanese Automotive Standards Organization (JASO)
Petroleum Packaging Council (PPC)
Major publications
Peer reviewed
ASME Journal of Tribology
Tribology International
Tribology Transactions
Journal of Synthetic Lubricants
Tribology Letters
Lubrication Science
Trade periodicals
Tribology and Lubrication Technology
Fuels & Lubes International
Oiltrends
Lubes n' Greases
Compoundings
Chemical Market Review
Machinery lubrication
Llama
The llama (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.
Llamas are social animals and live with others as a herd. Their wool is soft and contains only a small amount of lanolin. Llamas can learn simple tasks after a few repetitions. When using a pack, they can carry about 25 to 30% of their body weight for 8 to 13 km (5–8 miles). The name llama (also historically spelled "lama" or "glama") was adopted by European settlers from native Peruvians.
The ancestors of llamas are thought to have originated on the Great Plains of North America about 40 million years ago and subsequently migrated to South America about three million years ago during the Great American Interchange. By the end of the last ice age (10,000–12,000 years ago), camelids were extinct in North America. As of 2007, there were over seven million llamas and alpacas in South America. Some were imported to the United States and Canada late in the 20th century; their descendants now number more than 158,000 llamas and 100,000 alpacas.
In Aymara mythology, llamas are important beings. The Heavenly Llama is said to drink water from the ocean and urinate it as rain. According to Aymara eschatology, llamas will return to the water springs and ponds where they came from at the end of time.
Classification
Lamoids, or llamas (as they are more generally known as a group), consist of the vicuña (Lama vicugna), guanaco (Lama guanicoe), Suri alpaca, and Huacaya alpaca (Lama pacos), and the domestic llama (Lama glama). Guanacos and vicuñas live in the wild, while llamas and alpacas exist only as domesticated animals. Although early writers compared llamas to sheep, their similarity to the camel was soon recognized. They were included in the genus Camelus along with alpaca in the Systema Naturae (1758) of Carl Linnaeus. They were, however, separated by Georges Cuvier in 1800 under the name of lama along with the guanaco. DNA analysis has confirmed that the guanaco is the wild ancestor of the llama, while the vicuña is the wild ancestor of the alpaca.
The genus Lama, together with the two species of true camels, comprises the sole existing representatives of a very distinct section of the Artiodactyla (even-toed ungulates) called Tylopoda, or "bump-footed", from the peculiar bumps on the soles of their feet. The Tylopoda consists of a single family, the Camelidae, and shares the order Artiodactyla with the Suina (pigs), the Tragulina (chevrotains), the Pecora (ruminants), and the Whippomorpha (hippos and cetaceans, which belong to Artiodactyla from a cladistic, if not traditional, standpoint). The Tylopoda have more or less affinity to each of the sister taxa, standing in some respects in a middle position between them, sharing some characteristics from each, but in others showing special modifications not found in any of the other taxa.
The 19th-century discoveries of a vast and previously unexpected extinct Paleogene fauna of North America, as interpreted by paleontologists Joseph Leidy, Edward Drinker Cope, and Othniel Charles Marsh, aided understanding of the early history of this family. Llamas were not always confined to South America; abundant llama-like remains were found in Pleistocene deposits in the Rocky Mountains and in Central America. Some of the fossil llamas were much larger than current forms. Some species remained in North America during the last ice ages. North American llamas are categorized as an extinct genus, Hemiauchenia. Llama-like animals would have been a common sight 25,000 years ago in modern-day California, Texas, New Mexico, Utah, Missouri, and Florida.
The camelid lineage has a good fossil record. Camel-like animals have been traced back through early Miocene forms from the thoroughly differentiated, modern species. Their characteristics became more general, and they lost those that distinguished them as camelids; hence, they were classified as ancestral artiodactyls. No fossils of these earlier forms have been found in the Old World, indicating that North America was the original home of camelids and that the ancestors of Old World camels crossed over via the Bering Land Bridge from North America. The formation of the Isthmus of Panama three million years ago allowed camelids to spread to South America as part of the Great American Interchange, where they evolved further. Meanwhile, North American camelids died out at the end of the Pleistocene.
Characteristics
A full-grown llama can reach a height of at the top of the head and can weigh between . At maturity, males can weigh 94.74 kg, while females can weigh 102.27 kg. At birth, a baby llama (called a cria) can weigh between . Llamas typically live for 15 to 25 years, with some individuals surviving 30 years or more.
The following characteristics apply especially to llamas. Dentition of adults: incisors 1/3, canines 1/1, premolars 2/2, molars 3/3; total 32. In the upper jaw, a compressed, sharp, pointed laniariform incisor near the hinder edge of the premaxilla is followed in the male at least by a moderate-sized, pointed, curved true canine in the anterior part of the maxilla. The isolated canine-like premolar that follows in the camels is not present. The teeth of the molar series, which are in contact with each other, consist of two very small premolars (the first almost rudimentary) and three broad molars, generally constructed like those of Camelus. In the lower jaw, the three incisors are long, spatulate, and procumbent; the outer ones are the smallest. Next to these is a curved, suberect canine, followed after an interval by an isolated minute and often deciduous simple conical premolar; then a contiguous series of one premolar and three molars, which differ from those of Camelus in having a small accessory column at the anterior outer edge.
The skull generally resembles that of Camelus, with a larger brain cavity and orbits and less-developed cranial ridges due to its smaller size. The nasal bones are shorter and broader and are joined by the premaxilla.
Vertebrae:
cervical 7,
dorsal 12,
lumbar 7,
sacral 4,
caudal 15 to 20.
The ears are rather long and slightly curved inward, characteristically known as "banana" shaped. There is no dorsal hump. The feet are narrow, the toes being more separated than in the camels, each having a distinct plantar pad. The tail is short, and the fiber is long, woolly, and soft.
In essential structural characteristics, as well as in general appearance and habits, all the animals of this genus very closely resemble each other, so whether they should be considered as belonging to one, two, or more species is a matter of controversy among naturalists.
The question is complicated by the fact that most individuals that have come under observation have been either completely or partially domesticated. Many are also descended from ancestors previously domesticated, a state that tends to produce a certain amount of variation from the original type. The four forms commonly distinguished by the inhabitants of South America are recognized as distinct species, though there are difficulties in defining their distinctive characteristics.
These are:
the llama, Lama glama (Linnaeus);
the alpaca, Lama pacos (Linnaeus);
the guanaco (from the Quechua huanaco), Lama guanicoe (Müller); and
the vicuña, Lama vicugna (Molina)
The llama and alpaca are only known in the domestic state and are variable in size and of many colors, often white, brown, or piebald. Some are grey or black. The guanaco and vicuña are wild. The guanaco is endangered; it has a nearly uniform light-brown color, passing into white below.
The guanaco and vicuña certainly differ: The vicuña is more petite, more slender in its proportions, and has a shorter head than the guanaco.
The vicuña lives in herds on the bleak and elevated parts of the mountain range bordering the region of perpetual snow, amidst rocks and precipices, occurring in various suitable localities throughout Peru, in the southern part of Ecuador, and as far south as the middle of Bolivia. Its manners very much resemble those of the chamois of the European Alps; it is as vigilant, wild, and timid.
Vicuña fiber is extremely delicate and soft and highly valued for weaving, but the quantity that each animal produces is small.
Alpacas are primarily descended from wild vicuña ancestors. In contrast, domesticated llamas are descended primarily from wild guanaco ancestors, although a considerable amount of hybridization between the two species has occurred.
Differential characteristics between llamas and alpacas include the llama's larger size, longer head, and curved ears. Alpaca fiber is generally more expensive but not always more valuable. Alpacas tend to have a more consistent color throughout the body. The most apparent visual difference between llamas and camels is that camels have humps and llamas do not.
Llamas are not ruminants, pseudo-ruminants, or modified ruminants. They do have a complex three-compartment stomach that allows them to digest lower-quality, high-cellulose foods. The stomach compartments allow for fermentation of tough foodstuffs, followed by regurgitation and re-chewing. Ruminants (cows, sheep, goats) have four compartments, whereas llamas have only three stomach compartments: the rumen, omasum, and abomasum.
In addition, the llama (and other camelids) have an extremely long and complex large intestine (colon). The large intestine's role in digestion is to reabsorb water, vitamins, and electrolytes from food waste passing through it. The length of the llama's colon allows it to survive on much less water than other animals. This is a major advantage in arid climates where they live.
Reproduction
Llamas have an unusual reproductive cycle for a large animal. Female llamas are induced ovulators. Through mating, the female releases an egg and is often fertilized on the first attempt. Female llamas do not go into estrus ("heat").
Like humans, llama males and females mature sexually at different rates. Females reach puberty at about 12 months old; males do not become sexually mature until around three years of age.
Mating
Llamas mate in a kush (lying down) position, similar to big cats and canines, which is unusual in a large animal. They mate for an extended time (20–45 minutes), also unusual in a large animal.
Gestation
The gestation period of a llama is 11.5 months (350 days). Dams (female llamas) do not lick off their babies, as they have an attached tongue that does not reach outside of the mouth more than . Rather, they will nuzzle and hum to their newborns.
Crias
A cria (from Spanish for "baby") is the name for a baby llama, alpaca, vicuña, or guanaco. Crias are typically born with all the females of the herd gathering around to protect against the male llamas and potential predators. Llamas give birth standing. Birth is usually quick and problem-free, over in less than 30 minutes. Most births occur between 8 am and noon, during the warmer daylight hours. This may increase cria survival by reducing fatalities due to hypothermia during cold Andean nights. This birthing pattern is considered a continuation of the birthing patterns observed in the wild. Crias are up and standing, walking, and attempting to suckle within the first hour after birth. Crias are partially fed with llama milk, which is lower in fat and salt and higher in phosphorus and calcium than cow or goat milk. A female llama produces only about of milk at a time, so the cria must suckle frequently to receive the nutrients it requires.
Breeding methods
In harem mating, the male is left with females most of the year.
For field mating, a female is turned into a field with a male llama and left there for some time. This is the easiest method in terms of labor but the least useful in predicting a likely birth date. An ultrasound test can be performed, and together with the exposure dates, a better idea of when the cria is expected can be determined.
Hand mating is the most efficient method, but it requires the most work on the part of the human involved. A male and female llama are put into the same pen, and mating is monitored. They are then separated and re-mated every other day until one refuses the mating. Usually, one can get in two matings using this method, though some stud males routinely refuse to mate a female more than once. The separation presumably helps to keep the sperm count high for each mating and also helps to keep the condition of the female llama's reproductive tract more sound. If the mating is unsuccessful within two to three weeks, the female is mated again.
Nutrition
Options for feeding llamas are quite wide; various commercial and farm-based feeds are available. The major determining factors include feed cost, availability, nutrient balance and energy density required. Young, actively growing llamas require a greater concentration of nutrients than mature animals because of their smaller digestive tract capacities.
Behavior
Llamas that are well-socialized and trained to halter and lead after weaning are very friendly and pleasant to be around. They are extremely curious, and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled when young can become extremely difficult to handle when mature, as they begin to treat humans as they treat each other, with bouts of spitting, kicking, and neck wrestling.
Llamas are now utilized as certified therapy animals in nursing homes and hospitals. Rojo the Llama, located in the Pacific Northwest, was certified in 2008. The Mayo Clinic says animal-assisted therapy can reduce pain, depression, anxiety, and fatigue. This type of therapy is growing in popularity, and several organizations throughout the United States participate.
Llamas that are correctly reared rarely spit at humans. Llamas are very social herd animals, however, and sometimes spit at each other to discipline lower-ranked llamas. A llama's social rank in a herd is never static: they can always move up or down the social ladder by picking small fights. This is usually done between males to see which will become dominant. Their fights are visually dramatic, characterized by spitting, ramming each other with their chests, neck wrestling, and kicking, mainly to knock the other off balance. The females are usually only seen spitting to control other herd members. One may determine how agitated a llama is by the materials in its spit: the more irritated the llama, the further back into each of the three stomach compartments it will draw materials for its spit.
While the social structure might constantly change, they live as a family and care for each other. If one notices a strange noise or feels threatened, an alarm call - a loud, shrill sound that rhythmically rises and falls - is sent out, and all others become alert. They will often hum to each other as a form of communication.
A llama's groaning noises or "mwa" sound (/mwaʰ/) are often a sign of fear or anger. Unhappy or agitated llamas will lay their ears back, while ears perked upwards are a sign of happiness or curiosity.
An "orgle" is the mating sound of a llama or alpaca, made by the sexually aroused male. The sound is reminiscent of gargling but with a more forceful, buzzing edge. Males begin the sound when they become aroused and continue throughout copulation.
Guard behavior
Using llamas as livestock guards in North America began in the early 1980s, and some sheep producers have used llamas successfully since then. Some even use them to guard their smaller cousins, the alpacas. They are used most commonly in the western regions of the United States, where larger predators, such as coyotes and feral dogs, are prevalent. Typically, a single gelding (castrated male) is used.
Research suggests that using multiple guard llamas is not as effective as using one. Multiple males tend to bond with one another rather than with the livestock and may ignore the flock. A gelded male of two years of age bonds closely with its new charges and is instinctively very effective in preventing predation. Some llamas bond more quickly to sheep or goats if introduced just before lambing. Many sheep and goat producers indicate that a special bond quickly develops between lambs and their guard llama, and that the llama is particularly protective of the lambs.
Using llamas as guards has reduced the losses to predators for many producers. The value of the livestock saved each year exceeds a llama's purchase cost and annual maintenance. Although not every llama is suited to the job, most are a viable, nonlethal alternative for reducing predation, requiring no training and little care.
Fiber
Llamas have a fine undercoat, which can be used for handicrafts and garments. The coarser outer guard hair is used for rugs, wall hangings, and lead ropes. The fiber comes in many colors, ranging from white or grey to reddish-brown, brown, dark brown, and black.
Medical uses
Doctors and researchers have determined that llamas possess antibodies that are well-suited to treat certain diseases. Scientists have been studying the way llamas might contribute to the fight against coronaviruses, including MERS and SARS-CoV-2 (which causes COVID-19).
History of domestication
Pre-Incan cultures
Scholar Alex Chepstow-Lusty has argued that the switch from a hunter-gatherer lifestyle to widespread agriculture was only possible because of the use of llama dung as fertilizer.
The Moche people frequently placed llamas and their parts in the burials of important people as offerings or provisions for the afterlife. The Moche of pre-Columbian Peru depicted llamas quite realistically in their ceramics.
Inca Empire
In the Inca Empire, llamas were the only beasts of burden, and many of the people dominated by the Inca had long traditions of llama herding. For the Inca nobility, the llama was symbolic, and llama figures were often buried with the dead.
In South America, llamas are still used as beasts of burden, as well as for the production of fiber and meat.
The Inca deity Urcuchillay was depicted in the form of a multicolored llama.
Carl Troll has argued that the large numbers of llamas found in the southern Peruvian highlands were an essential factor in the rise of the Inca Empire. It is worth considering that the maximum extent of the Inca Empire roughly coincided with the greatest distribution of alpacas and llamas in Pre-Hispanic America. The link between the Andean biomes of puna and páramo, llama pastoralism, and the Inca state remains a matter of research.
Spanish Empire
One of the main uses for llamas at the time of the Spanish conquest was to bring down ore from the mines in the mountains. Gregory de Bolivar estimated that in his day, as many as 300,000 were employed in the transport of produce from the Potosí mines alone, but since the introduction of horses, mules, and donkeys, the importance of the llama as a beast of burden has greatly diminished.
According to Juan Ignacio Molina, the Dutch captain Joris van Spilbergen observed the use of hueques (possibly a llama type) by native Mapuches of Mocha Island as plow animals in 1614.
In Chile, hueque populations declined toward extinction during the 16th and 17th centuries as they were replaced by European livestock. The causes of its extinction are not clear, but it is known that the introduction of sheep created some competition between the two domestic species. Anecdotal evidence from the mid-17th century shows that both species coexisted and suggests that there were many more sheep than hueques. The decline of the hueque reached a point in the late 18th century when only the Mapuche from Mariquina and Huequén next to Angol still raised the animal.
United States
Llamas were first imported into the US in the late 1800s as zoo exhibits. Restrictions on importation of livestock from South America due to hoof and mouth disease, combined with lack of commercial interest, resulted in the number of llamas staying low until the late 20th century. In the 1970s, interest in llamas as livestock began to grow, and the number of llamas increased as farmers bred and produced an increasing number of animals. Both the price and number of llamas in the US climbed rapidly in the 1980s and 1990s. With little market for llama fiber or meat in the US and the value of guard llamas limited, the primary value in llamas was in breeding more animals, a classic sign of a speculative bubble in agriculture. By 2002, there were almost 145,000 llamas in the US, according to the US Department of Agriculture, and animals sold for as much as $220,000. However, the lack of any end market for the animals resulted in a crash in both llama prices and the number of llamas; the Great Recession further dried up investment capital, and the number of llamas in the US began to decline as fewer animals were bred and older animals died of old age. By 2017, the number of llamas in the US had dropped below 40,000. A similar speculative bubble was experienced with the closely related alpaca, which burst shortly after the llama bubble.
Culture
An important animal and long-standing cultural icon in South America, the llama has in recent history also gained cultural prominence in Western culture.
For example, the video game company Maxis has used llamas extensively as elements in its games, particularly in the widely popular series The Sims, in which the llama is the national symbol of the country where the broader series of Sim games is set. The online video game Fortnite uses piñata llamas as loot containers, which contain various in-game resources. The programming language Perl has also been associated with llamas through its so-called Llama book.
| Biology and health sciences | Artiodactyla | null |
18093 | https://en.wikipedia.org/wiki/Local%20Group | Local Group | The Local Group is the galaxy group that includes the Milky Way, where Earth is located. It has a total diameter of roughly , and a total mass of the order of .
It consists of two collections of galaxies in a "dumbbell" shape; the Milky Way and its satellites form one lobe, and the Andromeda Galaxy and its satellites constitute the other. The two collections are separated by about and are moving toward one another with a velocity of . The group itself is a part of the larger Virgo Supercluster, which may be a part of the Laniakea Supercluster.
The exact number of galaxies in the Local Group is unknown as some are occluded by the Milky Way; however, at least 80 members are known, most of which are dwarf galaxies.
The two largest members, the Andromeda and the Milky Way galaxies, are both spiral galaxies with masses of about solar masses each. Each has its own system of satellite galaxies:
The Andromeda Galaxy's satellite system consists of Messier 32 (M32), Messier 110 (M110), NGC 147, NGC 185, Andromeda I (And I), And II, And III, And V, And VI (also known as the Pegasus Dwarf Spheroidal Galaxy, or Pegasus dSph), And VII (a.k.a. the Cassiopeia Dwarf Galaxy), And VIII, And IX, And X, And XI, And XIX, And XXI and And XXII, plus several additional ultra-faint dwarf spheroidal galaxies.
The Milky Way's satellite galaxies system comprises the Sagittarius Dwarf Galaxy, Large Magellanic Cloud, Small Magellanic Cloud, Canis Major Dwarf Galaxy (disputed, considered by some not a galaxy), Ursa Minor Dwarf Galaxy, Draco Dwarf Galaxy, Carina Dwarf Galaxy, Sextans Dwarf Galaxy, Sculptor Dwarf Galaxy, Fornax Dwarf Galaxy, Leo I (a dwarf galaxy), Leo II (a dwarf galaxy), Ursa Major I Dwarf Galaxy and Ursa Major II Dwarf Galaxy, plus several additional ultra-faint dwarf spheroidal galaxies.
The Triangulum Galaxy (M33) is the third-largest member of the Local Group, with a mass of approximately , and is the third spiral galaxy. It is unclear whether the Triangulum Galaxy is a companion of the Andromeda Galaxy; the two galaxies are 750,000 light years apart, and experienced a close passage 2–4 billion years ago which triggered star formation across Andromeda's disk. The Pisces Dwarf Galaxy is equidistant from the Andromeda Galaxy and the Triangulum Galaxy, so it may be a satellite of either.
The other members of the group are likely gravitationally secluded from these large subgroups: IC 10, IC 1613, Phoenix Dwarf Galaxy, Leo A, Tucana Dwarf Galaxy, Cetus Dwarf Galaxy, Pegasus Dwarf Irregular Galaxy, Wolf–Lundmark–Melotte, Aquarius Dwarf Galaxy, and Sagittarius Dwarf Irregular Galaxy.
The membership of NGC 3109, with its companions Sextans A and the Antlia Dwarf Galaxy as well as Sextans B, Leo P, Antlia B and possibly Leo A, is uncertain due to extreme distances from the center of the Local Group. The Antlia-Sextans Group is unlikely to be gravitationally bound to the Local Group due to probably lying outside the Local Group's Zero-velocity surface—which would make it a true galaxy group of its own rather than a subgroup within the Local Group. This possible independence may, however, disappear as the Milky Way continues coalescing with Andromeda due to the increased mass, and density thereof, plausibly widening the radius of the zero-velocity surface of the Local Group.
History
The term "The Local Group" was introduced by Edwin Hubble in Chapter VI of his 1936 book The Realm of the Nebulae. There, he described it as "a typical small group of nebulae which is isolated in the general field" and delineated, by decreasing luminosity, its members to be M31, Milky Way, M33, Large Magellanic Cloud, Small Magellanic Cloud, M32, NGC 205, NGC 6822, NGC 185, IC 1613 and NGC 147. He also identified IC 10 as a possible part of the Local Group.
Component galaxies
List
Structure
Streams
Magellanic Stream, a stream of gas being stripped off the Magellanic Clouds due to their interaction with the Milky Way
Monoceros Ring, a ring of stars around the Milky Way that is proposed to consist of a stellar stream torn from the Canis Major Dwarf Galaxy
Virgo Stream, a stream formed from a dwarf galaxy.
Helmi Stream
Future
The galaxies of the Local Group are likely to merge together under their own mutual gravitational attractions over a timescale of tens of billions of years into a single elliptical galaxy, with the coalescence of Andromeda and the Milky Way being the predominant event in this process.
Location
| Physical sciences | Notable galaxies | null |
18094 | https://en.wikipedia.org/wiki/Litre | Litre | The litre (Commonwealth spelling) or liter (American spelling) (SI symbols L and l, other symbol used: ℓ) is a metric unit of volume. It is equal to 1 cubic decimetre (dm3), 1000 cubic centimetres (cm3) or 0.001 cubic metres (m3). A cubic decimetre (or litre) occupies a volume of (see figure) and is thus equal to one-thousandth of a cubic metre.
The original French metric system used the litre as a base unit. The word litre is derived from an older French unit, the litron, whose name came from Byzantine Greek—where it was a unit of weight, not volume—via Late Medieval Latin, and which equalled approximately 0.831 litres. The litre was also used in several subsequent versions of the metric system and is accepted for use with the SI, although not an SI unit—the SI unit of volume is the cubic metre (m3). The spelling used by the International Bureau of Weights and Measures is "litre", a spelling which is shared by most English-speaking countries. The spelling "liter" is predominantly used in American English.
One litre of liquid water has a mass of almost exactly one kilogram, because the kilogram was originally defined in 1795 as the mass of one cubic decimetre of water at the temperature of melting ice (). Subsequent redefinitions of the metre and kilogram mean that this relationship is no longer exact.
Definition
A litre is a cubic decimetre, which is the volume of a cube 10 centimetres × 10 centimetres × 10 centimetres (1 L ≡ 1 dm3 ≡ 1000 cm3). Hence 1 L ≡ 0.001 m3 ≡ 1000 cm3; and 1 m3 (i.e. a cubic metre, which is the SI unit for volume) is exactly 1000 L.
From 1901 to 1964, the litre was defined as the volume of one kilogram of pure water at maximum density (+3.98 °C) and standard pressure. The kilogram was in turn specified as the mass of the International Prototype of the Kilogram (a specific platinum/iridium cylinder) and was intended to be of the same mass as the 1 litre of water referred to above. It was subsequently discovered that the cylinder was around 28 parts per million too large and thus, during this time, a litre was about . Additionally, the mass–volume relationship of water (as with any fluid) depends on temperature, pressure, purity and isotopic uniformity. In 1964, the definition relating the litre to mass was superseded by the current one. Although the litre is not an SI unit, it is accepted by the CGPM (the standards body that defines the SI) for use with the SI. CGPM defines the litre and its acceptable symbols.
A litre is equal in volume to the millistere, an obsolete non-SI metric unit formerly customarily used for dry measure.
Explanation
Litres are most commonly used for items (such as fluids and solids that can be poured) which are measured by the capacity or size of their container, whereas cubic metres (and derived units) are most commonly used for items measured either by their dimensions or their displacements. The litre is often also used in some calculated measurements, such as density (kg/L), allowing an easy comparison with the density of water.
One litre of water has a mass of almost exactly one kilogram when measured at its maximal density, which occurs at about 4 °C. It follows, therefore, that one-thousandth of a litre, known as one millilitre (1 mL), of water has a mass of about 1 g, and that 1000 litres of water have a mass of about 1000 kg (1 tonne or megagram). This relationship holds because the gram was originally defined as the mass of 1 mL of water; however, this definition was abandoned in 1799 because the density of water changes with temperature and, very slightly, with pressure.
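As a rough numerical sketch of these relationships (the function names are illustrative, and the 1 kg/L density figure is the approximate value discussed above, valid only near 4 °C):

```python
# Illustrative sketch of metric volume conversions and the approximate
# litre-to-kilogram relationship for water.

def litres_to_other_units(volume_l: float) -> dict:
    """Convert a volume in litres to mL, cm3, dm3 and m3
    (1 L = 1 dm3 = 1000 cm3 = 0.001 m3)."""
    return {
        "mL": volume_l * 1000.0,
        "cm3": volume_l * 1000.0,   # 1 mL and 1 cm3 are the same volume
        "dm3": volume_l,            # a litre is exactly one cubic decimetre
        "m3": volume_l / 1000.0,
    }

def approx_water_mass_kg(volume_l: float, density_kg_per_l: float = 1.0) -> float:
    """Approximate mass of water; the 1 kg/L figure is not exact."""
    return volume_l * density_kg_per_l

print(litres_to_other_units(2.5))   # {'mL': 2500.0, 'cm3': 2500.0, 'dm3': 2.5, 'm3': 0.0025}
print(approx_water_mass_kg(2.5))    # roughly 2.5 kg
```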
It is now known that the density of water also depends on the isotopic ratios of the oxygen and hydrogen atoms in a particular sample. Modern measurements of Vienna Standard Mean Ocean Water, which is pure distilled water with an isotopic composition representative of the average of the world's oceans, show that it has a density of at its point of maximum density (3.984 °C) under one standard atmosphere (101.325 kPa) of pressure.
SI prefixes applied to the litre
The litre, though not an official SI unit, may be used with SI prefixes. The most commonly used derived unit is the millilitre, defined as one-thousandth of a litre, and also often referred to by the SI derived unit name "cubic centimetre". It is a commonly used measure, especially in medicine, cooking and automotive engineering. Other units may be found in the table below, where the more often used terms are in bold. However, some authorities advise against some of them; for example, in the United States, NIST advocates using the millilitre or litre instead of the centilitre. There are two international standard symbols for the litre: L and l. In the United States the former is preferred because of the risk that (in some fonts) the lowercase letter "l" and the digit "1" may be confused.
Non-metric conversions
| Physical sciences | Volume | null |
18102 | https://en.wikipedia.org/wiki/Linear%20map | Linear map | In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.
If a linear map is a bijection then it is called a linear isomorphism. In the case where the domain and codomain coincide, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that the domain and codomain are real vector spaces (not necessarily coinciding), or it can be used to emphasize that the domain is a function space, which is a common convention in functional analysis. Sometimes the term linear function has the same meaning as linear map, while in analysis it does not.
A linear map always maps the origin of its domain to the origin of its codomain. Moreover, it maps linear subspaces of the domain onto linear subspaces of the codomain (possibly of a lower dimension); for example, it maps a plane through the origin in the domain to either a plane through the origin in the codomain, a line through the origin, or just the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
In the language of category theory, linear maps are the morphisms of vector spaces, and they form a category equivalent to the one of matrices.
Definition and first consequences
Let V and W be vector spaces over the same field K.
A function f: V → W is said to be a linear map if for any two vectors u, v in V and any scalar c in K the following two conditions are satisfied:
Additivity / operation of addition: f(u + v) = f(u) + f(v)
Homogeneity of degree 1 / operation of scalar multiplication: f(cu) = c f(u)
Thus, a linear map is said to be operation preserving. In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication.
By the associativity of the addition operation denoted as +, for any vectors u_1, ..., u_n in V and scalars c_1, ..., c_n in K the following equality holds: f(c_1 u_1 + ... + c_n u_n) = c_1 f(u_1) + ... + c_n f(u_n).
Thus a linear map is one which preserves linear combinations.
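As a small numerical sketch of the two defining conditions (the matrix and test vectors below are arbitrary illustrative choices, not part of the original text), one can check additivity and homogeneity for the map x ↦ Ax:

```python
import numpy as np

# Any matrix defines a linear map x -> A @ x; check the two defining
# conditions numerically for arbitrary vectors u, v and a scalar c.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))          # a map from R^2 to R^3
u, v = rng.standard_normal(2), rng.standard_normal(2)
c = 2.7

additive = np.allclose(A @ (u + v), A @ u + A @ v)    # f(u + v) = f(u) + f(v)
homogeneous = np.allclose(A @ (c * u), c * (A @ u))   # f(cu) = c f(u)
print(additive, homogeneous)                          # True True
```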
Denoting the zero elements of the vector spaces V and W by 0_V and 0_W respectively, it follows that f(0_V) = 0_W: setting c = 0 and u = 0_V in the equation for homogeneity of degree 1 gives f(0_V) = f(0·0_V) = 0·f(0_V) = 0_W.
A linear map V → K, with the field K viewed as a one-dimensional vector space over itself, is called a linear functional.
These statements generalize to any left-module over a ring without modification, and to any right-module upon reversing of the scalar multiplication.
Examples
A prototypical example that gives linear maps their name is the function x ↦ cx on the real numbers, of which the graph is a line through the origin.
More generally, any homothety v ↦ cv centered in the origin of a vector space is a linear map (here c is a scalar).
The zero map between two vector spaces (over the same field) is linear.
The identity map on any module is a linear operator.
For real numbers, the map x ↦ x² is not linear.
For real numbers, the map x ↦ x + 1 is not linear (but is an affine transformation).
If A is an m × n real matrix, then the map x ↦ Ax defines a linear map from Rn to Rm by sending a column vector x in Rn to the column vector Ax in Rm. Conversely, any linear map between finite-dimensional vector spaces can be represented in this manner; see the Matrices section below.
If is an isometry between real normed spaces such that then is a linear map. This result is not necessarily true for complex normed space.
Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions (a linear operator is a linear endomorphism, that is, a linear map with the same domain and codomain). Indeed, (af + bg)′ = af′ + bg′ for any differentiable functions f and g and any scalars a and b.
A definite integral over some interval is a linear map from the space of all real-valued integrable functions on that interval to the real numbers. Indeed, the integral of af + bg over the interval equals a times the integral of f plus b times the integral of g.
An indefinite integral (or antiderivative) with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on to the space of all real-valued, differentiable functions on . Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions.
If V and W are finite-dimensional vector spaces over a field K, of respective dimensions m and n, then the function that maps linear maps f: V → W to n × m matrices in the way described below (see Matrices) is a linear map, and even a linear isomorphism.
The expected value of a random variable (which is in fact a function, and as such an element of a vector space) is linear, as for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X], but the variance of a random variable is not linear.
Linear extensions
Often, a linear map is constructed by defining it on a subset of a vector space and then extending it by linearity to the linear span of the domain.
Suppose and are vector spaces and is a function defined on some subset
Then a of to if it exists, is a linear map defined on that extends (meaning that for all ) and takes its values from the codomain of
When the subset is a vector subspace of then a (-valued) linear extension of to all of is guaranteed to exist if (and only if) is a linear map. In particular, if has a linear extension to then it has a linear extension to all of
The map can be extended to a linear map if and only if whenever is an integer, are scalars, and are vectors such that then necessarily
If a linear extension of exists then the linear extension is unique and
holds for all and as above.
If is linearly independent then every function into any vector space has a linear extension to a (linear) map (the converse is also true).
For example, if and then the assignment and can be linearly extended from the linearly independent set of vectors to a linear map on The unique linear extension is the map that sends to
Every (scalar-valued) linear functional defined on a vector subspace of a real or complex vector space has a linear extension to all of
Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional is dominated by some given seminorm (meaning that holds for all in the domain of ) then there exists a linear extension to that is also dominated by
Matrices
If and are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from to can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if is a real matrix, then describes a linear map (see Euclidean space).
Let {v_1, ..., v_n} be a basis for V. Then every vector v in V is uniquely determined by the coefficients c_1, ..., c_n in the field K: v = c_1 v_1 + ... + c_n v_n.
If f: V → W is a linear map, f(v) = f(c_1 v_1 + ... + c_n v_n) = c_1 f(v_1) + ... + c_n f(v_n),
which implies that the function f is entirely determined by the vectors f(v_1), ..., f(v_n). Now let {w_1, ..., w_m} be a basis for W. Then we can represent each vector f(v_j) as f(v_j) = a_1j w_1 + ... + a_mj w_m.
Thus, the function f is entirely determined by the values a_ij. If we put these values into an m × n matrix M, then we can conveniently use it to compute the vector output of f for any vector in V. To get M, column j of M is the vector (a_1j, ..., a_mj) corresponding to f(v_j) as defined above; in other words, every column j has a corresponding vector f(v_j) whose coordinates in the basis {w_1, ..., w_m} are the elements of column j, and M is the matrix of f. A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen.
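The sketch below illustrates this column-by-column construction; it is a toy example assuming the standard bases of R^2 and R^3 and an arbitrary example map, not a general-purpose routine:

```python
import numpy as np

def matrix_of_linear_map(f, n, m):
    """Build the m x n matrix of a linear map f: R^n -> R^m with respect to
    the standard bases: column j is f applied to the j-th basis vector."""
    M = np.zeros((m, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        M[:, j] = f(e_j)
    return M

# Example map f(x, y) = (x + 2y, 3y, x - y)
f = lambda v: np.array([v[0] + 2 * v[1], 3 * v[1], v[0] - v[1]])
M = matrix_of_linear_map(f, n=2, m=3)
print(M)                                # [[ 1.  2.] [ 0.  3.] [ 1. -1.]]
x = np.array([4.0, 5.0])
print(np.allclose(M @ x, f(x)))         # True: the matrix reproduces the map
```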
The matrices of a linear transformation can be represented visually:
Matrix for relative to :
Matrix for relative to :
Transition matrix from to :
Transition matrix from to :
Such that starting in the bottom left corner and looking for the bottom right corner , one would left-multiply—that is, . The equivalent method would be the "longer" method going clockwise from the same point such that is left-multiplied with , or .
Examples in two dimensions
In two-dimensional space R2 linear maps are described by 2 × 2 matrices. These are some examples:
rotation
by 90 degrees counterclockwise:
by an angle θ counterclockwise:
reflection
through the x axis:
through the y axis:
through a line making an angle θ with the origin:
scaling by 2 in all directions:
horizontal shear mapping:
skew of the y axis by an angle θ:
squeeze mapping:
projection onto the y axis:
If a linear map is only composed of rotation, reflection, and/or uniform scaling, then the linear map is a conformal linear transformation.
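A numerical sketch of a few such 2 × 2 matrices follows; these are standard textbook forms for some of the transformations named above, and the angle, shear factor and test point are arbitrary illustrative choices:

```python
import numpy as np

theta = np.pi / 6                      # an arbitrary angle for the examples

rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])   # counterclockwise rotation by theta
reflection_x = np.array([[1.0, 0.0],
                         [0.0, -1.0]])                    # reflection through the x axis
shear_h = np.array([[1.0, 1.5],
                    [0.0, 1.0]])                          # horizontal shear with factor 1.5
projection_y = np.array([[0.0, 0.0],
                         [0.0, 1.0]])                     # projection onto the y axis

p = np.array([2.0, 1.0])                                  # a test point
for name, M in [("rotation", rotation), ("reflection", reflection_x),
                ("shear", shear_h), ("projection", projection_y)]:
    print(name, M @ p)
```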
Vector space of linear maps
The composition of linear maps is linear: if f: V → W and g: W → Z are linear, then so is their composition g ∘ f: V → Z. It follows from this that the class of all vector spaces over a given field K, together with K-linear maps as morphisms, forms a category.
The inverse of a linear map, when defined, is again a linear map.
If f_1: V → W and f_2: V → W are linear, then so is their pointwise sum f_1 + f_2, which is defined by (f_1 + f_2)(x) = f_1(x) + f_2(x).
If f is linear and α is an element of the ground field K, then the map αf, defined by (αf)(x) = α f(x), is also linear.
Thus the set of linear maps from V to W itself forms a vector space over K, sometimes denoted Hom(V, W). Furthermore, in the case that V = W, this vector space, denoted End(V), is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.
Given again the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to the matrix multiplication, the addition of linear maps corresponds to the matrix addition, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars.
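A brief check of the correspondence between composition and matrix multiplication (the small matrices below are arbitrary; this is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # matrix of g: R^3 -> R^4
B = rng.standard_normal((3, 2))   # matrix of f: R^2 -> R^3
x = rng.standard_normal(2)

# Composing the maps, g(f(x)), corresponds to acting with the matrix product A @ B;
# likewise sums and scalar multiples of maps correspond to sums and scalings of matrices.
print(np.allclose(A @ (B @ x), (A @ B) @ x))   # True
```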
Endomorphisms and automorphisms
A linear transformation f: V → V is an endomorphism of V; the set End(V) of all such endomorphisms, together with addition, composition and scalar multiplication as defined above, forms an associative algebra with identity element over the field K (and in particular a ring). The multiplicative identity element of this algebra is the identity map id_V.
An endomorphism of V that is also an isomorphism is called an automorphism of V. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of V forms a group, the automorphism group of V, which is denoted by Aut(V) or GL(V). Since the automorphisms are precisely those endomorphisms which possess inverses under composition, Aut(V) is the group of units in the ring End(V).
If V has finite dimension n, then End(V) is isomorphic to the associative algebra of all n × n matrices with entries in K. The automorphism group of V is isomorphic to the general linear group GL(n, K) of all n × n invertible matrices with entries in K.
Kernel, image and the rank–nullity theorem
If f: V → W is linear, we define the kernel and the image (or range) of f by ker(f) = {x in V : f(x) = 0} and im(f) = {f(x) : x in V}.
ker(f) is a subspace of V and im(f) is a subspace of W. The following dimension formula is known as the rank–nullity theorem: dim(ker f) + dim(im f) = dim(V).
The number dim(im f) is also called the rank of f and written as rank(f); the number dim(ker f) is called the nullity of f and written as null(f). If V and W are finite-dimensional, bases have been chosen and f is represented by the matrix A, then the rank and nullity of f are equal to the rank and nullity of the matrix A, respectively.
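The rank–nullity theorem can be checked numerically for any concrete matrix, as in this sketch (the matrix is an arbitrary example with a one-dimensional image):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # a map from R^3 to R^2; second row is twice the first

rank = np.linalg.matrix_rank(A)            # dimension of the image
nullity = A.shape[1] - rank                # dimension of the kernel, by rank-nullity
print(rank, nullity, rank + nullity == A.shape[1])   # 1 2 True
```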
Cokernel
A subtler invariant of a linear transformation is the cokernel, which is defined as
This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the co-kernel is a quotient space of the target. Formally, one has the exact sequence
These can be interpreted thus: given a linear equation f(v) = w to solve,
the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty;
the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints.
The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W/f(V) is the dimension of the target space minus the dimension of the image.
As a simple example, consider the map f: R2 → R2, given by f(x, y) = (0, y). Then for an equation f(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b), or equivalently stated, (0, b) + (x, 0) (one degree of freedom). The kernel may be expressed as the subspace {(x, 0)} of V: the value of x is the freedom in a solution, while the cokernel may be expressed via the map W → R sending (a, b) to a: given a vector (a, b), the value of a is the obstruction to there being a solution.
An example illustrating the infinite-dimensional case is afforded by the map f: R∞ → R∞, sending (a_1, a_2, ...) to (b_1, b_2, ...) with b_1 = 0 and b_{n+1} = a_n for n > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel, but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the map h: R∞ → R∞, sending (a_1, a_2, ...) to (c_1, c_2, ...) with c_n = a_{n+1}. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
Index
For a linear operator f with finite-dimensional kernel and co-kernel, one may define the index as ind(f) = dim(ker f) − dim(coker f),
namely the degrees of freedom minus the number of constraints.
For a transformation between finite-dimensional vector spaces, this is just the difference dim(V) − dim(W), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
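For a concrete map such as the f(x, y) = (0, y) example discussed above, the kernel, cokernel and index can be computed directly; the sketch below uses numpy's matrix rank for the image dimension:

```python
import numpy as np

F = np.array([[0.0, 0.0],
              [0.0, 1.0]])                 # the map f(x, y) = (0, y) on R^2

rank = np.linalg.matrix_rank(F)            # dimension of the image
dim_kernel = F.shape[1] - rank             # degrees of freedom in the solutions
dim_cokernel = F.shape[0] - rank           # independent constraints on the right-hand side
index = dim_kernel - dim_cokernel          # equals dim(V) - dim(W) = 0 here
print(dim_kernel, dim_cokernel, index)     # 1 1 0
```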
The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem.
Algebraic classifications of linear transformations
No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space.
Let V and W denote vector spaces over a field K and let f: V → W be a linear map.
Monomorphism
f is said to be injective or a monomorphism if any of the following equivalent conditions are true:
f is one-to-one as a map of sets.
f is monic or left-cancellable, which is to say, for any vector space U and any pair of linear maps g: U → V and h: U → V, the equation f ∘ g = f ∘ h implies g = h.
f is left-invertible, which is to say there exists a linear map g: W → V such that g ∘ f is the identity map on V.
Epimorphism
f is said to be surjective or an epimorphism if any of the following equivalent conditions are true:
f is onto as a map of sets.
f is epic or right-cancellable, which is to say, for any vector space U and any pair of linear maps g: W → U and h: W → U, the equation g ∘ f = h ∘ f implies g = h.
f is right-invertible, which is to say there exists a linear map g: W → V such that f ∘ g is the identity map on W.
Isomorphism
f is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to f being both one-to-one and onto (a bijection of sets) or also to f being both epic and monic, and so being a bimorphism.
If f is an endomorphism, then:
If, for some positive integer n, the n-th iterate of f, f^n, is identically zero, then f is said to be nilpotent.
If f^2 = f, then f is said to be idempotent.
If f = cI, where c is some scalar and I the identity map, then f is said to be a scaling transformation or scalar multiplication map; see scalar matrix.
Change of basis
Given a linear map which is an endomorphism whose matrix is A, in the basis B of the space it transforms vector coordinates [u] as [v] = A[u]. As vectors change with the inverse of B (vector coordinates are contravariant), the inverse transformation is [v] = B[v'].
Substituting this in the first expression, B[v'] = A(B[u']),
hence [v'] = B−1AB[u'].
Therefore, the matrix in the new basis is A′ = B−1AB, where B is the matrix of the given basis.
Therefore, linear maps are said to be 1-co- 1-contra-variant objects, or type (1, 1) tensors.
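A numerical sketch of this change-of-basis formula, with an arbitrary map and an arbitrary invertible change-of-basis matrix (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))            # matrix of the endomorphism in the old basis
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # columns: new basis vectors in old coordinates

A_new = np.linalg.inv(B) @ A @ B           # matrix of the same map in the new basis

# Check: converting new-basis coordinates to the old basis, applying A, and
# converting back agrees with applying A_new directly in the new basis.
u_new = rng.standard_normal(2)             # coordinates of a vector in the new basis
print(np.allclose(A_new @ u_new, np.linalg.inv(B) @ (A @ (B @ u_new))))   # True
```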
Continuity
A linear transformation between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it will then be a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded, for example, when the domain is finite-dimensional. An infinite-dimensional domain may have discontinuous linear operators.
An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, converges to 0, but its derivative does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere).
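A rough numerical illustration of this phenomenon follows; the family sin(nx)/n is an assumed illustrative choice, since the specific example is not spelled out above. The sup norm of the functions shrinks toward 0 while the sup norm of their derivatives does not:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10_001)

for n in (1, 10, 100, 1000):
    f = np.sin(n * x) / n          # candidate functions converging to 0 in sup norm
    df = np.cos(n * x)             # their derivatives
    print(n, np.max(np.abs(f)), np.max(np.abs(df)))
    # sup|f| is about 1/n and shrinks, but sup|f'| stays at 1,
    # so differentiation is unbounded with respect to the sup norm
```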
Applications
A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings also are used as a mechanism for describing change: for example in calculus correspond to derivatives; or in relativity, used as a device to keep track of the local transformations of reference frames.
Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.
| Mathematics | Linear algebra | null |
18103 | https://en.wikipedia.org/wiki/Leyden%20jar | Leyden jar | A Leyden jar (or Leiden jar, or archaically, Kleistian jar) is an electrical component that stores a high-voltage electric charge (from an external source) between electrical conductors on the inside and outside of a glass jar. It typically consists of a glass jar with metal foil cemented to the inside and the outside surfaces, and a metal terminal projecting vertically through the jar lid to make contact with the inner foil. It was the original form of the capacitor (also called a condenser).
It was invented independently by German cleric Ewald Georg von Kleist on 11 October 1745 and by Dutch scientist Pieter van Musschenbroek of Leiden (Leyden), Netherlands, in 1745–1746.
The Leyden jar was used to conduct many early experiments in electricity, and its discovery was of fundamental importance in the study of electrostatics. It was the first means of accumulating and preserving electric charge in large quantities that could be discharged at the experimenter's will, thus overcoming a significant limit to early research into electrical conduction. Leyden jars are still used in education to demonstrate the principles of electrostatics.
Previous work
The Ancient Greeks already knew that pieces of amber could attract lightweight particles after being rubbed. The amber becomes electrified by the triboelectric effect, mechanical separation of charge in a dielectric material. The Greek word for amber is ἤλεκτρον ("ēlektron") and is the origin of the word "electricity". Thales of Miletus, a pre-Socratic philosopher, is thought to have accidentally commented on the phenomenon of electrostatic charging, due to his belief that even lifeless things have a soul in them, hence the popular analogy of the spark.
Around 1650, Otto von Guericke built a crude electrostatic generator: a sulphur ball that rotated on a shaft. When Guericke held his hand against the ball and turned the shaft quickly, a static electric charge built up. This experiment inspired the development of several forms of "friction machines", which greatly helped in the study of electricity.
Georg Matthias Bose (22 September 1710 – 17 September 1761) was a famous electrical experimenter in the early days of the development of electrostatics. He is credited with being the first to develop a way of temporarily storing static charges by using an insulated conductor (called a prime conductor). His demonstrations and experiments raised the interests of the German scientific community and the public in the development of electrical research.
Discovery
The Leyden jar was effectively discovered independently by two parties: German dean Ewald Georg von Kleist, who made the first discovery, and Dutch scientists Pieter van Musschenbroek and Andreas Cunaeus, who figured out why it only worked when held in the hand.
Von Kleist
Ewald Georg von Kleist was the dean at the cathedral of Cammin in Pomerania, a region now divided between Germany and Poland. Von Kleist is credited with first using the fluid analogy for electricity and demonstrated this to Bose by drawing sparks from water with his finger. He discovered the immense storage capability of the Leyden jar while attempting to demonstrate that a glass jar filled with alcohol would "capture" this fluid.
In October 1745, von Kleist tried to accumulate electricity in a small medicine bottle filled with alcohol with a nail inserted in the cork. He was following up on an experiment developed by Georg Matthias Bose where electricity had been sent through water to set alcoholic spirits alight. He attempted to charge the bottle from a large prime conductor (invented by Bose) suspended above his friction machine.
Von Kleist knew that the glass would provide an obstacle to the escape of the "fluid", and so was convinced that a substantial electric charge could be collected and held within it. He received a significant shock from the device when he accidentally touched the nail through the cork while still cradling the bottle in his other hand. He communicated his results to at least five different electrical experimenters, in several letters from November 1745 to March 1746, but did not receive any confirmation that they had repeated his results, until April 1746. Polish-Lithuanian physicist Daniel Gralath learned about von Kleist's experiment from seeing von Kleist's letter to Paul Swietlicki, written in November 1745. After Gralath's failed first attempt to reproduce the experiment in December 1745, he wrote to von Kleist for more information (and was told that the experiment would work better if the tube half-filled with alcohol was used). Gralath (in collaboration with ) succeeded in getting the intended effect on 5 March 1746, holding a small glass medicine bottle with a nail inside in one hand, moving it close to an electrostatic generator, and then moving the other hand close to the nail. Von Kleist didn't understand the significance of his conducting hand holding the bottle—and both he and his correspondents were loath to hold the device when told that the shock could throw them across the room. It took some time before von Kleist's student associates at Leyden worked out that the hand provided an essential element.
Musschenbroek and Cunaeus
The Leyden jar's invention was long credited to Pieter van Musschenbroek, the physics professor at Leiden University, who also ran a family foundry which cast brass cannonettes, and a small business (De Oosterse Lamp – "The Eastern Lamp") which made scientific and medical instruments for the new university courses in physics and for scientific gentlemen keen to establish their own 'cabinets' of curiosities and instruments.
Like von Kleist, Musschenbroek was also interested in, and attempting to repeat, Bose's experiment. During this time, Andreas Cunaeus, a lawyer, learned about this experiment from Musschenbroek, and attempted to duplicate the experiment at home with household items. Unaware of the "Rule of Dufay", that the experimental apparatus should be insulated, Cunaeus held his jar in his hand while charging it, and was thus the first to discover that such an experimental setup could deliver a severe shock. He reported his procedure and experience to Swiss-Dutch natural philosopher Jean-Nicolas-Sebastian Allamand, Musschenbroek's colleague. Allamand and Musschenbroek also received severe shocks. Musschenbroek communicated the experiment in a letter from 20 January 1746 to French entomologist René Antoine Ferchault de Réaumur, who was Musschenbroek's appointed correspondent at the Paris Academy. Abbé Jean-Antoine Nollet read this report, confirmed the experiment, and then read Musschenbroek's letter in a public meeting of the Paris Academy in April 1746 (translating from Latin to French).
Musschenbroek's outlet in France for the sale of his company's 'cabinet' devices was the Abbé Nollet (who started building and selling duplicate instruments in 1735). Nollet then gave the electrical storage device the name "Leyden jar" and promoted it as a special type of flask to his market of wealthy men with scientific curiosity. The "Kleistian jar" was therefore promoted as the Leyden jar, and as having been discovered by Pieter van Musschenbroek and his acquaintance Andreas Cunaeus. Musschenbroek, however, never claimed that he had invented it, and some think that Cunaeus was mentioned only to diminish credit to him.
Further developments
Within months after Musschenbroek's report about how to reliably create a Leyden jar, other electrical researchers were making and experimenting with their own Leyden jars. One of his expressed original interests was to see if the total possible charge could be increased.
Johann Heinrich Winckler, whose first experience with a single Leyden jar was reported in a letter to the Royal Society on 29 May 1746, had connected three Leyden jars together in a kind of electrostatic battery on 28 July 1746. In 1746, Abbé Nollet performed two experiments for the edification of King Louis XV of France, in the first of which he discharged a Leyden jar through 180 royal guardsmen, and in the second through a larger number of Carthusian monks; all of whom sprang into the air more or less simultaneously. The opinions of neither the king nor the experimental subjects have been recorded.
Daniel Gralath reported in 1747 that in 1746 he had conducted experiments with connecting two or three jars, probably in series.
In 1746–1748, Benjamin Franklin experimented with charging Leyden jars in series, and developed a system involving 11 panes of glass with thin lead plates glued on each side, and then connected together. He used the term "electrical battery" to describe his electrostatic battery in a 1749 letter about his electrical research in 1748. It is possible that Franklin's choice of the word battery was inspired by the humorous wordplay at the conclusion of his letter, where he wrote, among other things, about a salute to electrical researchers from a battery of guns. This is the first recorded use of the term electrical battery. The multiple and rapid developments for connecting Leyden jars during the period 1746–1748 resulted in a variety of divergent accounts in secondary literature about who made the first "battery" by connecting Leyden jars, whether they were in series or parallel, and who first used the term "battery". The term was later used for combinations of multiple electrochemical cells, the modern meaning of the term "battery".
The Swedish physicist, chemist, and meteorologist Torbern Bergman translated much of Benjamin Franklin's writings on electricity into German and continued to study electrostatic properties.
Starting in late 1756, Franz Aepinus, in a complicated combination of independent work and collaboration with Johan Wilcke, developed an "air condenser", a variation on the Leyden jar, by using air rather than glass as the dielectric. This functioning apparatus, without glass, created a problem for Benjamin Franklin's explanation of the Leyden jar, which maintained that the charge was located in the glass.
Design
A typical design consists of a glass jar with conducting tin foil coating the inner and outer surfaces. The foil coatings stop short of the mouth of the jar, to prevent the charge from arcing between the foils. A metal rod electrode projects through the nonconductive stopper at the mouth of the jar, electrically connected by some means (usually a hanging chain) to the inner foil, to allow it to be charged. The jar is charged by an electrostatic generator, or other source of electric charge, connected to the inner electrode while the outer foil is grounded. The inner and outer surfaces of the jar store equal but opposite charges.
The original form of the device is just a glass bottle partially filled with water, with a metal wire passing through a cork closing it. The role of the outer plate is provided by the hand of the experimenter. Soon John Bevis found (in 1747) that it was possible to coat the exterior of the jar with metal foil, and he also found that he could achieve the same effect by using a plate of glass with metal foil on both sides. These developments inspired William Watson in the same year to have a jar made with a metal foil lining both inside and outside, dropping the use of water.
Early experimenters (such as Benjamin Wilson in 1746) reported that the thinner the dielectric and the greater the surface, the greater the charge that could be accumulated.
Further developments in electrostatics revealed that the dielectric material was not essential, but increased the storage capability (capacitance) and prevented arcing between the plates. Two plates separated by a small distance also act as a capacitor, even in a vacuum.
Storage of the charge
It was initially believed that the charge was stored in the water in early Leyden jars. In the 1700s American statesman and scientist Benjamin Franklin performed extensive investigations of both water-filled and foil Leyden jars, which led him to conclude that the charge was stored in the glass, not in the water. A popular experiment, due to Franklin, which seems to demonstrate this involves taking a jar apart after it has been charged and showing that little charge can be found on the metal plates, and therefore it must be in the dielectric. The first documented instance of this demonstration is in a 1749 letter by Franklin. Franklin designed a "dissectible" Leyden jar (right), which was widely used in demonstrations. The jar is constructed out of a glass cup nested between two fairly snugly fitting metal cups. When the jar is charged with a high voltage and carefully dismantled, it is discovered that all the parts may be freely handled without discharging the jar. If the pieces are re-assembled, a large spark may still be obtained from it.
This demonstration appears to suggest that capacitors store their charge inside their dielectric. This theory was taught throughout the 1800s. However, this phenomenon is a special effect caused by the high voltage on the Leyden jar. In the dissectible Leyden jar, charge is transferred to the surface of the glass cup by corona discharge when the jar is disassembled; this is the source of the residual charge after the jar is reassembled. Handling the cup while disassembled does not provide enough contact to remove all the surface charge. Soda glass is hygroscopic and forms a partially conductive coating on its surface, which holds the charge. Addenbrooke (1922) found that in a dissectible jar made of paraffin wax, or glass baked to remove moisture, the charge remained on the metal plates. Zeleny (1944) confirmed these results and observed the corona charge transfer.
If a charged Leyden jar is discharged by shorting the inner and outer coatings and left to sit for a few minutes, the jar will recover some of its previous charge, and a second spark can be obtained from it. Often this can be repeated, and a series of 4 or 5 sparks, decreasing in length, can be obtained at intervals. This effect is caused by dielectric absorption.
Capacity
The Leyden jar is a high-voltage device; it is estimated that at a maximum the early Leyden jars could be charged to 20,000 to 60,000 volts. The center rod electrode has a metal ball on the end to prevent leakage of the charge into the air by corona discharge. It was first used in electrostatics experiments, and later in high-voltage equipment such as spark-gap radio transmitters and electrotherapy machines.
Originally, the amount of capacitance was measured in number of 'jars' of a given size, or through the total coated area, assuming reasonably standard thickness and composition of the glass. A typical Leyden jar of one pint size has a capacitance of about 1 nF.
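To see where a figure of about 1 nF comes from, the foil-coated wall of a jar can be treated as a parallel-plate capacitor wrapped into a cylinder, C = ε0·εr·A/d. The sketch below is only an order-of-magnitude check; the jar dimensions, glass thickness, and permittivity are illustrative assumptions rather than measured values.

```python
import math

epsilon_0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 5.0             # assumed relative permittivity of soda-lime glass (roughly 4-7)
radius = 0.04           # assumed jar radius, m
height = 0.12           # assumed height of the foil coating, m
thickness = 2.0e-3      # assumed glass wall thickness, m

area = 2 * math.pi * radius * height                 # coated wall area, m^2
capacitance = epsilon_0 * eps_r * area / thickness   # farads

voltage = 30e3                              # a charging voltage within the range quoted above, V
energy = 0.5 * capacitance * voltage ** 2   # stored energy, J

print(f"capacitance ~ {capacitance * 1e9:.2f} nF")   # comes out on the order of 1 nF
print(f"energy at 30 kV ~ {energy:.2f} J")
```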
Uses
Beginning in the late 18th century it was used in the medical field of electrotherapy to treat a variety of diseases by electric shock. By the middle of the 19th century, the Leyden jar had become common enough for writers to assume their readers knew of and understood its basic operation. Around the turn of the century it began to be widely used in spark-gap transmitters and medical electrotherapy equipment.
The development of the new technology of radio in the early 20th century encouraged the reduction in the size of Leyden jars as well as the reduction of undesired inductance and resistance. These improvements along with improved dielectrics caused the Leyden jar to evolve into the modern compact form of capacitor.
| Technology | Components | null |
18111 | https://en.wikipedia.org/wiki/Lepus%20%28constellation%29 | Lepus (constellation) | Lepus is a constellation lying just south of the celestial equator. Its name is Latin for hare. It is located immediately south of Orion (the hunter), and is sometimes represented as a hare being chased by Orion or by Orion's hunting dogs.
Although the hare does not represent any particular figure in Greek mythology, Lepus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations.
History and mythology
Lepus is most often represented as a hare being hunted by Orion, whose hunting dogs (Canis Major and Canis Minor) pursue it. The constellation is also associated with the Moon rabbit.
Four stars of this constellation (α, β, γ, δ Lep) form a quadrilateral and are known as ‘Arsh al-Jawzā', "the Throne of Jawzā'" or Kursiyy al-Jawzā' al-Mu'akhkhar, "the Hindmost Chair of Jawzā'" and al-Nihāl, "the Camels Quenching Their Thirst" in Arabic.
Features
Stars
There are a fair number of bright stars, both single and double, in Lepus. Alpha Leporis, the brightest star of Lepus, is a white supergiant of magnitude 2.6, 1300 light-years from Earth. Its traditional name, Arneb (أرنب ’arnab), means "hare" in Arabic. Beta Leporis, traditionally known as Nihal (Arabic for "quenching their thirst"), is a yellow giant of magnitude 2.8, 159 light-years from Earth. Gamma Leporis is a double star divisible in binoculars. The primary is a yellow star of magnitude 3.6, 29 light-years from Earth. The secondary is an orange star of magnitude 6.2. Delta Leporis is a yellow giant of magnitude 3.8, 112 light-years from Earth. Epsilon Leporis is an orange giant of magnitude 3.2, 227 light-years from Earth. Kappa Leporis is a double star divisible in medium aperture amateur telescopes, 560 light-years from Earth. The primary is a blue-white star of magnitude 4.4 and the secondary is a star of magnitude 7.4.
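The apparent magnitudes and distances quoted above can be combined, using the standard distance-modulus relation M = m − 5·log10(d/10 pc), to estimate how luminous these stars really are. The short sketch below applies that textbook formula to the figures given for Alpha and Beta Leporis; it is illustrative only, and the rounded results depend on the quoted distances.

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance-modulus relation: M = m - 5*log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# Figures quoted above for the two brightest stars of Lepus.
print(f"Arneb (Alpha Lep): M ~ {absolute_magnitude(2.6, 1300):+.1f}")  # about -5.4, a luminous supergiant
print(f"Nihal (Beta Lep):  M ~ {absolute_magnitude(2.8, 159):+.1f}")   # about -0.6, a giant
```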
There are several variable stars in Lepus. R Leporis is a Mira variable star. It is also called "Hind's Crimson Star" for its striking red color and because it was named for John Russell Hind. It varies in magnitude from a minimum of 9.8 to a maximum of 7.3, with a period of 420 days. R Leporis is at a distance of 1500 light-years. The color intensifies as the star brightens. It can be as dim as magnitude 12 and as bright as magnitude 5.5. T Leporis is also a Mira variable observed in detail by ESO's Very Large Telescope Interferometer. RX Leporis is a semi-regular red giant that has a period of 2 months. It has a minimum magnitude of 7.4 and a maximum magnitude of 5.0.
Deep-sky objects
There is one Messier object in Lepus, M79. It is a globular cluster of magnitude 8.0, 42,000 light-years from Earth. One of the few globular clusters visible in the Northern Celestial Hemisphere's winter, it is a Shapley class V cluster, which means that it has an intermediate concentration towards its center. It is often described as having a "starfish" shape.
M79 was discovered in 1780 by Pierre Méchain.
| Physical sciences | Other | Astronomy |
18112 | https://en.wikipedia.org/wiki/Lupus%20%28constellation%29 | Lupus (constellation) | Lupus is a constellation of the mid-Southern Sky. Its name is Latin for wolf. Lupus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations, but it was long an asterism associated with the larger constellation Centaurus, which lies just to its west.
History and mythology
In ancient times, the constellation was considered an asterism within Centaurus, and was thought to represent an arbitrary animal, killed, or about to be killed, on behalf of, or for, Centaurus. An alternative visualization, attested by Eratosthenes, saw this constellation as a wineskin held by Centaurus. It was not separated from Centaurus until Hipparchus of Bithynia gave it a name meaning "beast" in the 2nd century BC.
The Greek constellation is probably based on the Babylonian figure known as the Mad Dog (UR.IDIM). This was a strange hybrid creature that combined the head and torso of a man with the legs and tail of a lion (the cuneiform sign 'UR' simply refers to a large carnivore; lions, wolves and dogs are all included). It is often found in association with the sun god and another mythical being called the Bison-man, which is supposedly related to the Greek constellation of Centaurus.
In Arab folk astronomy, Lupus and Centaurus were collectively called الشماريخ, meaning the dense branches of the date palm's fruit.
Later, in Islamic medieval astronomy, it was treated as a separate constellation, though still drawn together with Centaurus, and was named السبع, a term used for any predatory wild beast (much like the earlier Greek name). In some manuscripts of Al-Sufi's Book of Fixed Stars and on celestial globes it was drawn as a lion; in others, it is drawn as a wolf, both conforming to that name.
In Europe, no particular animal was associated with it until the Latin translation of Ptolemy's work identified it with the wolf.
Characteristics
Lupus is bordered by six different constellations, although one of them (Hydra) merely touches at the corner. The other five are Scorpius (the scorpion), Norma (the right angle), Circinus (the compass), Libra (the balance scale), and Centaurus (the centaur). Covering 333.7 square degrees and 0.809% of the night sky, it ranks 46th of the 88 modern constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is Lup. The official constellation boundaries are defined by a twelve-sided polygon. In the equatorial coordinate system, the declinations of these borders lie between −29.83° and −55.58°. The whole constellation is visible to observers south of latitude 34°N.
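The quoted sky fraction follows from simple arithmetic: the whole celestial sphere covers 4π steradians, or about 41,253 square degrees, so 333.7 square degrees works out to roughly 0.8%. A quick check:

```python
import math

WHOLE_SKY_SQ_DEG = 4 * math.pi * (180 / math.pi) ** 2  # ~41,253 square degrees
lupus_area_sq_deg = 333.7

fraction = lupus_area_sq_deg / WHOLE_SKY_SQ_DEG
print(f"whole sky: {WHOLE_SKY_SQ_DEG:.0f} square degrees")
print(f"Lupus covers {fraction:.3%} of the sky")  # ~0.809%
```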
Features
Stars
Overall, there are 127 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5. In his book Star Names and Their Meanings, R. H. Allen gave the names Yang Mun for Alpha Lupi, the brightest star in Lupus, and KeKwan for the blue giant Beta Lupi, both from Chinese. However, the first name is in error; both stars were part of a large Chinese constellation known in modern transliteration as Qíguān, the Imperial Guards.
Most of the brightest stars in Lupus are massive members of the nearest OB association, Scorpius–Centaurus.
Alpha Lupi is an ageing blue giant star of spectral type B1.5 III that is 460 ± 10 light-years distant from Earth. It is a Beta Cephei variable, pulsating in brightness by 0.03 of a magnitude every 7 hours and 6 minutes.
Deep-sky objects
Towards the north of the constellation are globular clusters NGC 5824 and NGC 5986, and close by the dark nebula B 228. To the south are two open clusters, NGC 5822 and NGC 5749, as well as globular cluster NGC 5927 on the eastern border with Norma. On the western border are two spiral galaxies and the Wolf–Rayet planetary nebula IC 4406, containing some of the hottest stars in existence. IC 4406, also called the Retina Nebula, is a cylindrical nebula at a distance of 5,000 light-years. It has dust lanes throughout its center. Another planetary nebula, NGC 5882, is towards the center of the constellation. The transiting exoplanet Lupus-TR-3b lies in this constellation. The historic supernova SN 1006 is described by various sources as appearing on April 30 to May 1, 1006, in the constellation of Lupus.
ESO 274-1 is a spiral galaxy seen edge-on that requires an amateur telescope with at least 12 inches of aperture to view. It can be found by using Lambda Lupi and Mu Lupi as markers, and can only be seen under very dark skies. It is 9 arcminutes by 0.7 arcminutes with a small, elliptical nucleus.
| Physical sciences | Other | Astronomy |
18113 | https://en.wikipedia.org/wiki/Lyra | Lyra | Lyra (Latin for "lyre", from Greek λύρα) is a small constellation. It is one of the 48 listed by the 2nd-century astronomer Ptolemy, and is one of the modern 88 constellations recognized by the International Astronomical Union. Lyra was often represented on star maps as a vulture or an eagle carrying a lyre, and hence is sometimes referred to as Vultur Cadens or Aquila Cadens ("Falling Vulture" or "Falling Eagle"), respectively. Beginning at the north, Lyra is bordered by Draco, Hercules, Vulpecula, and Cygnus. Lyra is nearly overhead in temperate northern latitudes shortly after midnight at the start of summer. From the equator to about the 40th parallel south it is visible low in the northern sky during the same (thus winter) months.
Vega, Lyra's brightest star, is one of the brightest stars in the night sky, and forms a corner of the famed Summer Triangle asterism. Beta Lyrae is the prototype of a class of binary stars known as Beta Lyrae variables. These binary stars are so close to each other that they become egg-shaped and material flows from one to the other. Epsilon Lyrae, known informally as the Double Double, is a complex multiple star system. Lyra also hosts the Ring Nebula, the second-discovered and best-known planetary nebula.
History
In Greek mythology, Lyra represents the lyre of Orpheus. Orpheus's music was said to be so great that even inanimate objects such as rocks could be charmed. Joining Jason and the Argonauts, his music was able to quell the voices of the dangerous Sirens, who sang tempting songs to the Argonauts.
At one point, Orpheus married Eurydice, a nymph. While fleeing from an attack by Aristaeus, she stepped on a snake that bit her, killing her. To reclaim her, Orpheus entered the Underworld, where the music from his lyre charmed Hades, the god of the Underworld. Hades relented and let Orpheus bring Eurydice back, on the condition that he never once look back until outside. Unfortunately, near the very end, Orpheus faltered and looked back, causing Eurydice to be left in the Underworld forever. Orpheus spent the rest of his life strumming his lyre while wandering aimlessly through the land, rejecting all marriage offers from women.
There are two main competing myths relating to the death of Orpheus. According to Eratosthenes, Orpheus failed to make a necessary sacrifice to Dionysus due to his regard for Apollo as the supreme deity instead. Dionysus then sent his followers to rip Orpheus apart. Ovid tells a rather different story, saying that women, in retribution for Orpheus's rejection of marriage offers, ganged up and threw stones and spears. At first, his music charmed them as well, but eventually their numbers and clamor overwhelmed his music and he was hit by the spears. Both myths then state that his lyre was placed in the sky by Zeus and Orpheus's bones were buried by the muses. In a third myth, he was killed by the Thracian women because he looked on the rites of Father Liber (Dionysus).
A Roman book attributed to Hyginus also records another myth about Lyra, which said that it belonged to Theseus "for he was skilful in all the arts and seems to have learned the lyre as well". The book reports that the neighbouring constellation now known as Hercules was said to depict many different mythical figures, including Theseus, Orpheus, or the musician Thamyris. The proximity of these two constellations and Corona Borealis (perhaps a symbol of Theseus' royalty) could indicate that the three constellations were invented as a group.
Vega and its surrounding stars are also treated as a constellation in other cultures. The area corresponding to Lyra was seen by the Arabs as a vulture or an eagle diving with folded wings. In Wales, Lyra is known as King Arthur's Harp (Talyn Arthur), and King David's harp. The Persian Hafiz called it the Lyre of Zurah.
It has been called the Manger of the Infant Saviour, Praesepe Salvatoris. In Australian Aboriginal astronomy, Lyra is known by the Boorong people in Victoria as the Malleefowl constellation. Lyra was known as Urcuchillay by the Incas and was worshipped as an animal deity.
Characteristics
Lyra is bordered by Vulpecula to the south, Hercules to the west, Draco to the north, and Cygnus to the east. Covering 286.5 square degrees, it ranks 52nd of the 88 modern constellations in size. It appears prominently in the northern sky during the Northern Hemisphere's summer, and the whole constellation is visible for at least part of the year to observers north of latitude 42°S. Its main asterism consists of six stars, and 73 stars in total are brighter than magnitude 6.5. The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a 17-sided polygon. The International Astronomical Union (IAU) adopted the three-letter abbreviation "Lyr" for the constellation in 1922.
Features
Stars
German cartographer Johann Bayer used the Greek letters alpha through nu to label the most prominent stars in the constellation. English astronomer John Flamsteed observed and labelled two stars each as delta, epsilon, zeta and nu. He added pi and rho, not using xi and omicron as Bayer used these letters to denote Cygnus and Hercules on his map.
The brightest star in the constellation is Vega (Alpha Lyrae), a main-sequence star of spectral type A0Va. Only 7.7 parsecs distant, Vega is a Delta Scuti variable, varying between magnitudes −0.02 and 0.07 over 0.2 days. On average, it is the second-brightest star of the northern hemisphere (after Arcturus) and the fifth-brightest star in all, surpassed only by Arcturus, Alpha Centauri, Canopus, and Sirius. Vega was the pole star in the year 12,000 BCE, and will again become the pole star around 14,000 CE.
Vega is one of the most extensively studied of all stars, and has been called "arguably the next most important star in the sky after the Sun". Vega was the first star other than the Sun to be photographed, as well as the first to have a clear spectrum recorded, showing absorption lines for the first time. The star was the first single main-sequence star other than the Sun known to emit X-rays, and is surrounded by a circumstellar debris disk, similar to the Kuiper Belt. Vega forms one corner of the famous Summer Triangle asterism; along with Altair and Deneb, these three stars form a prominent triangle during the northern hemisphere summer.
Vega also forms one vertex of a much smaller triangle, along with Epsilon and Zeta Lyrae. Zeta forms a wide binary star visible in binoculars, consisting of an Am star and an F-type subgiant. The Am star has an additional close companion, bringing the total number of stars in the system to three. Epsilon is a more famous wide binary that can even be separated by the naked eye under excellent conditions. Both components are themselves close binaries which can be seen with telescopes to consist of A- and F-type stars, and a faint star was recently found to orbit component C as well, for a total of five stars.
In contrast to Zeta and Epsilon Lyrae, Delta Lyrae is an optical double, with the two stars simply lying along the same line of sight east of Zeta. The brighter and closer of the two, Delta2 Lyrae, is a 4th-magnitude red bright giant that varies semiregularly by around 0.2 magnitudes with a dominant period of 79 days, while the fainter Delta1 Lyrae is a spectroscopic binary consisting of a B-type primary and an unknown secondary. Both systems, however, have very similar radial velocities, and are the two brightest members of a sparse open cluster known as the Delta Lyrae cluster.
South of Delta is Sulafat (Gamma Lyrae), a blue giant and the second-brightest star in the constellation. Around 190 parsecs distant, it has been referred to as a "superficially normal" star.
The final star forming the lyre's figure is Sheliak (Beta Lyrae), also a binary composed of a blue bright giant and an early B-type star. In this case, the stars are so close together that the larger giant is overflowing its Roche lobe and transferring material to the secondary, forming a semidetached system. The secondary, originally the less massive of the two, has accreted so much mass that it is now substantially more massive, albeit smaller, than the primary, and is surrounded by a thick accretion disk. The plane of the orbit is aligned with Earth and the system thus shows eclipses, dropping nearly a full magnitude from its 3rd-magnitude baseline every 13 days, although its period is increasing by around 19 seconds per year. It is the prototype of the Beta Lyrae variables, eclipsing semidetached binaries of early spectral types in which there are no exact onsets of eclipses, but rather continuous changes in brightness.
Another easy-to-spot variable is the bright R Lyrae, north of the main asterism. Also known as 13 Lyrae, it is a 4th-magnitude red giant semiregular variable that varies by several tenths of a magnitude. Its periodicity is complex, with several different periods of varying lengths, most notably one of 46 days and one of 64 days. Even further north is FL Lyrae, a much fainter 9th-magnitude Algol variable that drops by half a magnitude every 2.18 days during the primary eclipse. Both components are main-sequence stars, the primary being late F-type and the secondary late G-type. The system was one of the first main-sequence eclipsing binaries containing a G-type star to have its properties determined as well as those of the better-studied early-type eclipsing binaries.
At the very northernmost edge of the constellation is the even fainter V361 Lyrae, an eclipsing binary that does not easily fall into one of the traditional classes, with features of Beta Lyrae, W Ursae Majoris, and cataclysmic variables. It may be a representative of a very brief phase in which the system is transitioning into a contact binary. It can be found less than a degree away from the naked-eye star 16 Lyrae, a 5th-magnitude A-type subgiant located around 37 parsecs distant.
The brightest star not included in the asterism and the westernmost cataloged by Bayer or Flamsteed is Kappa Lyrae, a typical red giant around 73 parsecs distant. Similar bright orange or red giants include the 4th-magnitude Theta Lyrae, Lambda Lyrae, and HD 173780. Lambda is located just south of Gamma, Theta is positioned in the east, and HD 173780, the brightest star in the constellation with no Bayer or Flamsteed designation, is more southerly. Just north of Theta and of almost exactly the same magnitude is Eta Lyrae, a blue subgiant with a near-solar metal abundance. Also nearby is the faint HP Lyrae, a post-asymptotic giant branch (AGB) star that shows variability. The reason for its variability is still a mystery: first cataloged as an eclipsing binary, it was theorized to be an RV Tauri variable in 2002, but if so, it would be by far the hottest such variable discovered.
In the extreme east is RR Lyrae, the prototype of the large class of variables known as RR Lyrae variables, which are pulsating variables similar to Cepheids, but are evolved population II stars of spectral types A and F. Such stars are usually not found in a galaxy's thin disk, but rather in the galactic halo. Such stars serve as standard candles, and thus are a reliable way to calculate distances to the globular clusters in which they reside. RR Lyrae itself varies between magnitudes 7 and 8 while exhibiting the Blazhko effect. The easternmost star designated by Flamsteed, 19 Lyrae, is also a small-amplitude variable, an Alpha2 Canum Venaticorum variable with a period of just over one day.
Another evolved star is the naked-eye variable XY Lyrae, a red bright giant just north of Vega that varies between 6th and 7th magnitudes over a period of 120 days. Also just visible to the naked eye is the peculiar classical Cepheid V473 Lyrae. It is unique in that it is the only known Cepheid in the Milky Way to undergo periodic phase and amplitude changes, analogous to the Blazhko effect in RR Lyrae stars. At 1.5 days, its period was the shortest known for a classical Cepheid at the time of its discovery. W and S Lyrae are two of the many Mira variables in Lyra. W varies between 7th and 12th magnitudes over approximately 200 days, while S, slightly fainter, is a silicate carbon star, likely of the J-type. Another evolved star is EP Lyrae, a faint RV Tauri variable and an "extreme example" of a post-AGB star. It and a likely companion are surrounded by a circumstellar disk of material.
Rather close to Earth is Gliese 758. The sunlike primary star has a brown dwarf companion, the coldest to have been imaged around a sunlike star in thermal light when it was discovered in 2009. Only slightly farther away is V478 Lyrae, an eclipsing RS Canum Venaticorum variable whose primary star shows active starspot activity.
One of the most peculiar systems in Lyra is MV Lyrae, a nova-like star consisting of a red dwarf and a white dwarf. Originally classified as a VY Sculptoris star due to spending most time at maximum brightness, since around 1979 the system has been dominantly at minimum brightness, with periodic outbursts. Its nature is still not fully understood. Another outbursting star is AY Lyrae, an SU Ursae Majoris-type dwarf nova that has undergone several superoutbursts. Of the same type is V344 Lyrae, notable for an extremely short period between superoutbursts coupled with one of the highest amplitudes for such a period. The true nova HR Lyrae flared in 1919 to a maximum magnitude of 6.5, over 9.5 magnitudes higher than in quiescence. Some of its characteristics are similar to those of recurring novae.
Deep-sky objects
M57, also known as the "Ring Nebula" and NGC 6720, at a distance of 2,550 light-years from Earth is one of the best known planetary nebulae and the second to be discovered; its integrated magnitude is 8.8. It was discovered in 1779 by Antoine Darquier, 15 years after Charles Messier discovered the Dumbbell Nebula. Astronomers have determined that it is between 6,000 and 8,000 years old; it is approximately one light-year in diameter. The outer part of the nebula appears red in photographs because of emission from ionized hydrogen. The middle region is colored green; doubly ionized oxygen emits greenish-blue light. The hottest region, closest to the central star, appears blue because of emission from helium. The central star itself is a white dwarf with a temperature of 120,000 kelvins. In telescopes, the nebula appears as a visible ring with a green tinge; it is slightly elliptical because its three-dimensional shape is a torus or cylinder seen from a slight angle. It can be found halfway between Gamma Lyrae and Beta Lyrae.
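The quoted size and distance imply the nebula's angular extent via the small-angle approximation θ ≈ diameter/distance. The sketch below checks that a roughly one-light-year nebula at 2,550 light-years subtends on the order of an arcminute, consistent with its appearance in small telescopes.

```python
ARCSEC_PER_RADIAN = 206_265  # arcseconds in one radian

diameter_ly = 1.0     # approximate physical diameter quoted above
distance_ly = 2550.0  # distance quoted above

theta_arcsec = (diameter_ly / distance_ly) * ARCSEC_PER_RADIAN
print(f"angular diameter ~ {theta_arcsec:.0f} arcsec (~{theta_arcsec / 60:.1f} arcmin)")
```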
Another planetary nebula in Lyra is Abell 46. The central star, V477 Lyrae, is an eclipsing post-common-envelope binary, consisting of a white dwarf primary and an oversized secondary component due to recent accretion. The nebula itself is of relatively low surface brightness compared to the central star, and is undersized for the primary's mass for reasons not yet fully understood.
NGC 6791 is a cluster of stars in Lyra. It contains three age groups of stars: 4 billion year-old white dwarfs, 6 billion year-old white dwarfs and 8 billion year-old normal stars.
NGC 6745 is an irregular spiral galaxy in Lyra that is at a distance of 208 million light-years. Several million years ago, it collided with a smaller galaxy, which created a region filled with young, hot, blue stars. Astronomers do not know if the collision was simply a glancing blow or a prelude to a full-on merger, which would end with the two galaxies incorporated into one larger, probably elliptical galaxy.
A remarkable long-duration gamma-ray burst was GRB 050525A, which flared in 2005. The afterglow re-brightened at 33 minutes after the original burst, only the third found to exhibit such an effect in the timeframe, and unable to be completely explained by known phenomena. The light curve observed over the next 100 days was consistent with that of a supernova or even a hypernova, dubbed SN 2005nc. The host galaxy proved elusive to find at first, although it was subsequently identified.
Exoplanets
In orbit around the orange subgiant star HD 177830 is one of the earliest exoplanets to be detected. A jovian-mass planet, it follows an eccentric orbit with a period of 390 days. A second planet closer to the star was discovered in 2011. Visible to the naked eye are HD 173416, a yellow giant hosting a planet over twice the mass of Jupiter discovered in 2009; and HD 176051, a low-mass binary star containing another high-mass planet. Just short of naked-eye visibility is HD 178911, a triple system consisting of a close binary and a visually separable sunlike star. The sunlike star has a planet with over 6 Jupiter masses discovered in 2001, the second found in a triple system after that of 16 Cygni.
One of the most-studied exoplanets in the night sky is TrES-1b, in orbit around the star GSC 02652-01324. Detected from a transit of its parent star, the planet has around 3/4 the mass of Jupiter, yet orbits its parent star in only three days. The transits have been reported to have anomalies multiple times. Originally thought to be possibly due to the presence of an Earth-like planet, it is now accepted that the irregularities are due to a large starspot. Also discovered by the transit method is WASP-3b, with 1.75 times the mass of Jupiter. At the time of its discovery, it was one of the hottest known exoplanets, in orbit around the F-type main-sequence star WASP-3. Similar to TrES-1b, irregularities in the transits had left open the possibility of a second planet, although this now appears unlikely as well.
Lyra is one of three constellations (along with neighboring Cygnus and Draco) to be in the Kepler Mission's field of view, and as such it contains many more known exoplanets than most constellations. One of the first discovered by the mission is Kepler-7b, an extremely low-density exoplanet with less than half the mass of Jupiter, yet nearly 1.5 times the radius. Almost as sparse is Kepler-8b, only slightly more massive and of a similar radius. The Kepler-20 system contains five known planets; three of them are only slightly smaller than Neptune, while the other two are some of the first Earth-sized exoplanets to be discovered. Kepler-37 is another star with an exoplanet discovered by Kepler; the planet is the smallest extrasolar planet known as of February 2013.
In April 2013, it was announced that of the five planets orbiting Kepler-62, at least two—Kepler-62e and Kepler-62f—are within the boundaries of the habitable zone of that star, where scientists think liquid water could exist, and are both candidates for being a solid, rocky, earth-like planet. The exoplanets are 1.6 and 1.4 times the diameter of Earth respectively, with their star Kepler-62 at a distance of 1,200 light-years.
| Physical sciences | Other | Astronomy |
18120 | https://en.wikipedia.org/wiki/Lysosome | Lysosome | A lysosome is a single membrane-bound organelle found in many animal cells. They are spherical vesicles that contain hydrolytic enzymes that digest many kinds of biomolecules. A lysosome has a specific composition, of both its membrane proteins and its lumenal proteins. The lumen's pH (~4.5–5.0) is optimal for the enzymes involved in hydrolysis, analogous to the activity of the stomach. Besides degradation of polymers, the lysosome is involved in cell processes of secretion, plasma membrane repair, apoptosis, cell signaling, and energy metabolism.
Lysosomes are degradative organelles that act as the waste disposal system of the cell by digesting used materials in the cytoplasm, from both inside and outside the cell. Material from outside the cell is taken up through endocytosis, while material from the inside of the cell is digested through autophagy. The sizes of the organelles vary greatly—the larger ones can be more than 10 times the size of the smaller ones. They were discovered and named by Belgian biologist Christian de Duve, who eventually received the Nobel Prize in Physiology or Medicine in 1974.
Lysosomes contain more than 60 different enzymes, and have more than 50 membrane proteins. Enzymes of the lysosomes are synthesized in the rough endoplasmic reticulum and exported to the Golgi apparatus upon recruitment by a complex composed of CLN6 and CLN8 proteins. The enzymes are transported from the Golgi apparatus to lysosomes in small vesicles, which fuse with larger acidic vesicles. Enzymes destined for a lysosome are tagged with the molecule mannose 6-phosphate, so that they are properly sorted into acidified vesicles.
In 2009, Marco Sardiello and co-workers discovered that the synthesis of most lysosomal enzymes and membrane proteins is controlled by transcription factor EB (TFEB), which promotes the transcription of nuclear genes. Mutations in the genes for these enzymes are responsible for more than 50 different human genetic disorders collectively known as lysosomal storage diseases. These diseases result in an accumulation of specific substrates, due to the inability to break them down. These genetic defects are related to several neurodegenerative disorders, cancers, cardiovascular diseases, and aging-related diseases.
Etymology and pronunciation
The word lysosome is Neo-Latin that uses the combining forms lyso- (referring to lysis and derived from the Latin lysis, meaning "to loosen", via Ancient Greek λύσις [lúsis]), and -some, from soma, "body", yielding "body that lyses" or "lytic body". The adjectival form is lysosomal. The forms *lyosome and *lyosomal are much rarer; they use the lyo- form of the prefix but are often treated by readers and editors as mere unthinking replications of typos, which has no doubt been true as often as not.
Discovery
Christian de Duve, at the Laboratory of Physiological Chemistry at the Catholic University of Louvain in Belgium, had been studying the mechanism of action of insulin in liver cells. By 1949, he and his team had focused on the enzyme called glucose 6-phosphatase, which is the first crucial enzyme in sugar metabolism and the target of insulin. They already suspected that this enzyme played a key role in regulating blood sugar levels. However, even after a series of experiments, they failed to purify and isolate the enzyme from the cellular extracts. Therefore, they tried a more arduous procedure of cell fractionation, by which cellular components are separated based on their sizes using centrifugation.
They succeeded in detecting the enzyme activity from the microsomal fraction. This was the crucial step in the serendipitous discovery of lysosomes. To estimate this enzyme activity, they used that of the standardized enzyme acid phosphatase and found that the activity was only 10% of the expected value. One day, the enzyme activity of purified cell fractions which had been refrigerated for five days was measured. Surprisingly, the enzyme activity had increased to the normal level, matching that of a fresh sample. The result was the same no matter how many times they repeated the estimation, and led to the conclusion that a membrane-like barrier limited the accessibility of the enzyme to its substrate, and that the enzymes were able to diffuse out after a few days (and react with their substrate). They described this membrane-like barrier as a "saclike structure surrounded by a membrane and containing acid phosphatase."
It became clear that this enzyme from the cell fraction came from membranous fractions, which were definitely cell organelles, and in 1955 De Duve named them "lysosomes" to reflect their digestive properties. The same year, Alex B. Novikoff from the University of Vermont visited de Duve's laboratory, and successfully obtained the first electron micrographs of the new organelle. Using a staining method for acid phosphatase, de Duve and Novikoff confirmed the location of the hydrolytic enzymes of lysosomes using light and electron microscopic studies. de Duve won the Nobel Prize in Physiology or Medicine in 1974 for this discovery.
Originally, De Duve had termed the organelles the "suicide bags" or "suicide sacs" of the cells, for their hypothesized role in apoptosis. However, it has since been concluded that they only play a minor role in cell death.
Function and structure
Lysosomes contain a variety of enzymes, enabling the cell to break down various biomolecules it engulfs, including peptides, nucleic acids, carbohydrates, and lipids (lysosomal lipase). The enzymes responsible for this hydrolysis require an acidic environment for optimal activity.
In addition to being able to break down polymers, lysosomes are capable of fusing with other organelles and digesting large structures or cellular debris; through cooperation with phagosomes, they are able to conduct autophagy, clearing out damaged structures. Similarly, they are able to break down virus particles or bacteria in phagocytosis of macrophages.
The size of lysosomes varies from 0.1 μm to 1.2 μm. With a pH ranging from ~4.5–5.0, the interior of the lysosomes is acidic compared to the slightly basic cytosol (pH 7.2). The lysosomal membrane protects the cytosol, and therefore the rest of the cell, from the degradative enzymes within the lysosome. The cell is additionally protected from any lysosomal acid hydrolases that drain into the cytosol, as these enzymes are pH-sensitive and do not function well or at all in the alkaline environment of the cytosol. This ensures that cytosolic molecules and organelles are not destroyed in case there is leakage of the hydrolytic enzymes from the lysosome.
The lysosome maintains its pH differential by pumping in protons (H+ ions) from the cytosol across the membrane via proton pumps and chloride ion channels. Vacuolar-ATPases are responsible for transport of protons, while the counter transport of chloride ions is performed by ClC-7 Cl−/H+ antiporter. In this way a steady acidic environment is maintained.
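Because [H+] = 10^(−pH), the pH values quoted above correspond to a free proton concentration in the lysosomal lumen a few hundred times higher than in the cytosol, which is the gradient the V-ATPase must maintain. A small illustrative calculation, taking pH 4.75 as the midpoint of the quoted lumenal range:

```python
lysosome_pH = 4.75  # midpoint of the ~4.5-5.0 range quoted above
cytosol_pH = 7.2

h_lysosome = 10 ** (-lysosome_pH)  # free H+ concentration, mol/L
h_cytosol = 10 ** (-cytosol_pH)

print(f"[H+] in lysosome: {h_lysosome:.2e} M")
print(f"[H+] in cytosol:  {h_cytosol:.2e} M")
print(f"gradient: ~{h_lysosome / h_cytosol:.0f}-fold more acidic")
```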
It sources its versatile capacity for degradation by import of enzymes with specificity for different substrates; cathepsins are the major class of hydrolytic enzymes, while lysosomal alpha-glucosidase is responsible for carbohydrates, and lysosomal acid phosphatase is necessary to release phosphate groups of phospholipids.
Recent research also indicates that lysosomes can act as a source of intracellular calcium.
Formation
Many components of animal cells are recycled by transferring them inside or embedded in sections of membrane. For instance, in endocytosis (more specifically, macropinocytosis), a portion of the cell's plasma membrane pinches off to form vesicles that will eventually fuse with an organelle within the cell. Without active replenishment, the plasma membrane would continuously decrease in size. It is thought that lysosomes participate in this dynamic membrane exchange system and are formed by a gradual maturation process from endosomes.
The production of lysosomal proteins suggests one method of lysosome sustainment. Lysosomal protein genes are transcribed in the nucleus in a process that is controlled by transcription factor EB (TFEB). mRNA transcripts exit the nucleus into the cytosol, where they are translated by ribosomes. The nascent peptide chains are translocated into the rough endoplasmic reticulum, where they are modified. Lysosomal soluble proteins exit the endoplasmic reticulum via COPII-coated vesicles after recruitment by the EGRESS complex (ER-to-Golgi relaying of enzymes of the lysosomal system), which is composed of CLN6 and CLN8 proteins. COPII vesicles then deliver lysosomal enzymes to the Golgi apparatus, where a specific lysosomal tag, mannose 6-phosphate, is added to the peptides. The presence of these tags allow for binding to mannose 6-phosphate receptors in the Golgi apparatus, a phenomenon that is crucial for proper packaging into vesicles destined for the lysosomal system.
Upon leaving the Golgi apparatus, the lysosomal enzyme-filled vesicle fuses with a late endosome, a relatively acidic organelle with an approximate pH of 5.5. This acidic environment causes dissociation of the lysosomal enzymes from the mannose 6-phosphate receptors. The enzymes are packed into vesicles for further transport to established lysosomes. The late endosome itself can eventually grow into a mature lysosome, as evidenced by the transport of endosomal membrane components from the lysosomes back to the endosomes.
Pathogen entry
As the endpoint of endocytosis, the lysosome also acts as a safeguard in preventing pathogens from being able to reach the cytoplasm before being degraded. Pathogens often hijack endocytotic pathways such as pinocytosis in order to gain entry into the cell. The lysosome prevents easy entry into the cell by hydrolyzing the biomolecules of pathogens necessary for their replication strategies; reduced lysosomal activity results in an increase in viral infectivity, including HIV. In addition, AB5 toxins such as cholera hijack the endosomal pathway while evading lysosomal degradation.
Clinical significance
Lysosomes are involved in a group of genetically inherited deficiencies or mutations called lysosomal storage diseases (LSDs), inborn errors of metabolism caused by a dysfunction of one of the enzymes. The rate of incidence is estimated to be 1 in 5,000 births, and the true figure is expected to be higher as many cases are likely to be undiagnosed or misdiagnosed. The primary cause is deficiency of an acid hydrolase. Other conditions are due to defects in lysosomal membrane proteins that fail to transport the enzyme, or in non-enzymatic soluble lysosomal proteins. The initial effect of such disorders is accumulation of specific macromolecules or monomeric compounds inside the endosomal–autophagic–lysosomal system. This results in abnormal signaling pathways, calcium homeostasis, lipid biosynthesis and degradation, and intracellular trafficking, ultimately leading to pathogenetic disorders. The organs most affected are brain, viscera, bone and cartilage.
There is no direct medical treatment to cure LSDs. The most common LSD is Gaucher's disease, which is due to deficiency of the enzyme glucocerebrosidase. Consequently, the enzyme's substrate, the fatty substance glucosylceramide, accumulates, particularly in white blood cells, which in turn affects spleen, liver, kidneys, lungs, brain and bone marrow. The disease is characterized by bruises, fatigue, anaemia, low blood platelets, osteoporosis, and enlargement of the liver and spleen. As of 2017, enzyme replacement therapy is available for treating 8 of the 50-60 known LSDs.
The most severe and most rarely found lysosomal storage disease is inclusion cell disease.
Metachromatic leukodystrophy is another lysosomal storage disease that also affects sphingolipid metabolism.
Dysfunctional lysosome activity is also heavily implicated in the biology of aging, and age-related diseases such as Alzheimer's, Parkinson's, and cardiovascular disease.
Lysosomotropism
Weak bases with lipophilic properties accumulate in acidic intracellular compartments like lysosomes. While the plasma and lysosomal membranes are permeable for neutral and uncharged species of weak bases, the charged protonated species of weak bases do not permeate biomembranes and accumulate within lysosomes. The concentration within lysosomes may reach levels 100 to 1000 fold higher than extracellular concentrations. This phenomenon is called lysosomotropism, "acid trapping" or "proton pump" effect. The amount of accumulation of lysosomotropic compounds may be estimated using a cell-based mathematical model.
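The extent of this acid trapping can be approximated with the classic pH-partition (Henderson-Hasselbalch) argument: if only the uncharged form of a monoprotic weak base crosses membranes freely, the steady-state total-concentration ratio between compartments depends only on the base's pKa and the two pH values. The sketch below is a simplified textbook approximation, not the cell-based mathematical model referenced above, and the pKa and pH values are illustrative.

```python
def trapping_ratio(pKa, pH_inside, pH_outside):
    """Steady-state total (neutral + protonated) concentration ratio, inside/outside,
    for a monoprotic weak base whose neutral form equilibrates across the membrane."""
    return (1 + 10 ** (pKa - pH_inside)) / (1 + 10 ** (pKa - pH_outside))

# Illustrative values only: a lipophilic weak base with pKa ~9,
# lysosomal pH ~4.8 and cytosolic/extracellular pH ~7.4.
ratio = trapping_ratio(pKa=9.0, pH_inside=4.8, pH_outside=7.4)
print(f"predicted lysosomal accumulation: ~{ratio:.0f}-fold")  # a few hundred fold, within the 100-1000x range above
```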
A significant part of the clinically approved drugs are lipophilic weak bases with lysosomotropic properties. This explains a number of pharmacological properties of these drugs, such as high tissue-to-blood concentration gradients or long tissue elimination half-lives; these properties have been found for drugs such as haloperidol, levomepromazine, and amantadine. However, high tissue concentrations and long elimination half-lives are explained also by lipophilicity and absorption of drugs to fatty tissue structures. Important lysosomal enzymes, such as acid sphingomyelinase, may be inhibited by lysosomally accumulated drugs. Such compounds are termed FIASMAs (functional inhibitor of acid sphingomyelinase) and include for example fluoxetine, sertraline, or amitriptyline.
Ambroxol is a lysosomotropic drug in clinical use to treat productive cough, owing to its mucolytic action. Ambroxol triggers the exocytosis of lysosomes via neutralization of lysosomal pH and calcium release from acidic calcium stores. Presumably for this reason, ambroxol was also found to improve cellular function in some diseases of lysosomal origin, such as Parkinson's disease or lysosomal storage diseases.
Systemic lupus erythematosus
Impaired lysosome function is prominent in systemic lupus erythematosus preventing macrophages and monocytes from degrading neutrophil extracellular traps and immune complexes. The failure to degrade internalized immune complexes stems from chronic mTORC2 activity, which impairs lysosome acidification. As a result, immune complexes in the lysosome recycle to the surface of macrophages causing an accumulation of nuclear antigens upstream of multiple lupus-associated pathologies.
Controversy in botany
By scientific convention, the term lysosome is applied to these vesicular organelles only in animals, and the term vacuole is applied to those in plants, fungi and algae (some animal cells also have vacuoles). Discoveries in plant cells since the 1970s started to challenge this definition. Plant vacuoles are found to be much more diverse in structure and function than previously thought. Some vacuoles contain their own hydrolytic enzymes and perform the classic lysosomal activity, which is autophagy. These vacuoles are therefore seen as fulfilling the role of the animal lysosome. Based on de Duve's description that "only when considered as part of a system involved directly or indirectly in intracellular digestion does the term lysosome describe a physiological unit", some botanists strongly argued that these vacuoles are lysosomes. However, this is not universally accepted as the vacuoles are strictly not similar to lysosomes, such as in their specific enzymes and lack of phagocytic functions. Vacuoles do not have catabolic activity and do not undergo exocytosis as lysosomes do.
| Biology and health sciences | Organelles and other cell parts | null |
18131 | https://en.wikipedia.org/wiki/Los%20Angeles%20International%20Airport | Los Angeles International Airport | Los Angeles International Airport is the primary international airport serving Los Angeles and its surrounding metropolitan area, in the U.S. state of California. LAX is located in the Westchester neighborhood of the city of Los Angeles, southwest of downtown Los Angeles, with the commercial and residential areas of Westchester to the north, the city of El Segundo to the south, and the city of Inglewood to the east. LAX is the closest airport to the Westside and the South Bay.
The airport is operated by Los Angeles World Airports (LAWA), a branch of the Los Angeles city government, which also operates the Van Nuys Airport for general aviation. The airport has four parallel runways.
In 2023, LAX handled 75,050,875 passengers, making it the world's eighth-busiest airport, according to the Airports Council International rankings. As the largest and busiest international airport on the West Coast of the United States, LAX is a major international gateway for the country, serving as a connection point for passengers traveling internationally (such as East and Southeast Asia, Australasia, Mexico, and Central America).
The airport holds the record for the world's busiest origin and destination airport, because relative to other airports, many more travelers begin or end their trips in Los Angeles than use it as a connection. In 2019, LAWA reported approximately 88 percent of travelers at LAX were origination and destination passengers, and 12 percent were connecting. It is also the only airport to rank among the top five U.S. airports for both passenger and cargo traffic. LAX serves as a hub, focus city, or operating base for more passenger airlines than any other airport in the United States.
Although LAX is the busiest airport in the Greater Los Angeles area, several other airports serve the region including Burbank, John Wayne (Orange County), Long Beach, Ontario, and San Bernardino.
History
In 1926, the Los Angeles City Council and the Chamber of Commerce recognized the need for the city to have its own airport to tap into the fledgling, but quickly growing, aviation industry. Several locations were considered, but the final choice was a field in the southern part of Westchester. The location had been promoted by real estate agent William W. Mines, and Mines Field as it was known had already been selected to host the 1928 National Air Races. On August 13, 1928 the city leased the land and the newly formed Department of Airports began converting the fields, once used to grow wheat, barley, and lima beans, into dirt landing strips.
The airport opened on October 1, 1928 and the first structure, Hangar No. 1, was erected in 1929. The building still stands at the airport, remaining in active use and listed on the National Register of Historic Places. Over the next year, the airport started to come together: the dirt runway was replaced with an all-weather surface and more hangars, a restaurant, and a control tower were built. On June 7, 1930, the facility was dedicated and renamed Los Angeles Municipal Airport.
The airport was used by private pilots and flying schools, but the city’s vision was that Los Angeles would become the main passenger hub for the area. However, the airport failed to entice any carriers away from the established Burbank Airport or the Grand Central Airport in Glendale.
World War II put a pause on any further development of the airport for passenger use. Before the United States entered the war, the aviation manufacturers located around the airport were busy providing aircraft for the Allied powers, while the flying schools found themselves in high demand. In January 1942, the military assumed control of the airport, stationing fighter planes there, and building naval gun batteries in the ocean dunes to the west.
Meanwhile, airport managers published a master plan for the land and, in early 1943, convinced voters to back a $12.5 million bond for airport improvements. With a plan and funding in place, the airlines were finally convinced to make the move.
After the end of the War, four temporary terminals were quickly erected on the north side of the airport and, on December 9, 1946, American Airlines, Trans World Airlines (TWA), United Airlines, Southwest Airways, and Western Airlines began passenger operations at the airport, with Pan American Airways (Pan Am) joining the next month. The airport was renamed Los Angeles International Airport in 1949.
The temporary terminals remained in place for 15 years but quickly became inadequate, especially as air travel entered the "jet age" and other cities invested in modern facilities. Airport leaders once again convinced voters to back a $59 million bond on June 5, 1956.
The current layout of the passenger facilities was established in 1958 with a plan to build a series of terminals and parking facilities, arranged in the shape of the letter U, in the central portion of the property. The original plan called for the terminal buildings to be connected at the center of the property by a huge steel-and-glass dome. The dome was never built, but a smaller Theme Building, constructed in the central area, became a focal point for people coming to the airport.
The first of the new passenger buildings, Terminals 7 and 8, were opened for United Airlines on June 25, 1961, following opening festivities that lasted several days. Terminals 2, 3, 4, 5, and 6 opened later that same year.
There was a major expansion of the airport in the early 1980s, ahead of the 1984 Summer Olympic Games. In November 1983, a second-level roadway was added, Terminal 1 opened in January 1984 and the Tom Bradley International Terminal opened in June 1984. The original terminals also received expansions and updates in the 1980s.
Since 2008, the airport has been undergoing another major expansion. All of the terminals are being refurbished, and the Tom Bradley International Terminal was substantially rebuilt, with a West Gates satellite concourse added. Outside of the terminal area, the LAX West Intermodal Transportation Facility with 4,300 parking spaces opened in 2021, replacing the former Lot C. A new LAX/Metro Transit Center station and a LAX Consolidated Rent-A-Car Facility (ConRAC) are being built. All will be connected to the terminal area by the LAX Automated People Mover. Altogether, those projects are expected to cost $30 billion and bring LAX's total gates from 146 to 182.
The "X" in LAX
Before the 1930s, US airports used a two-letter abbreviation and "LA" served as the designation for Los Angeles Airport. With rapid growth in the aviation industry, in 1947, the identifiers were expanded to three letters, and "LA" received an extra letter to become "LAX". The "X" does not have any specific meaning. "LAX" is also used for the Port of Los Angeles in San Pedro and by Amtrak for Union Station in Downtown Los Angeles.
Infrastructure
Airfield
Runways 24R/06L and 24L/06R (designated the North Airfield Complex) are north of the airport terminals, while runways 25R/07L and 25L/07R (designated the South Airfield Complex) are south of the airport terminals.
LAX is located with the Pacific Ocean to the west and residential communities on all other sides. Since 1972, Los Angeles World Airports has adopted a "Preferential Runway Use Policy" to minimize noise levels in the communities closest to LAX.
Typically, the loudest operations at an airport are from departing aircraft, with engines operating at high power, so during daytime hours (6:30am to midnight), LAX prefers to operate under the "Westerly Operations" air traffic pattern, named for the prevailing west winds. Under "Westerly Operations", departing aircraft take off to the west, over the ocean, and arriving aircraft approach from the east. To reduce noise to areas north and south of the airport, LAX prefers to use the "inboard" runways (06R/24L and 07L/25R) for departures, closest to the central terminal area and further from residential areas, and the "outboard" runways for arrivals. Historically, over 90% of flights have used the "inboard" departures and "outboard" arrivals scheme.
During night-time hours, when there are fewer aircraft operations and residential areas tend to be more noise sensitive, additional changes are made to reduce noise. Between 10pm and 7am, air traffic controllers try to use the "outboard" runways as little as possible and, between midnight and 6:30am, the air traffic pattern shifts to "Over-Ocean Operations", under which departing aircraft continue to take off to the west, but arriving aircraft also approach from the west, over the ocean.
There are times when the Over-Ocean and Westerly operations are not possible, particularly when the winds originate from the east, typically during inclement weather and when Santa Ana winds occur. In those cases, the airport shifts to the non-preferred "Easterly Operations" air traffic pattern, under which departing aircraft take off to the east, and arriving aircraft approach from the west.
The South Airfield Complex tends to see more operations than the North, because there are a larger number of passenger gates and air cargo operations areas on the south side of the airport grounds. In 2007, the southernmost runway (07R/25L) was moved to the south to accommodate a new central taxiway. There were plans to increase the separation between the runways of the North Airfield Complex, which would have allowed a central taxiway to be built between them, but these faced opposition from residents living north of LAX. The plans were scrapped in 2016, in favor of lifting a gate cap at the airport and building a new park on the airport's north side.
Terminals
Theme Building
The distinctive Theme Building in the Googie style was built in 1961 and resembles a flying saucer that has landed on its four legs. A restaurant with a sweeping view of the airport is suspended beneath two arches that form the legs. The Los Angeles City Council designated the building a Los Angeles Historic-Cultural Monument in 1992. A $4 million renovation, with retro-futuristic interior and electric lighting designed by Walt Disney Imagineering, was completed before the Encounter Restaurant opened there in 1997 but is no longer in business. Visitors are able to take the elevator up to the observation deck of the "Theme Building", which had previously been closed after the September 11, 2001 attacks for security reasons. A memorial to the victims of the 9/11 attacks is located on the grounds, as three of the four hijacked planes were originally destined for LAX. The Bob Hope USO expanded and relocated to the first floor of the Theme Building in 2018.
Recent and future developments
LAWA currently has several plans to modernize LAX, at a cost of $30 billion. These include terminal and runway improvements, which will "enhance the passenger experience, reduce overcrowding, and provide airport access to the latest class of very large passenger aircraft"; this will bring the number of LAX's total gates from 146 to 182.
Recently completed improvements include:
Renovations of Terminal 1 (completed 2018), Terminals 7 and 8 (completed 2019), and Terminals 2 and 3 (completed 2023).
Terminal 1.5, a junction building connecting Terminals 1 and 2, with a bus gate to take passengers to boarding gates in the Tom Bradley International Terminal (completed 2021)
The Midfield Satellite Concourse (aka West Gates at Tom Bradley International Terminal) adding 15 gates (completed 2021)
The Economy Parking facility, a 4,300-stall parking structure with passenger pick-up/drop-off areas, to later be connected to the terminal area by the APM (completed 2021)
A new Los Angeles Airport Police headquarters (completed 2021)
Future improvements include:
Reconstruction of Terminals 4 and 5, and modernization of Terminal 6 (all under construction)
Expansion of the Midfield Satellite Concourse adding 8 narrow-body gates (under construction)
LAX Automated People Mover (APM) (under construction)
LAX/Metro Transit Center station, a Los Angeles Metro Rail and bus station, connected to the terminal area by the APM (under construction)
LAX Consolidated Rent-A-Car Facility, connected to the terminal area by the APM (under construction)
Roadway improvements, providing improved access to the above facilities and the Central Terminal Area (under construction)
Airlines and destinations
Passenger
Los Angeles International Airport has non-stop passenger flights scheduled to 112 domestic and 79 international destinations in 41 countries, operated by 71 airlines.
Cargo
Traffic and statistics
It is the world's eighth-busiest airport by passenger traffic and eleventh-busiest by cargo traffic, serving over 87 million passengers and 2 million tons of freight and mail in 2018. It is the busiest airport in the state of California, and the fifth-busiest (2022) airport by passenger boardings in the United States. In terms of international passengers, it is the second-busiest airport in the United States, behind only JFK in New York City.
The number of aircraft movements (landings and takeoffs) was 700,362 in 2017, the third most of any airport in the world.
Top domestic destinations
Top international destinations
Airline market share
Ground transportation and access
Transiting between terminals
In the secure area of the airport, tunnels or above-ground connectors link all the terminals except for the regional terminal.
LAX Shuttle route A operates in a counter-clockwise loop around the Central Terminal Area, providing frequent service for connecting passengers. However, connecting passengers who use these shuttles must leave and then later re-enter security.
LAX Shuttle routes
LAX operates several shuttle routes to connect passengers and employees around the airport area:
Route A – Terminal Connector operates in a counter-clockwise loop around the Central Terminal Area, providing frequent service for connecting passengers. However, connecting passengers who use these shuttles must leave and then later re-enter security.
Route C – City Bus Center connects the Central Terminal Area and the LAX City Bus Center which is served by transit buses from Beach Cities Transit, Culver CityBus, Los Angeles Metro, Santa Monica Big Blue Bus and Torrance Transit. Buses on this route also serve the Employee South Lot.
Route E – Economy Parking connects the Central Terminal Area and the West Intermodal Transportation Facility, the airport's economy parking garage.
Route M – Metro Connector connects the Central Terminal Area with the Aviation/LAX station on the Metro C Line and the Aviation/Century station on the C Line and K Line. Buses also stop at the "Remote Rental Car Depot", a bus stop served by shuttles to smaller rental car companies.
Route X – LAX Employee Lots connects the Central Terminal Area and the Employee Parking Lots. The route has three service patterns: the East Lot route only stops at Terminals 1, 2, 3, and B; the West Lot route only stops at Terminals 4, 5, 6, and 7; and the South Lot route stops at all terminals and also stops at the City Bus Center as Route C.
Transit buses
Most transit buses operate from the LAX City Bus Center, which is located away from the Central Terminal Area on 96th Street, east of Sepulveda Boulevard.
LAX Shuttle route C offers free connections between the LAX City Bus Center and the Central Terminal Area.
The LAX City Bus Center is served by Beach Cities Transit line 109 to Redondo Beach, Culver CityBus lines 6 and Rapid 6 to Culver City and UCLA, Los Angeles Metro Bus lines to South Gate, to Norwalk, to Downey and to Long Beach, Santa Monica Big Blue Bus lines 3 and Rapid 3 to Santa Monica, and Torrance Transit line 8 to Torrance. During the overnight hours, Los Angeles Metro line offers service to Downtown Los Angeles.
The LAX City Bus Center will eventually be replaced by the LAX/Metro Transit Center station, which will be connected to the rest of LAX by the Automated People Mover system.
There is also a bus stop at Sepulveda Boulevard and Century Boulevard, a walk away from Terminals 1 and 7/8, which is served by LADOT Commuter Express line to Sylmar and Encino. This bus stop is also served by some of the same routes as the LAX City Bus Center: Los Angeles Metro lines 40 (overnight only), 117 and 232 and Torrance Transit line 8.
FlyAway Bus
The FlyAway bus is a nonstop motorcoach/shuttle service run by LAWA, which provides scheduled service between LAX and Union Station in Downtown LA or the FlyAway terminal at the Van Nuys Airport in the San Fernando Valley.
FlyAway buses stop at every LAX terminal in a counter-clockwise direction, starting at terminal 1. The service hours vary based on the line, with most leaving on or near the top of the hour. Buses use the regional system of high-occupancy vehicle lanes and high-occupancy toll lanes (Metro ExpressLanes) to expedite their trips.
Metro Rail and the LAX Automated People Mover
LAX does not currently have a direct connection to the Los Angeles Metro Rail system. LAX Shuttle route G offers free connections between the Central Terminal Area and the Aviation/LAX station on the C Line, away.
The LAX Automated People Mover (APM), currently under construction by LAWA, is a rail line that will connect the terminal area with long- and short-term parking facilities, a connection to the Los Angeles Metro Rail and other transit at the LAX/Metro Transit Center, and a consolidated facility for all airport rental car agencies.
The APM project is estimated to cost $5.5 billion and is scheduled to begin operation in 2025, with the connection to Metro Rail opening thereafter.
LAWA does not operate shuttles to get to the Metro K Line; however, one seeking to get to/from LAX and the K Line can travel to Aviation/LAX station on LAWA Route M (Metro Connector), and from there take the C and K Line Link (line 857) to Westchester/Veterans station while the rest of the K Line connecting to the APM is being built.
Freeways and roads
LAX's terminals are immediately west of the interchange between Century Boulevard and Sepulveda Boulevard (State Route 1). Interstate 405 can be reached to the east via Century Boulevard. Interstate 105 is to the south via Sepulveda Boulevard, through the Airport Tunnel that crosses under the airport runways.
Taxis, ride-share and private shuttles
Arriving passengers take a shuttle or walk to the LAXit waiting area east of Terminal 1 for taxi or ride-share pickups. Taxi services are operated by nine city-authorized taxi companies and regulated by Authorized Taxicab Supervision Inc. (ATS). ATS queues up taxis at the LAXit waiting area.
A number of private shuttle companies also offer limousine and bus services to LAX.
Other facilities
The airport has the administrative offices of Los Angeles World Airports.
Continental Airlines once had its corporate headquarters on the airport property. At a 1962 press conference in the office of Mayor of Los Angeles Sam Yorty, Continental Airlines announced that it planned to move its headquarters to Los Angeles in July 1963. In 1963 Continental Airlines headquarters moved to a two-story, $2.3 million building on the grounds of the airport. The July 2009 Continental Magazine issue stated that the move "underlined Continental Airlines western and Pacific orientation". On July 1, 1983 the airline's headquarters were relocated to the America Tower in the Neartown area of Houston.
In addition to Continental Airlines, Western Airlines and Flying Tiger Line also had their headquarters at LAX.
Flight Path Museum LAX
The Flight Path Museum LAX, formerly known as the Flight Path Learning Center, is a museum located at 6661 Imperial Highway in a building formerly known as the "West Imperial Terminal". The building used to house some charter flights and sat empty for 10 years until it was re-opened as a learning center for LAX.
The center contains information on the history of aviation, several pictures of the airport, as well as aircraft scale models, flight attendant uniforms, and general airline memorabilia such as playing cards, china, magazines, signs, and a TWA gate information sign.
The museum's library contains an extensive collection of rare items such as aircraft manufacturer company newsletters/magazines, technical manuals for both military and civilian aircraft, industry magazines dating back to World War II and before, historic photographs and other invaluable references on aircraft operation and manufacturing.
The museum has on display "The Spirit of Seventy-Six," a DC-3 that flew in commercial airline service before serving as a corporate aircraft for Union 76 Oil Company for 32 years. The plane was built in January 1941 at the Douglas Aircraft Company plant in Santa Monica, which was a major producer of both commercial and military aircraft.
Accidents and incidents
During its history there have been numerous incidents, but only the most notable are summarized below:
1930s
On January 23, 1939, the sole prototype Douglas 7B twin-engine attack bomber, designed and built as a company project, suffered a loss of the vertical fin and rudder during a demonstration flight over Mines Field, flat spun into the parking lot of North American Aviation, and burned. Another source states that the test pilot, in an attempt to impress the Gallic passenger, attempted a snap roll at low altitude with one engine feathered, resulting in a fatal spin. Douglas test pilot Johnny Cable bailed out at 300 feet; his chute unfurled but did not have time to deploy, and he was killed on impact. The flight engineer, John Parks, rode the airframe down and died, but 33-year-old French Air Force Capt. Paul Chemidlin, riding in the aft fuselage near the top turret, survived with a broken leg, severe back injuries, and a slight concussion. The presence of Chemidlin, a representative of a foreign purchasing mission, caused a furor among isolationists in Congress over neutrality and export laws. The type was developed as the Douglas DB-7.
1940s
On June 1, 1940, the first Douglas R3D-1 for the U.S. Navy, BuNo 1901, crashed at Mines Field, before delivery. The Navy later acquired the privately owned DC-5 prototype, from William E. Boeing as a replacement.
On November 20, 1940, the prototype NA-73X Mustang, NX19998, first flown October 26, 1940, by test pilot Vance Breese, crashed. According to P-51 designer Edgar Schmued, the NA-73 was lost because test pilot Paul Balfour refused, before a high-speed test run, to go through the takeoff and flight test procedure with Schmued while the aircraft was on the ground, claiming "one airplane was like another". After making two high-speed passes over Mines Field, he forgot to put the fuel valve on "reserve" and ran out of fuel during the third pass. In the ensuing emergency landing in a freshly plowed field, the wheels dug in and the aircraft flipped over. The airframe was not rebuilt; the second aircraft was used for subsequent testing.
On October 26, 1944, WASP pilot Gertrude Tompkins Silver of the 601st Ferrying Squadron, fifth Ferrying Group, Love Field, Dallas, Texas, departed Los Angeles Airport, in a North American P-51D Mustang, 44-15669, at 1600 hrs PWT, headed for the East Coast. She took off into the wind, into an offshore fog bank, and was expected that night at Palm Springs. She never arrived. Owing to a paperwork foul-up, a search did not get under way for several days, and while the eventual search of land and sea was massive, it failed to find a trace of Silver or her plane. She is the only missing WASP pilot. She had married Sgt. Henry Silver one month before her disappearance.
1950s
On June 30, 1956, United Airlines Flight 718 collided with TWA Flight 2 over the Grand Canyon, killing 128 people. Both aircraft departed LAX, with Flight 718 bound for Chicago Midway and Flight 2 bound for Kansas City. The cause was found to be issues within the US air traffic control system and aviation law.
1960s
On January 13, 1969, Scandinavian Airlines System Flight 933, a Douglas DC-8-62, crashed into Santa Monica Bay, approximately west of LAX at 7:21 pm, local time. The aircraft was operating as flight SK933, nearing the completion of a flight from Seattle. Of nine crewmembers, three drowned, while 12 of the 36 passengers also drowned.
On January 18, 1969, United Airlines Flight 266, a Boeing 727-100 bearing the registration number N7434U, crashed into Santa Monica Bay approximately west of LAX at 6:21 pm local time. The aircraft was destroyed, resulting in the death of all 32 passengers and six crew members aboard.
1970s
On the evening of June 6, 1971, Hughes Airwest Flight 706, a Douglas DC-9 jetliner that had departed LAX on a flight to Salt Lake City, Utah, was struck nine minutes after takeoff by a U.S. Marine Corps McDonnell Douglas F-4 Phantom II fighter jet over the San Gabriel Mountains. The midair collision killed all 44 passengers and five crew members aboard the DC-9 airliner and one of two crewmen aboard the military jet.
On August 4, 1971, Continental Airlines Flight 712, a Boeing 707, collided in midair with a Cessna 150 over Compton. Although the Cessna was destroyed upon landing, there were no fatalities.
On August 6, 1974, a bomb exploded near the Pan Am ticketing area at Terminal 2; three people were killed and 35 were injured.
On March 1, 1978, two tires burst in succession on a McDonnell Douglas DC-10-10 on Continental Airlines Flight 603 during its takeoff roll at LAX and the plane, bound for Honolulu, veered off the runway. A third tire burst and the DC-10's left landing gear collapsed, causing a fuel tank to rupture. Following the aborted takeoff, spilled fuel ignited and enveloped the center portion of the aircraft in flames. During the ensuing emergency evacuation, a husband and wife died when they exited the passenger cabin onto the wing and dropped down directly into the flames. Two additional passengers died of their injuries approximately three months after the accident; 74 others aboard the plane were injured, as were 11 firemen battling the fire.
On the evening of March 10, 1979, Swift Aire Flight 235, a twin-engine Aerospatiale Nord 262A-33 turboprop en route to Santa Maria, was forced to ditch in Santa Monica Bay after experiencing engine problems upon takeoff from LAX. The pilot, co-pilot, and a female passenger drowned when they were unable to exit the aircraft after the ditching. The female flight attendant and the three remaining passengers—two men and a pregnant woman—survived and were rescued by several pleasure boats and other watercraft in the vicinity.
1980s
In January 1985, a woman was found dead in a suitcase that had been lying on the baggage carousel for some time. The suitcase had arrived on a Lufthansa flight. The woman was later discovered to have been an Iranian citizen who had recently married another Iranian with US green card status. She had been denied a US visa in West Germany and therefore decided to enter the US in this way.
On August 31, 1986, Aeroméxico Flight 498, a DC-9 en route from Mexico City, Mexico, to Los Angeles, began its descent into LAX when a Piper Cherokee collided with the DC-9's left horizontal stabilizer over Cerritos, causing the DC-9 to crash into a residential neighborhood. All 67 people on the two aircraft were killed, in addition to 15 people on the ground. 5 homes were destroyed and an additional 7 were damaged by the crash and resulting fire. The Piper went down in a nearby schoolyard and caused no further injuries on the ground. As a result of this incident, the FAA required all commercial aircraft to be equipped with Traffic Collision Avoidance System (TCAS).
1990s
On February 1, 1991, USAir Flight 1493 (arriving from Columbus, Ohio), a Boeing 737-300, landing on runway 24L at LAX, collided on touchdown with SkyWest Airlines Flight 5569, a Fairchild Metroliner, preparing to depart to Palmdale. The collision was caused by a controller who told the SkyWest plane to wait on the runway for takeoff, then later gave the USAir plane clearance to land on the same runway, forgetting that the SkyWest plane was there. The collision killed all 12 occupants of the SkyWest plane and 23 of the 89 people aboard the USAir 737.
2000s
Al-Qaeda attempted to bomb LAX on New Year's Eve 1999/2000. The bomber, Algerian Ahmed Ressam, was captured at the U.S. port of entry in Port Angeles, Washington, with a cache of explosives hidden in the trunk of the rented car in which he had traveled from Canada; the explosives could have produced a blast 40 times greater than that of a car bomb. He had planned to leave one or two suitcases filled with explosives in an LAX passenger waiting area. He was initially sentenced to 22 years in prison, but in February 2010 an appellate court ordered that his sentence be extended.
On January 31, 2000, Alaska Airlines Flight 261 attempted to land at LAX after experiencing problems with its tail-mounted horizontal stabilizer. Before the plane could divert to Los Angeles, it suddenly plummeted into the Pacific Ocean approximately north of Anacapa Island, off the California coast, killing all 88 people aboard.
During the September 11 attacks, American Airlines Flight 11, United Airlines Flight 175 and American Airlines Flight 77 were destined for LAX when they were hijacked mid-flight by Al-Qaeda terrorists. Flight 11 and Flight 175 were deliberately crashed into the Twin Towers of the World Trade Center, and Flight 77 was deliberately crashed into the Pentagon.
In the 2002 Los Angeles International Airport shooting of July 4, 2002, Hesham Mohamed Hadayet killed two Israelis at the ticket counter of El Al Airlines at LAX. Although the gunman was not linked to any terrorist group, the man was upset at U.S. support for Israel, and therefore was motivated by political disagreement. This led the FBI to classify this shooting as a terrorist act, one of the first on U.S. soil since the September 11 attacks.
On September 21, 2005, JetBlue Flight 292, an Airbus A320 discovered a problem with its landing gear as it took off from Bob Hope Airport in Burbank. It flew in circles for three hours to burn off fuel, then landed safely at Los Angeles International Airport on runway 25L, balancing on its back wheels as it rolled down the center of the runway. Passengers were able to watch their own coverage live from the satellite broadcast on JetBlue in-flight TV seat displays of their plane as it made an emergency landing with the front landing gear visibly becoming damaged. Because JetBlue did not serve LAX at the time, the aircraft was evaluated and repaired at a Continental Airlines hangar.
On 19 December 2005, Air India Flight 136, a Boeing 747-400M (registered as VT-AIM) flying from Los Angeles to Delhi via Frankfurt, suffered a tire blowout after take-off. The plane dumped fuel and made an emergency landing back at Los Angeles. There were no injuries among the 267 passengers and crew; however, a woman passenger was hospitalized after fainting on landing.
On June 2, 2006, an American Airlines Boeing 767 was about to complete a flight from John F. Kennedy International Airport in New York City when the plane's pilots noted that the number 1 engine lagged the number 2 engine by 2 percent. The plane landed safely and the passengers disembarked, but during a subsequent maintenance run, as personnel retarded the throttle to idle from maximum power, the number one engine suffered an uncontained rupture of the high-pressure turbine stage 1 disk, causing the engine to explode. There were no injuries among the three people on board the aircraft at the time (all of them maintenance workers), but the airplane was written off.
On July 29, 2006, after America West Express Flight 6008, a Canadair Regional Jet operated by Mesa Airlines from Phoenix, Arizona, landed on runway 25L, controllers instructed the pilot to leave the runway on a taxiway known as "Mike" and stop short of runway 25R. Even though the pilot read back the instructions correctly, he accidentally taxied onto 25R and into the path of a departing SkyWest Airlines Embraer EMB-120 operating United Express Flight 6037 to Monterey. They cleared each other by and nobody was hurt.
On August 16, 2007, a runway incursion occurred between WestJet Flight 900 and Northwest Airlines Flight 180 on runways 24R and 24L, respectively, with the aircraft coming within of each other. The planes were carrying a combined total of 296 people, none of whom were injured. The NTSB concluded that the incursion was the result of controller error. In September 2007, FAA Administrator Marion Blakey stressed the need for LAX to increase lateral separation between its pair of north runways in order to preserve the safety and efficiency of the airport.
2010s
On October 13 and 14, 2013, two incidents of dry ice bomb explosions occurred at the airport. The first dry ice bomb exploded at 7:00 p.m. in an employee restroom in Terminal 2, with no injuries. Terminal 2 was briefly shut down as a result. On the next day at 8:30 p.m., a dry ice bomb exploded on the ramp area near the Tom Bradley International Terminal, also without injuries. Two other plastic bottles containing dry ice were found at the scene during the second explosion. On October 15, a 28-year-old airport employee was arrested in connection with the explosions and was booked on charges of possession of an explosive or destructive device near an aircraft. On October 18, a 41-year-old airport employee was arrested in connection with the second explosion, and was booked on suspicion of possessing a destructive device near an aircraft. Authorities believe that the incidents were not linked to terrorism. Both men subsequently pleaded no contest and were each sentenced to three years' probation. The airport workers had removed dry ice from a cargo hold into which a dog was to be loaded, because of fears that the dry ice could harm the animal.
In the 2013 Los Angeles International Airport shooting of November 1, 2013, at around 9:31 a.m. PDT, a lone gunman entered Terminal 3 and opened fire with a semi-automatic rifle, killing a Transportation Security Administration (TSA) officer and wounding three other people. The gunman was later apprehended and taken into custody. Until the situation was clarified and under control, a few terminals at the airport were evacuated, all inbound flights were diverted and all outbound flights were grounded until the airport began returning to normal operation at around 2:30 p.m.
On August 28, 2016, there was a false report of shots fired throughout the airport, causing a temporary lockdown and about 3 hours of flight delays.
On May 20, 2017, Aeroméxico Flight 642, a Boeing 737-800, collided with a utility truck on a taxiway near Runway 25R, injuring 8 people, two of them seriously.
On July 25, 2018, jet blast from a Dash 8 caused several dollies to crash into a United Airlines 737.
On November 21, 2019, Philippine Airlines Flight 113, operated by a Boeing 777-300ER suffered an engine compressor stall shortly after take off from the airport's Runway 25R, forcing the flight to return. The flight made a successful emergency landing just 13 minutes after departure. There were 342 passengers and 18 crew on board the flight, with no injuries reported.
2020s
On August 19, 2020, FedEx Express Flight 1026, a Boeing 767, made an emergency landing when its left main landing gear failed to extend. One of the pilots was injured while leaving the aircraft.
On October 28, 2021, more than 300 passengers were forced to flee onto the tarmac after a report of a person with a gun at Terminal 1. Two people were injured, and flights were temporarily suspended. No weapons were found, but two people were arrested and taken into custody by airport police.
On February 10, 2023, an American Airlines Airbus A321 was being towed without any passengers when it collided with a passenger bus, injuring five people who were riding on the bus.
On July 8, 2024, a Boeing 757-200 of United Airlines, registration N14107, was in the initial climb out of runway 25R bound for Denver when one of the main wheels detached. The aircraft continued to Denver and landed safely with no casualties.
Aircraft spotting
The "Imperial Hill" area of El Segundo is a prime location for aircraft spotting, especially for takeoffs. Part of the Imperial Hill area has been set aside as a city park, Clutter's Park.
Another popular spotting location sits under the final approach for runways 24 L&R on a lawn next to the Westchester In-N-Out Burger on Sepulveda Boulevard. This is one of the few remaining locations in Southern California from which spotters may watch such a wide variety of low-flying commercial airliners from directly underneath a flight path.
Another aircraft spotting location is at a small park in the take-off pattern that normally goes out over the Pacific. The park is on the east side of the street Vista Del Mar, from which it takes its name, Vista Del Mar Park.
Space Shuttle Endeavour
At 12:51 p.m. on Friday, September 21, 2012, a Shuttle Carrier Aircraft carrying the Space Shuttle Endeavour landed at LAX on runway 25L. An estimated 10,000 people saw the shuttle land. Interstate 105 was backed up for miles at a standstill. Imperial Highway was shut down for spectators. It was quickly taken off the Shuttle Carrier Aircraft, a modified Boeing 747, and was moved to a United Airlines hangar. The shuttle spent about a month in the hangar while it was prepared to be transported to the California Science Center.
In popular culture
Numerous films and television shows have been set or filmed partially at LAX, at least partly due to the airport's proximity to Hollywood studios and Los Angeles. Film shoots at the Los Angeles airports, including LAX, produced $590 million for the Los Angeles region from 2002 to 2005.
Logical connective

In logic, a logical connective (also called a logical operator, sentential connective, or sentential operator) is a logical constant. Connectives can be used to connect logical formulas. For instance in the syntax of propositional logic, the binary connective ∨ can be used to join the two atomic formulas p and q, rendering the complex formula p ∨ q.
Common connectives include negation, disjunction, conjunction, implication, and equivalence. In standard systems of classical logic, these connectives are interpreted as truth functions, though they receive a variety of alternative interpretations in nonclassical logics. Their classical interpretations are similar to the meanings of natural language expressions such as English "not", "or", "and", and "if", but not identical. Discrepancies between natural language connectives and those of classical logic have motivated nonclassical approaches to natural language meaning as well as approaches which pair a classical compositional semantics with a robust pragmatics.
A logical connective is similar to, but not equivalent to, a syntax commonly used in programming languages called a conditional operator.
Overview
In formal languages, truth functions are represented by unambiguous symbols. This allows logical statements to be understood without ambiguity. These symbols are called logical connectives, logical operators, propositional operators, or, in classical logic, truth-functional connectives. For the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives, see well-formed formula.
Logical connectives can be used to link zero or more statements, so one can speak about n-ary logical connectives. The Boolean constants True and False can be thought of as zero-ary (nullary) operators. Negation is a unary connective, and so on.
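To make the notion of arity concrete, the following minimal Python sketch (the function names are chosen only for this illustration) models a nullary, a unary, and two binary connectives as truth functions:

```python
# Connectives modelled as truth functions of various arities.

def verum() -> bool:                 # zero-ary (nullary): the constant True
    return True

def not_(p: bool) -> bool:           # unary: negation
    return not p

def and_(p: bool, q: bool) -> bool:  # binary: conjunction
    return p and q

def or_(p: bool, q: bool) -> bool:   # binary: disjunction
    return p or q

print(not_(verum()), and_(True, False), or_(True, False))  # False False True
```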
List of common logical connectives
Commonly used logical connectives include the following ones.
Negation (not): ¬, ~, N (prefix), in which ¬ is the most modern and widely used, and ~ is also common;
Conjunction (and): ∧, &, K (prefix), in which ∧ is the most modern and widely used;
Disjunction (or): ∨, A (prefix), in which ∨ is the most modern and widely used;
Implication (if...then): →, ⊃, ⇒, C (prefix), in which → is the most modern and widely used, and ⊃ is also common;
Equivalence (if and only if): ↔, ≡, ⇔, E (prefix), in which ↔ is the most modern and widely used, and ≡ is also common.
For example, the meaning of the statements it is raining (denoted by p) and I am indoors (denoted by q) is transformed, when the two are combined with logical connectives:
It is not raining (¬p);
It is raining and I am indoors (p ∧ q);
It is raining or I am indoors (p ∨ q);
If it is raining, then I am indoors (p → q);
If I am indoors, then it is raining (q → p);
I am indoors if and only if it is raining (q ↔ p).
It is also common to consider the always true formula ⊤ and the always false formula ⊥ to be connectives (in which case they are nullary).
True formula: , , (prefix), or ;
False formula: , , (prefix), or .
This table summarizes the terminology:
History of notations
Negation: the symbol ¬ appeared in Heyting in 1930 (compare to Frege's symbol ⫟ in his Begriffsschrift); the symbol ~ appeared in Russell in 1908; an alternative notation is to add a horizontal line on top of the formula, as in p̄; another alternative notation is to use a prime symbol, as in p′.
Conjunction: the symbol ∧ appeared in Heyting in 1930 (compare to Peano's use of the set-theoretic notation of intersection ∩); the symbol & appeared at least in Schönfinkel in 1924; the symbol · comes from Boole's interpretation of logic as an elementary algebra.
Disjunction: the symbol ∨ appeared in Russell in 1908 (compare to Peano's use of the set-theoretic notation of union ∪); the symbol + is also used, in spite of the ambiguity coming from the fact that the + of ordinary elementary algebra is an exclusive or when interpreted logically in a two-element ring; at certain points in history a + together with a dot in the lower right corner has been used by Peirce (Peirce 1867, "On an improvement in Boole's calculus of logic").
Implication: the symbol → appeared in Hilbert in 1918; ⊃ was used by Russell in 1908 (compare to Peano's Ɔ, the inverted C); ⇒ appeared in Bourbaki in 1954.
Equivalence: the symbol ≡ appeared in Frege in 1879; ↔ in Becker in 1933 (not the first time, and for this see the following); ⇔ appeared in Bourbaki in 1954; other symbols appeared at certain points in history, such as in Gentzen, in Schönfinkel or in Chazal.
True: the symbol comes from Boole's interpretation of logic as an elementary algebra over the two-element Boolean algebra; other notations include (abbreviation for the Latin word "verum") to be found in Peano in 1889.
False: the symbol comes also from Boole's interpretation of logic as a ring; other notations include (rotated ) to be found in Peano in 1889.
Some authors used letters for connectives: for conjunction (German's "und" for "and") and for disjunction (German's "oder" for "or") in early works by Hilbert (1904); for negation, for conjunction, for alternative denial, for disjunction, for implication, for biconditional in Łukasiewicz in 1929.
Redundancy
Such a logical connective as converse implication "←" is actually the same as material conditional with swapped arguments; thus, the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic), certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is the classical equivalence between ¬p ∨ q and p → q. Therefore, a classical-based logical system does not need the conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as a syntactic sugar for a compound having one negation and one disjunction.
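This classical equivalence can be confirmed mechanically. The short Python check below assumes nothing beyond the truth tables themselves: it enumerates the four assignments to p and q and verifies that ¬p ∨ q reproduces the material conditional.

```python
from itertools import product

# Truth table of the material conditional p -> q, given explicitly.
conditional = {(False, False): True, (False, True): True,
               (True, False): False, (True, True): True}

# Check that (not p) or q reproduces it on every assignment.
for p, q in product([False, True], repeat=2):
    assert conditional[(p, q)] == ((not p) or q)
print("p -> q and (not p) or q agree on all four assignments")
```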
There are sixteen Boolean functions associating the input truth values p and q with four-digit binary outputs. These correspond to possible choices of binary logical connectives for classical logic. Different implementations of classical logic can choose different functionally complete subsets of connectives.
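The count of sixteen follows because a binary connective is determined by its output column of four truth values, so there are 2⁴ = 16 possibilities. A brief Python enumeration, given only as an illustration, makes the counting explicit:

```python
from itertools import product

inputs = list(product([False, True], repeat=2))   # the four input pairs (p, q)
columns = list(product([False, True], repeat=4))  # every possible output column

print(len(columns))   # 16 -- one for each binary connective of classical logic
tables = [dict(zip(inputs, column)) for column in columns]
print(tables[0][(True, True)])   # the first table maps every input pair to False
```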
One approach is to choose a minimal set, and define other connectives by some logical form, as in the example with the material conditional above.
The following are the minimal functionally complete sets of operators in classical logic whose arities do not exceed 2:
One element {↑}, {↓}.
Two elements , , , , , , , , , , , , , , , , , .
Three elements , , , , , .
Another approach is to use with equal rights connectives of a certain convenient and functionally complete, but not minimal set. This approach requires more propositional axioms, and each equivalence between logical forms must be either an axiom or provable as a theorem.
The situation, however, is more complicated in intuitionistic logic. Of its five connectives, {∧, ∨, →, ¬, ⊥}, only negation "¬" can be reduced to other connectives (see for more). Neither conjunction, disjunction, nor material conditional has an equivalent form constructed from the other four logical connectives.
Natural language
The standard logical connectives of classical logic have rough equivalents in the grammars of natural languages. In English, as in many languages, such expressions are typically grammatical conjunctions. However, they can also take the form of complementizers, verb suffixes, and particles. The denotations of natural language connectives are a major topic of research in formal semantics, a field that studies the logical structure of natural languages.
The meanings of natural language connectives are not precisely identical to their nearest equivalents in classical logic. In particular, disjunction can receive an exclusive interpretation in many languages. Some researchers have taken this fact as evidence that natural language semantics is nonclassical. However, others maintain classical semantics by positing pragmatic accounts of exclusivity which create the illusion of nonclassicality. In such accounts, exclusivity is typically treated as a scalar implicature. Related puzzles involving disjunction include free choice inferences, Hurford's Constraint, and the contribution of disjunction in alternative questions.
Other apparent discrepancies between natural language and classical logic include the paradoxes of material implication, donkey anaphora and the problem of counterfactual conditionals. These phenomena have been taken as motivation for identifying the denotations of natural language conditionals with logical operators including the strict conditional, the variably strict conditional, as well as various dynamic operators.
The following table shows the standard classically definable approximations for the English connectives.
Properties
Some logical connectives possess properties that may be expressed in the theorems containing the connective. Some of those properties that a logical connective may have are:
Associativity: Within an expression containing two or more of the same associative connectives in a row, the order of the operations does not matter as long as the sequence of the operands is not changed.
Commutativity: The operands of the connective may be swapped while preserving logical equivalence to the original expression.
Distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a · b) + (a · c) for all operands a, b, c.
Idempotence: Whenever the operands of the operation are the same, the compound is logically equivalent to the operand.
Absorption: A pair of connectives ∧, ∨ satisfies the absorption law if a ∧ (a ∨ b) = a for all operands a, b.
Monotonicity: If f(a1, ..., an) ≤ f(b1, ..., bn) for all a1, ..., an, b1, ..., bn ∈ {0,1} such that a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn. E.g., ∨, ∧, ⊤, ⊥.
Affinity: Each variable always makes a difference in the truth-value of the operation, or it never makes a difference. E.g., ¬, ↔, ↮, ⊤, ⊥.
Duality: To read the truth-value assignments for the operation from top to bottom on its truth table is the same as taking the complement of reading the table of the same or another connective from bottom to top. Without resorting to truth tables it may be formulated as ¬g(¬a1, ..., ¬an) = h(a1, ..., an). E.g., ¬.
Truth-preserving: The compound all of whose arguments are tautologies is itself a tautology. E.g., ∨, ∧, ⊤, →, ↔, ⊂ (see validity).
Falsehood-preserving: The compound all of whose arguments are contradictions is itself a contradiction. E.g., ∨, ∧, ↮, ⊥, ⊄, ⊅ (see validity).
Involutivity (for unary connectives): f(f(a)) = a. E.g. negation in classical logic.
For classical and intuitionistic logic, the "=" symbol means that corresponding implications "...→..." and "...←..." for logical compounds can be both proved as theorems, and the "≤" symbol means that "...→..." for logical compounds is a consequence of corresponding "...→..." connectives for propositional variables. Some many-valued logics may have incompatible definitions of equivalence and order (entailment).
Both conjunction and disjunction are associative, commutative and idempotent in classical logic, most varieties of many-valued logic and intuitionistic logic. The same is true about distributivity of conjunction over disjunction and disjunction over conjunction, as well as for the absorption law.
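For the classical two-valued case these laws admit a finite, exhaustive check. The Python sketch below is only such a brute-force verification (it says nothing about many-valued or intuitionistic logic), covering commutativity, associativity, idempotence, distributivity, and absorption for ∧ and ∨:

```python
from itertools import product

AND = lambda a, b: a and b
OR = lambda a, b: a or b

for a, b, c in product([False, True], repeat=3):
    for f, g in [(AND, OR), (OR, AND)]:
        assert f(a, b) == f(b, a)                    # commutativity
        assert f(a, f(b, c)) == f(f(a, b), c)        # associativity
        assert f(a, a) == a                          # idempotence
        assert f(a, g(b, c)) == g(f(a, b), f(a, c))  # distributivity over the other
        assert f(a, g(a, b)) == a                    # absorption
print("all five laws hold for classical conjunction and disjunction")
```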
In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is self-dual; negation is also self-dual in intuitionistic logic.
Order of precedence
As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than →. So for example, ¬P ∧ Q ∨ R → S is short for (((¬P) ∧ Q) ∨ R) → S.
Here is a table that shows a commonly used precedence of logical operators.
However, not all compilers use the same order; for instance, an ordering in which disjunction is lower precedence than implication or bi-implication has also been used. Sometimes the precedence between conjunction and disjunction is left unspecified, requiring it to be made explicit in a given formula with parentheses. The order of precedence determines which connective is the "main connective" when interpreting a non-atomic formula.
Table and Hasse diagram
The 16 logical connectives can be partially ordered to produce the following Hasse diagram.
The partial order is defined by declaring that x ≤ y if and only if whenever x holds, then so does y.
Applications
Logical connectives are used in computer science and in set theory.
Computer science
A truth-functional approach to logical operators is implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates; see more details in Truth function in computer science. Logical operators over bit vectors (corresponding to finite Boolean algebras) are bitwise operations.
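As a small concrete illustration, Python's bitwise operators apply the corresponding connective to every bit position of an integer at once; the operand values below are arbitrary:

```python
a = 0b1100
b = 0b1010

print(bin(a & b))        # 0b1000 -- bitwise conjunction
print(bin(a | b))        # 0b1110 -- bitwise disjunction
print(bin(a ^ b))        # 0b110  -- bitwise exclusive or
print(bin(~a & 0b1111))  # 0b11   -- bitwise negation, masked to four bits
```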
But not every usage of a logical connective in computer programming has a Boolean semantic. For example, lazy evaluation is sometimes implemented for conjunction and disjunction, so these connectives are not commutative if either or both of the expressions they connect have side effects. Also, a conditional, which in some sense corresponds to the material conditional connective, is essentially non-Boolean because for if (P) then Q;, the consequent Q is not executed if the antecedent P is false (although a compound as a whole is successful ≈ "true" in such case). This is closer to intuitionist and constructivist views on the material conditional, rather than to classical logic's views.
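The effect is easy to observe. In the Python sketch below, noisy_false is a hypothetical helper that prints when evaluated; swapping the operands of and changes whether that side effect runs, even though logical conjunction itself is commutative:

```python
def noisy_false() -> bool:
    print("evaluated")      # a visible side effect
    return False

False and noisy_false()     # prints nothing: the right operand is never evaluated
noisy_false() and False     # prints "evaluated": the left operand always runs
```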
Set theory
Logical connectives are used to define the fundamental operations of set theory, as follows:
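Stated in the usual set-builder notation, the standard correspondence is:
Intersection: A ∩ B = {x : x ∈ A ∧ x ∈ B} (conjunction)
Union: A ∪ B = {x : x ∈ A ∨ x ∈ B} (disjunction)
Complement: A′ = {x ∈ U : ¬(x ∈ A)} (negation, relative to a universe U)
Subset: A ⊆ B if and only if ∀x (x ∈ A → x ∈ B) (implication)
Set equality: A = B if and only if ∀x (x ∈ A ↔ x ∈ B) (biconditional)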
This definition of set equality is equivalent to the axiom of extensionality.
Propositional calculus

The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. Sometimes, it is called first-order propositional logic to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below.
Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language. While the atomic propositions are typically represented by letters of the alphabet, there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic.
The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld. By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic.
History
Although propositional logic (also called propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic.
Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independent of Leibniz.
Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution.
Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables".
Sentences
Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.
Declarative sentences
Propositional logic deals with statements, which are defined as declarative sentences having truth value. Examples of statements might include:
Wikipedia is a free online encyclopedia that anyone can edit.
London is the capital of England.
All Wikipedia editors speak at least three languages.
Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article.". Such non-declarative sentences have no truth value, and are only dealt with in nonclassical logics, called erotetic and imperative logics.
Compounding sentences with connectives
In propositional logic, a statement can contain one or more other statements as parts. Compound sentences are formed from simpler sentences and express relationships among the constituent sentences. This is done by combining them with logical connectives: the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals, which are formed by using the corresponding connectives to connect propositions. In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional). Examples of such compound sentences might include:
Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction)
It is not true that all Wikipedia editors speak at least three languages. (negation)
Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction)
If sentences lack any logical connectives, they are called simple sentences, or atomic sentences; if they contain one or more logical connectives, they are called compound sentences, or molecular sentences.
Sentential connectives are a broader category that includes logical connectives. Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence, or that inflect a single sentence to create a new sentence. A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition. Philosophers disagree about what exactly a proposition is, as well as about which sentential connectives in natural languages should be counted as logical connectives. Sentential connectives are also called sentence-functors, and logical connectives are also called truth-functors.
Arguments
An argument is defined as a pair of things, namely a set of sentences, called the premises, and a sentence, called the conclusion. The conclusion is claimed to follow from the premises, and the premises are claimed to support the conclusion.
Example argument
The following is an example of an argument within the scope of propositional logic:
Premise 1: If it's raining, then it's cloudy.
Premise 2: It's raining.
Conclusion: It's cloudy.
The logical form of this argument is known as modus ponens, which is a classically valid form. So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining its formalization below.
Validity and soundness
An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true. Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false.
Validity is contrasted with soundness. An argument is sound if, and only if, it is valid and all its premises are true. Otherwise, it is unsound.
Logic, in general, aims to precisely specify valid arguments. This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises, which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true – see below.
Formalization
Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the example argument given above. The formal language for a propositional calculus will be fully specified below, and an overview of proof systems will follow.
Propositional variables
Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the example argument would then be symbolized as follows:
Premise 1: P → Q
Premise 2: P
Conclusion: Q
When P is interpreted as "It's raining" and Q as "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form.
When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P, Q and R) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
Gentzen notation
If we assume that the validity of modus ponens has been accepted as an axiom, then the same can also be depicted like this:
This method of displaying it is Gentzen's notation for natural deduction and sequent calculus. The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as P → Q, P ⊢ Q.
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below.
Language
The language (commonly called ) of a propositional calculus is defined in terms of:
a set of primitive symbols, called atomic formulas, atomic sentences, atoms, placeholders, prime formulas, proposition letters, sentence letters, or variables, and
a set of operator symbols, called connectives, logical connectives, logical operators, truth-functional connectives, truth-functors, or propositional connectives.
A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language , then, is defined either as being identical to its set of well-formed formulas, or as containing that set (together with, for instance, its set of connectives and variables).
Usually the syntax of is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax, while others use them without comment.
Syntax
Given a set of atomic propositional variables , , , ..., and a set of propositional connectives , , , ..., , , , ..., , , , ..., a formula of propositional logic is defined recursively by these definitions:
Definition 1: Atomic propositional variables are formulas.
Definition 2: If c is a propositional connective, and A, B, C, … is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying c to A, B, C, … is a formula.
Definition 3: Nothing else is a formula.
Writing the result of applying c to A, B, C, … in functional notation, as c(A, B, C, …), we have the following as examples of well-formed formulas:
What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition. It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language are built up from the atoms as ultimate building blocks. Composite formulas (all formulas besides atoms) are called molecules, or molecular sentences. (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.)
The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax. In particular, it excludes infinitely long formulas from being well-formed.
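The recursive definitions translate directly into a recursive data type. The Python sketch below (its class and function names are invented for this illustration) mirrors Definitions 1–3: atoms are one case, applications of connectives are another, nothing else is admitted, and a structural function such as counting connectives recurses over a formula exactly as the definitions build it up.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:                      # Definition 1: an atomic propositional variable
    name: str

@dataclass(frozen=True)
class Not:                       # Definition 2: applying the unary connective
    sub: "Formula"

@dataclass(frozen=True)
class BinOp:                     # Definition 2: applying a binary connective
    op: str                      # "and", "or", "implies", "iff"
    left: "Formula"
    right: "Formula"

Formula = Union[Atom, Not, BinOp]    # Definition 3: nothing else is a formula

def connective_count(f: Formula) -> int:
    """Count the connectives in a formula by structural recursion."""
    if isinstance(f, Atom):
        return 0
    if isinstance(f, Not):
        return 1 + connective_count(f.sub)
    return 1 + connective_count(f.left) + connective_count(f.right)

print(connective_count(BinOp("implies", Not(Atom("p")), Atom("q"))))  # 2
```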
CF grammar in BNF
An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language in Backus-Naur form (BNF). This is more common in computer science than in philosophy. It can be done in many ways, of which a particularly brief one, for the common set of five connectives, is this single clause: ⟨formula⟩ ::= ⟨atom⟩ | ¬⟨formula⟩ | (⟨formula⟩ ∧ ⟨formula⟩) | (⟨formula⟩ ∨ ⟨formula⟩) | (⟨formula⟩ → ⟨formula⟩) | (⟨formula⟩ ↔ ⟨formula⟩).
This clause, due to its self-referential nature (since ⟨formula⟩ appears in some branches of the definition of ⟨formula⟩), also acts as a recursive definition, and therefore specifies the entire language. To expand it to add modal operators, one need only add | □⟨formula⟩ | ◇⟨formula⟩ to the end of the clause.
Constants and schemata
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, or schematic letters, however, range over all formulas. (Schematic letters are also called metavariables.) It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.
However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤, called "truth", which always evaluates to True, and the special symbol ⊥, called "falsity", which always evaluates to False. Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors", or equivalently, "nullary connectives".
Semantics
To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted. In classical logic, all propositions evaluate to exactly one of two truth-values: True or False. For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True, while "Wikipedia is a paper encyclopedia" evaluates to False.
In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic. To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic".
Interpretation (case) and argument
For a given language, an interpretation, valuation, or case, is an assignment of semantic values to each formula of the language. For a formal language of classical logic, a case is defined as an assignment, to each formula of the language, of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0). An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation. An interpretation of a formal language for classical logic is often expressed in terms of truth tables. Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is the set of formulas, and whose range is its set of semantic values {T, F}, or {1, 0}.
For n distinct propositional symbols there are 2ⁿ distinct possible interpretations. For any particular symbol a, for example, there are 2¹ = 2 possible interpretations: either a is assigned T, or a is assigned F. And for the pair a, b there are 2² = 4 possible interpretations: either both are assigned T, or both are assigned F, or a is assigned T and b is assigned F, or a is assigned F and b is assigned T. Since the language has ℵ₀, that is, denumerably many, propositional symbols, there are 2^ℵ₀, and therefore uncountably many, distinct possible interpretations of the language as a whole.
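For a finite stock of propositional symbols the 2ⁿ count is easy to reproduce; the symbol names in this brief Python sketch are arbitrary:

```python
from itertools import product

symbols = ["p", "q", "r"]
interpretations = [dict(zip(symbols, values))
                   for values in product([True, False], repeat=len(symbols))]
print(len(interpretations))   # 8, i.e. 2 ** 3
```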
Where an interpretation assigns truth values as above, the definition of an argument, given earlier, may then be stated as a pair consisting of the set of premises and the conclusion. The definition of an argument's validity, i.e. the property that its conclusion follows from its premises, can then be stated as its absence of a counterexample, where a counterexample is defined as a case in which the argument's premises are all true but the conclusion is not true. As will be seen below, this is the same as to say that the conclusion is a semantic consequence of the premises.
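Reading validity as the absence of a counterexample also suggests a crude, exponential-time decision procedure when only finitely many symbols occur. The Python sketch below, whose helper names are illustrative, searches every interpretation for one that makes all premises true and the conclusion false, using modus ponens as the test case:

```python
from itertools import product

def valid(premises, conclusion, symbols):
    """Return True when no interpretation is a counterexample."""
    for values in product([True, False], repeat=len(symbols)):
        case = dict(zip(symbols, values))              # one interpretation
        if all(p(case) for p in premises) and not conclusion(case):
            return False                               # counterexample found
    return True

# Modus ponens: P -> Q, P, therefore Q.
premises = [lambda i: (not i["P"]) or i["Q"],          # P -> Q
            lambda i: i["P"]]                          # P
conclusion = lambda i: i["Q"]                          # Q
print(valid(premises, conclusion, ["P", "Q"]))         # True
```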
Propositional connective semantics
An interpretation assigns semantic values to atomic formulas directly. The value of a molecular formula is a function of the values of its constituent atoms, determined by the connective used; the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those. This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives.
Semantics via truth tables
Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values, the semantic definition of the connectives is usually represented as a truth table for each of the connectives, as seen below:
This table covers each of the five main logical connectives: conjunction (here notated p ∧ q), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators. For more truth tables for more different kinds of connectives, see the article "Truth table".
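As an informal illustration (not part of the article's sources), the following C program computes the same five truth tables by treating 1 as True and 0 as False; the helper function names are made up for the example.

#include <stdio.h>

/* Truth-functional definitions of the five main connectives,
   with 1 standing for True and 0 for False. */
static int neg(int p)           { return !p; }
static int conj(int p, int q)   { return p && q; }
static int disj(int p, int q)   { return p || q; }
static int impl(int p, int q)   { return !p || q; }
static int bicond(int p, int q) { return p == q; }

int main(void) {
    printf(" p q | p^q pvq p->q p<->q ~p\n");
    for (int p = 1; p >= 0; p--)
        for (int q = 1; q >= 0; q--)
            printf(" %d %d |  %d   %d   %d    %d    %d\n",
                   p, q, conj(p, q), disj(p, q), impl(p, q), bicond(p, q), neg(p));
    return 0;
}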
Semantics via assignment expressions
Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, where ℐ(φ) is the interpretation of φ, the five connectives are defined as:
ℐ(¬φ) = T if, and only if, ℐ(φ) = F
ℐ(φ ∧ ψ) = T if, and only if, ℐ(φ) = T and ℐ(ψ) = T
ℐ(φ ∨ ψ) = T if, and only if, ℐ(φ) = T or ℐ(ψ) = T
ℐ(φ → ψ) = T if, and only if, it is true that, if ℐ(φ) = T, then ℐ(ψ) = T
ℐ(φ ↔ ψ) = T if, and only if, it is true that ℐ(φ) = T if, and only if, ℐ(ψ) = T
The interpretation of φ may be written out as ℐ(φ), or, for definitions such as the above, ℐ(φ) = T may be written simply as the English sentence "φ is given the value T". Yet other authors may prefer to speak of a Tarskian model 𝔐 for the language, so that instead they'll use the notation 𝔐 ⊨ φ, which is equivalent to saying ℐ(φ) = T, where ℐ is the interpretation function for 𝔐.
Connective definition methods
Some of these connectives may be defined in terms of others: for instance, implication, p → q, may be defined in terms of disjunction and negation, as ¬p ∨ q; and disjunction may be defined in terms of negation and conjunction, as ¬(¬p ∧ ¬q). In fact, a truth-functionally complete system, in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke), as Jean Nicod did. A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property.
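These definability claims can be checked mechanically. The following C sketch exhaustively compares each defined connective with a direct truth-table definition over all four interpretations of p and q; it is only an illustration, and the helper name nand is made up for the example.

#include <stdio.h>

/* NAND ("not and"), the Sheffer stroke. */
static int nand(int p, int q) { return !(p && q); }

int main(void) {
    int all_ok = 1;
    for (int p = 0; p <= 1; p++)
        for (int q = 0; q <= 1; q++) {
            int impl_direct = p ? q : 1;                              /* p -> q by cases       */
            int impl_ok = ((!p || q) == impl_direct);                 /* ~p v q                */
            int disj_ok = ((p || q) == !(!p && !q));                  /* ~(~p ^ ~q)            */
            int neg_ok  = ((!p) == nand(p, p));                       /* ~p as NAND(p, p)      */
            int conj_ok = ((p && q) == nand(nand(p, q), nand(p, q))); /* p ^ q from NAND alone */
            if (!(impl_ok && disj_ok && neg_ok && conj_ok))
                all_ok = 0;
        }
    printf("all definitions agree in every interpretation: %s\n", all_ok ? "yes" : "no");
    return 0;
}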
Some authors, namely Howson and Cunningham, distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object language 𝓛. Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation. Other authors often do not make this distinction, and may use the word "equivalence", and/or the symbol ⇔, to denote their object language's biconditional connective.
Semantic truth, validity, consequence
Given φ and ψ as formulas (or sentences) of a language 𝓛, and ℐ as an interpretation (or case) of 𝓛, then the following definitions apply:
Truth-in-a-case: A sentence φ of 𝓛 is true under an interpretation ℐ if ℐ assigns the truth value T to φ. If φ is true under ℐ, then ℐ is called a model of φ.
Falsity-in-a-case: φ is false under an interpretation ℐ if, and only if, ¬φ is true under ℐ. This is the "truth of negation" definition of falsity-in-a-case. Falsity-in-a-case may also be defined by the "complement" definition: φ is false under an interpretation ℐ if, and only if, φ is not true under ℐ. In classical logic, these definitions are equivalent, but in nonclassical logics, they are not.
Semantic consequence: A sentence ψ of 𝓛 is a semantic consequence (φ ⊨ ψ) of a sentence φ if there is no interpretation under which φ is true and ψ is not true.
Valid formula (tautology): A sentence φ of 𝓛 is logically valid (⊨ φ), or a tautology, if it is true under every interpretation, or true in every case.
Consistent sentence: A sentence φ of 𝓛 is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. An inconsistent formula is also called self-contradictory, and said to be a self-contradiction, or simply a contradiction, although this latter name is sometimes reserved specifically for statements of the form φ ∧ ¬φ.
For interpretations (cases) ℐ of 𝓛, these definitions are sometimes given:
Complete case: A case ℐ is complete if, and only if, either φ is true-in-ℐ or ¬φ is true-in-ℐ, for any φ in 𝓛.
Consistent case: A case ℐ is consistent if, and only if, there is no φ in 𝓛 such that both φ and ¬φ are true-in-ℐ.
For classical logic, which assumes that all cases are complete and consistent, the following theorems apply:
For any given interpretation, a given formula is either true or false under it.
No formula is both true and false under the same interpretation.
¬φ is true under ℐ if, and only if, φ is false under ℐ; ¬φ is true under ℐ if, and only if, φ is not true under ℐ.
If φ and (φ → ψ) are both true under ℐ, then ψ is true under ℐ.
If ⊨ φ and ⊨ (φ → ψ), then ⊨ ψ.
(φ → ψ) is true under ℐ if, and only if, either φ is not true under ℐ, or ψ is true under ℐ.
φ ⊨ ψ if, and only if, (φ → ψ) is logically valid, that is, if, and only if, ⊨ (φ → ψ).
Proof systems
Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems, according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence (⊨), whereas syntactic proof systems rely on syntactic consequence (⊢). Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system. This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one.
Semantic proof systems
Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ, which indicates that if φ is true, then ψ must also be true, in every possible interpretation.
Truth tables
A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario. By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory. See the section on semantic proof via truth tables below.
Semantic tableaux
A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition. It constructs a tree where each branch represents a possible interpretation of the propositions involved. If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology. See the section on semantic proof via tableaux below.
Syntactic proof systems
Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, , signifies that can be derived from using the rules of the formal system.
Axiomatic systems
An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived. In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms. See the section on syntactic proof via axioms below.
Natural deduction
Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning. Each rule reflects a particular logical connective and shows how it can be introduced or eliminated. See the section on syntactic proof via natural deduction below.
Sequent calculus
The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas. Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.
Semantic proof via truth tables
Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula. If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation). Further, if (and only if) ¬φ is valid, then φ is inconsistent.
For instance, this table shows that "" is not valid:
The computation of the last column of the third line may be displayed as follows:
Further, using the theorem that φ1, ..., φn ⊨ ψ if, and only if, (φ1 ∧ ... ∧ φn) → ψ is valid, we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: φ1, ..., φn ⊨ ψ if, and only if, we can produce a truth table that comes out all true for the formula (φ1 ∧ ... ∧ φn) → ψ (that is, if ⊨ (φ1 ∧ ... ∧ φn) → ψ).
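As an informal illustration, the following C program checks a formula for validity in exactly this way, by enumerating every interpretation of its two propositional symbols; the formula ((p → q) ∧ p) → q is a hypothetical example chosen here, not one taken from the article.

#include <stdio.h>

/* Evaluates the example formula ((p -> q) ^ p) -> q under one interpretation;
   1 represents T and 0 represents F. */
static int formula(int p, int q) {
    int impl_pq = !p || q;       /* p -> q        */
    int ante    = impl_pq && p;  /* (p -> q) ^ p  */
    return !ante || q;           /* ... -> q      */
}

int main(void) {
    int valid = 1;
    /* One truth-table line per interpretation of {p, q}. */
    for (int p = 0; p <= 1; p++)
        for (int q = 0; q <= 1; q++) {
            int v = formula(p, q);
            printf("p=%d q=%d : %d\n", p, q, v);
            if (!v) valid = 0;
        }
    printf("((p -> q) ^ p) -> q is %s\n", valid ? "valid (a tautology)" : "not valid");
    return 0;
}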
Semantic proof via tableaux
Since truth tables have 2^n lines for n variables, they can be tiresomely long for large values of n. Analytic tableaux are a more efficient, but nevertheless mechanical, semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."
Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below. These rules use "signed formulas", where a signed formula is an expression T X or F X, where X is an (unsigned) formula of the language 𝓛. (Informally, T X is read "X is true", and F X is read "X is false".) Their formal semantic definition is that "under any interpretation, a signed formula T X is called true if X is true, and false if X is false, whereas a signed formula F X is called false if X is true, and true if X is false."
In this notation, rule 2 means that T (X ∧ Y) yields both T X and T Y, whereas F (X ∧ Y) branches into F X and F Y. The notation is to be understood analogously for rules 3 and 4. Often, in tableaux for classical logic, the signed formula notation is simplified so that T φ is written simply as φ, and F φ as ¬φ, which accounts for naming rule 1 the "Rule of Double Negation".
One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both and for some , which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.
To construct a tableau for an argument ⟨{φ1, ..., φn}, ψ⟩, one first writes out the set of premise formulas, {φ1, ..., φn}, with one formula on each line, signed with T (that is, T φ for each φ in the set); and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ, signed with F (that is, F ψ). One then produces a truth tree (analytic tableau) by using all those lines according to the rules. A closed tree will be proof that the argument was valid, in virtue of the fact that φ1, ..., φn ⊨ ψ if, and only if, the set {φ1, ..., φn, ¬ψ} is inconsistent.
List of classically valid argument forms
Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold. We use φ ⟚ ψ to denote equivalence of φ and ψ, that is, as an abbreviation for both φ ⊨ ψ and ψ ⊨ φ; as an aid to reading the symbols, a description of each formula is given. The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it, although many authors prefer to read it as "entails", or as "models".
Syntactic proof via natural deduction
Natural deduction, since it is a method of syntactical proof, is specified by providing inference rules (also called rules of proof) for a language with the typical set of connectives ∧, ∨, →, ↔, and ¬; no axioms are used other than these rules. The rules are covered below, and a proof example is given afterwards.
Notation styles
Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The tree-style notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs—not to be confused with "truth trees", which is another name for analytic tableaux. There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes, and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition. Lastly, there is the only notation style which will actually be used in this article, which is due to Patrick Suppes, but was much popularized by E.J. Lemmon and Benson Mates. This method has the advantage that, graphically, it is the least intensive to produce and display.
A proof, then, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. See the natural deduction proof example below.
Inference rules
Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof, which are the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule of reductio ad absurdum. Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT and DN are commonly given rules, although they are not primitive.
Natural deduction proof example
The proof below derives ¬P from P → Q and ¬Q using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules.
Syntactic proof via axioms
It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits uniformly replacing the propositional variables in a theorem with arbitrary well-formed formulas. Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used.
This section gives the axioms of some historically notable axiomatic systems for propositional logic. For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic).
Frege's Begriffsschrift
Although axiomatic proof has been used since the famous Ancient Greek textbook, Euclid's Elements of Geometry, in propositional logic it dates back to Gottlob Frege's 1879 Begriffsschrift. Frege's system used only implication and negation as connectives, and it had six axioms, which were these:
Proposition 1: p → (q → p)
Proposition 2: (p → (q → r)) → ((p → q) → (p → r))
Proposition 8: (p → (q → r)) → (q → (p → r))
Proposition 28: (p → q) → (¬q → ¬p)
Proposition 31: ¬¬p → p
Proposition 41: p → ¬¬p
These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.
Łukasiewicz's P2
Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence CCNpNqCqp". Which, taken out of Łukasiewicz's Polish notation into modern notation, means (¬p → ¬q) → (q → p). Hence, Łukasiewicz is credited with this system of three axioms:
p → (q → p)
(p → (q → r)) → ((p → q) → (p → r))
(¬p → ¬q) → (q → p)
Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2 and helped popularize it.
Schematic form of P2
One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for any well-formed formulas), the axioms are given as:
φ → (ψ → φ)
(φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
(¬φ → ¬ψ) → (ψ → φ)
The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. It has also been attributed to Hilbert, and named 𝓗 in this context.
Proof example in P2
As an example, a proof of p → p in P2 is given below. First, the axioms are given names:
(A1) p → (q → p)
(A2) (p → (q → r)) → ((p → q) → (p → r))
(A3) (¬p → ¬q) → (q → p)
And the proof is as follows:
1. p → ((q → p) → p) (instance of (A1))
2. (p → ((q → p) → p)) → ((p → (q → p)) → (p → p)) (instance of (A2))
3. (p → (q → p)) → (p → p) (from (1) and (2) by modus ponens)
4. p → (q → p) (instance of (A1))
5. p → p (from (4) and (3) by modus ponens)
Solvers
One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable. Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
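For illustration only, the following C sketch decides satisfiability of a small formula in conjunctive normal form by plain backtracking search, the skeleton that DPLL refines with unit propagation and pure-literal elimination; the clause encoding (signed integers, as in the DIMACS convention) and the example formula are assumptions of the sketch, not taken from the article.

#include <stdio.h>

#define MAXVARS 16

static int nvars, nclauses;
static int clause_len[8];
static int clauses[8][4];
static int assign[MAXVARS + 1];   /* -1 = unassigned, 0 = false, 1 = true */

/* Returns 1 if all clauses are satisfied, 0 if some clause is falsified,
   -1 if the formula is still undetermined under the partial assignment. */
static int status(void) {
    int undetermined = 0;
    for (int c = 0; c < nclauses; c++) {
        int sat = 0, unassigned = 0;
        for (int i = 0; i < clause_len[c]; i++) {
            int lit = clauses[c][i], v = lit > 0 ? lit : -lit;
            if (assign[v] == -1) unassigned = 1;
            else if ((lit > 0) == (assign[v] == 1)) { sat = 1; break; }
        }
        if (!sat) {
            if (!unassigned) return 0;   /* clause falsified */
            undetermined = 1;
        }
    }
    return undetermined ? -1 : 1;
}

static int solve(void) {
    int s = status();
    if (s != -1) return s;
    /* Split on the first unassigned variable, trying true then false. */
    for (int v = 1; v <= nvars; v++) {
        if (assign[v] == -1) {
            for (int val = 1; val >= 0; val--) {
                assign[v] = val;
                if (solve()) return 1;
            }
            assign[v] = -1;   /* backtrack */
            return 0;
        }
    }
    return 0;
}

int main(void) {
    /* Hypothetical example: (p v q) & (~p v q) & (~q v r), with p=1, q=2, r=3. */
    nvars = 3; nclauses = 3;
    int data[3][4] = { {1, 2}, {-1, 2}, {-2, 3} };
    int lens[3] = {2, 2, 2};
    for (int c = 0; c < nclauses; c++) {
        clause_len[c] = lens[c];
        for (int i = 0; i < lens[c]; i++) clauses[c][i] = data[c][i];
    }
    for (int v = 1; v <= nvars; v++) assign[v] = -1;
    printf("satisfiable: %s\n", solve() ? "yes" : "no");
    return 0;
}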
Linked list
In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data, and a reference (in other words, a link) to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that data access time is linear with respect to the number of nodes in the list. Because nodes are serially linked, accessing any node requires that the prior node be accessed beforehand (which introduces difficulties in pipelining). Faster access, such as random access, is not feasible. Arrays have better cache locality compared to linked lists.
Linked lists are among the simplest and most common data structures. They can be used to implement several other common abstract data types, including lists, stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement those data structures directly without using a linked list as the basis.
The principal benefit of a linked list over a conventional array is that the list elements can be easily inserted or removed without reallocation or reorganization of the entire structure because the data items do not need to be stored contiguously in memory or on disk, while restructuring an array at run-time is a much more expensive operation. Linked lists allow insertion and removal of nodes at any point in the list, and allow doing so with a constant number of operations by keeping the link previous to the link being added or removed in memory during list traversal.
On the other hand, since simple linked lists by themselves do not allow random access to the data or any form of efficient indexing, many basic operations—such as obtaining the last node of the list, finding a node that contains a given datum, or locating the place where a new node should be inserted—may require iterating through most or all of the list elements.
History
Linked lists were developed in 1955–1956, by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation and Carnegie Mellon University as the primary data structure for their Information Processing Language (IPL). IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing". The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958.
Another early appearance of linked lists was by Hans Peter Luhn who wrote an internal IBM memorandum in January 1953 that suggested the use of linked lists in chained hash tables.
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list.
By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant developed by TSC for and marketed by Smoke Signal Broadcasting in California, used doubly linked lists in the same manner.
The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a double linked list for their file system catalog. The directory structure was similar to Unix, where a directory could contain files and other directories and extend to any depth.
Basic concepts and nomenclature
Each record of a linked list is often called an 'element' or 'node'.
The field of each node that contains the address of the next node is usually called the 'next link' or 'next pointer'. The remaining fields are known as the 'data', 'information', 'value', 'cargo', or 'payload' fields.
The 'head' of a list is its first node. The 'tail' of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the 'cdr' of the list, while the payload of the head node may be called the 'car'.
Singly linked list
Singly linked lists contain nodes which have a 'value' field as well as 'next' field, which points to the next node in line of nodes. Operations that can be performed on singly linked lists include insertion, deletion and traversal.
The following C language code demonstrates how to add a new node with the "value" to the end of a singly linked list:
#include <stdlib.h> // for malloc

// Each node in a linked list is a structure. The head node is the first node in the list.
typedef struct Node {
    int value;         // the data stored in the node
    struct Node *next; // link to the next node; NULL for the last node
} Node;

Node *addNodeToTail(Node *head, int value) {
// Declare a Node pointer and initialize it to point to the new Node (i.e., it will hold the new Node's memory address) being added to the end of the list.
Node *temp = malloc(sizeof *temp); // 'malloc' is declared in <stdlib.h>.
temp->value = value; // Add data to the value field of the new Node.
temp->next = NULL; // initialize invalid links to nil.
if (head == NULL) {
head = temp; // If the linked list is empty (i.e., the head node pointer is a null pointer), then have the head node pointer point to the new Node.
}
else {
Node *p = head; // Assign the head node pointer to the Node pointer 'p'.
while (p->next != NULL) {
p = p->next; // Traverse the list until p is the last Node. The last Node always points to NULL.
}
p->next = temp; // Make the previously last Node point to the new Node.
}
return head; // Return the head node pointer.
}
Doubly linked list
In a 'doubly linked list', each node contains, besides the next-node link, a second link field pointing to the 'previous' node in the sequence. The two links may be called 'forward' and 'backward', or 'next' and 'prev' ('previous').
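As an illustration (not taken from any particular library), the following C sketch shows a doubly linked node and the pointer adjustments needed to insert after a given node, or to unlink a node given only its own address; the type and function names are illustrative.

#include <stddef.h>

/* A minimal doubly linked node with 'next' and 'prev' links. */
typedef struct DNode {
    int value;
    struct DNode *prev;
    struct DNode *next;
} DNode;

/* Inserts newNode immediately after node, fixing up links in both directions. */
void insertAfter(DNode *node, DNode *newNode) {
    newNode->prev = node;
    newNode->next = node->next;
    if (node->next != NULL)
        node->next->prev = newNode;
    node->next = newNode;
}

/* Unlinks node from whatever list it belongs to, given only its own address.
   Note: if node was the first or last node, the list's own head/tail pointers
   must still be updated separately by the caller. */
void removeNode(DNode *node) {
    if (node->prev != NULL) node->prev->next = node->next;
    if (node->next != NULL) node->next->prev = node->prev;
    node->prev = node->next = NULL;
}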
A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages.
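A minimal C sketch of the idea follows, assuming that pointers can be round-tripped through uintptr_t (true on common platforms, but implementation-defined in general); the name XNode is illustrative.

#include <stdint.h>
#include <stdio.h>

/* Each node stores prev XOR next in one field; traversal carries the address
   of the previously visited node so the other neighbour can be recovered. */
typedef struct XNode {
    int value;
    uintptr_t link;   /* (uintptr_t)prev ^ (uintptr_t)next */
} XNode;

static void traverse(XNode *head) {
    XNode *prev = NULL, *cur = head;
    while (cur != NULL) {
        printf("%d ", cur->value);
        XNode *next = (XNode *)((uintptr_t)prev ^ cur->link);
        prev = cur;
        cur = next;
    }
    printf("\n");
}

int main(void) {
    /* Build a three-node list a <-> b <-> c by hand. */
    XNode a = {1, 0}, b = {2, 0}, c = {3, 0};
    a.link = (uintptr_t)NULL ^ (uintptr_t)&b;
    b.link = (uintptr_t)&a ^ (uintptr_t)&c;
    c.link = (uintptr_t)&b ^ (uintptr_t)NULL;
    traverse(&a);   /* prints: 1 2 3 */
    return 0;
}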
Many modern operating systems use doubly linked lists to maintain references to active processes, threads, and other dynamic objects. A common strategy for rootkits to evade detection is to unlink themselves from these lists.
Multiply linked list
In a 'multiply linked list', each node contains two or more link fields, each field being used to connect the same set of data arranged in a different order (e.g., by name, by department, by date of birth, etc.). While a doubly linked list can be seen as a special case of multiply linked list, the fact that the two and more orders are opposite to each other leads to simpler and more efficient algorithms, so they are usually treated as a separate case.
Circular linked list
In the last node of a linked list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case, the list is said to be 'circular' or 'circularly linked'; otherwise, it is said to be 'open' or 'linear'. A circular list is thus one in which the last node's pointer points to the first node (i.e., the "next link" pointer of the last node has the memory address of the first node).
In the case of a circular doubly linked list, the first node also points to the last node of the list.
Sentinel nodes
In some implementations an extra 'sentinel' or 'dummy' node may be added before the first data record or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.
Empty lists
An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.
Hash linking
The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records.
List handles
Since a reference to the first node gives access to the whole list, that reference is often called the 'address', 'pointer', or 'handle' of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes.
Combining alternatives
The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc.
Tradeoffs
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures.
Linked lists vs. dynamic arrays
A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, which is an expensive operation.
Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that a pointer is indexed to the node (before the one to be removed, or before the insertion point) already, is a constant-time operation (otherwise without this reference it is O(n)), whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate—an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space.
On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can be easily traversed in only one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or Boolean values, because the storage overhead for the links may exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data). It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.
A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is by implementing a program that resolves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, one may count around the circle n times. Once the nth person is reached, one should remove them from the circle and have the members close the circle. The process is repeated until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array, because if the people are viewed as connected nodes in a circular linked list, then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array.
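The following C sketch (with hypothetical parameters) plays out the Josephus election on a circular singly linked list: removing the counted-out person is a single pointer update, but locating each one requires walking around the circle.

#include <stdio.h>
#include <stdlib.h>

typedef struct Person {
    int id;
    struct Person *next;
} Person;

int josephus(int people, int n) {
    /* Build a circular list of people numbered 1..people. */
    Person *first = malloc(sizeof *first), *last = first;
    first->id = 1;
    for (int i = 2; i <= people; i++) {
        Person *p = malloc(sizeof *p);
        p->id = i;
        last->next = p;
        last = p;
    }
    last->next = first;           /* close the circle */

    /* Repeatedly count n places and remove that person. */
    Person *prev = last;          /* node just before the current position */
    while (prev->next != prev) {  /* more than one person left */
        for (int i = 1; i < n; i++)
            prev = prev->next;    /* searching is the slow part */
        Person *out = prev->next; /* the nth person from the current position */
        prev->next = out->next;   /* deletion is just one link update */
        free(out);
    }
    int winner = prev->id;
    free(prev);
    return winner;
}

int main(void) {
    printf("winner: %d\n", josephus(7, 3));   /* hypothetical parameters */
    return 0;
}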
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.
A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red–black trees.
Singly linked linear lists vs. other lists
While doubly linked and circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations.
A singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While those recursive solutions can be adapted for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases.
Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one—a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.
In particular, end-sentinel nodes can be shared among singly linked non-circular lists. The same end-sentinel node may be used for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by nil or ().
The advantages of the fancy variants are often limited to the complexity of the algorithms, not in their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.
Doubly linked vs. singly linked
Double-linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow fast and easy sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.
Circularly linked vs. linearly linked
A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO ("first in, first out") order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list.
With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two.
A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two list into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge.
The simplest representation for an empty circular list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty linear list is more natural and often creates fewer special cases.
For some applications, it can be useful to use singly linked lists that can vary between being circular and being linear, or even circular with a linear initial segment. Algorithms for searching or otherwise operating on these have to take precautions to avoid accidentally entering an endless loop. One well-known method is to have a second pointer walking the list at half or double the speed, and if both pointers meet at the same node, a cycle has been found.
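A C sketch of the two-speed pointer technique (often attributed to Floyd) is given below; it only reports whether a cycle exists, which is the precaution described above.

#include <stdbool.h>
#include <stddef.h>

struct Node { int value; struct Node *next; };

/* The fast pointer advances two links for every one link of the slow pointer;
   they can only meet again if the list re-enters itself somewhere. */
bool hasCycle(struct Node *head) {
    struct Node *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;          /* one step  */
        fast = fast->next->next;    /* two steps */
        if (slow == fast)
            return true;            /* the pointers met inside a cycle */
    }
    return false;                   /* fast reached the end: list is linear */
}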
Using sentinel nodes
Sentinel nodes may simplify certain list operations, by ensuring that the next or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value x, setting the sentinel's data field to x makes it unnecessary to test for end-of-list inside the loop. Another example is merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists.
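A C sketch of the search just described follows, assuming the list is terminated by a dedicated sentinel node rather than by NULL; the function name is illustrative.

#include <stddef.h>

struct Node { int data; struct Node *next; };

/* Search with an end sentinel: the sentinel's data field is set to the key
   being sought, so the scanning loop needs no separate end-of-list test. */
struct Node *find(struct Node *head, struct Node *sentinel, int x) {
    sentinel->data = x;                 /* guarantees the loop terminates */
    struct Node *p = head;
    while (p->data != x)
        p = p->next;                    /* no p != NULL check needed */
    return p == sentinel ? NULL : p;    /* NULL means x was not in the list */
}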
However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list).
However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty.
The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself.
Linked list operations
When manipulating linked lists in-place, care must be taken to not use values that have been invalidated in previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout, null is used to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways.
Linearly linked lists
Singly linked lists
The node data structure will have two fields. There is also a variable, firstNode which always points to the first node in the list, or is null for an empty list.
record Node
{
data; // The data being stored in the node
Node next // A reference to the next node, null for last node
}
record List
{
Node firstNode // points to first node of list; null for empty list
}
Traversal of a singly linked list is simple, beginning at the first node and following each next link until reaching the end:
node := list.firstNode
while node not null
(do something with node.data)
node := node.next
The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it.
function insertAfter(Node node, Node newNode) // insert newNode after node
newNode.next := node.next
node.next := newNode
Inserting at the beginning of the list requires a separate function. This requires updating firstNode.
function insertBeginning(List list, Node newNode) // insert node before current first node
newNode.next := list.firstNode
list.firstNode := newNode
Similarly, there are functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element.
function removeAfter(Node node) // remove node past this one
obsoleteNode := node.next
node.next := node.next.next
destroy obsoleteNode
function removeBeginning(List list) // remove first node
obsoleteNode := list.firstNode
list.firstNode := list.firstNode.next // point past deleted node
destroy obsoleteNode
Notice that removeBeginning() sets list.firstNode to null when removing the last node in the list.
Since it is not possible to iterate backwards, efficient insertBefore or removeBefore operations are not possible. Inserting to a list before a specific node requires traversing the list, which would have a worst case running time of O(n).
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because the entire first list must be traversed in order to find the tail, and then the second list appended after it. Thus, if two linearly linked lists are each of length n, list appending has asymptotic time complexity of O(n). In the Lisp family of languages, list appending is provided by the append procedure.
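As a sketch of the alternative mentioned above, the following C code keeps a tail reference in the list handle so that appending one whole list to another requires no traversal; the structure and function names are illustrative.

#include <stddef.h>

struct Node { int data; struct Node *next; };
struct List { struct Node *head; struct Node *tail; };

/* Appends all nodes of 'b' onto the end of 'a' in constant time and leaves
   'b' empty. */
void appendList(struct List *a, struct List *b) {
    if (b->head == NULL) return;         /* nothing to append */
    if (a->head == NULL) {               /* a was empty: take b wholesale */
        a->head = b->head;
    } else {
        a->tail->next = b->head;         /* link the two chains */
    }
    a->tail = b->tail;
    b->head = b->tail = NULL;
}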
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both insertBeginning() and removeBeginning() unnecessary, i.e., every element or node is next to another node (even the first node is next to the dummy node). In this case, the first useful data in the list will be found at list.firstNode.next.
Circularly linked list
In a circularly linked list, all nodes are linked in a continuous circle, without using null. For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The next node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time.
Circularly linked lists can be either singly or doubly linked.
Both types of circularly linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing firstNode and lastNode, although if the list may be empty, there needs to be a special representation for the empty list, such as a lastNode variable which points to some node in the list or is null if it is empty; such a lastNode variable is used here. This representation significantly simplifies adding and removing nodes with a non-empty list, but empty lists are then a special case.
Algorithms
Assuming that someNode is some node in a non-empty circular singly linked list, this code iterates through that list starting with someNode:
function iterate(someNode)
if someNode ≠ null
node := someNode
do
do something with node.value
node := node.next
while node ≠ someNode
Notice that the test "while node ≠ someNode" must be at the end of the loop. If the test was moved to the beginning of the loop, the procedure would fail whenever the list had only one node.
This function inserts a node "newNode" into a circular linked list after a given node "node". If "node" is null, it assumes that the list is empty.
function insertAfter(Node node, Node newNode)
if node = null // assume list is empty
newNode.next := newNode
else
newNode.next := node.next
node.next := newNode
update lastNode variable if necessary
Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append "newNode" to the end of the list, one may do
insertAfter(L, newNode)
L := newNode
To insert "newNode" at the beginning of the list, one may do
insertAfter(L, newNode)
if L = null
L := newNode
This function inserts a value "newVal" before a given node "node" in O(1) time. A new node is created between "node" and the next node, the value of "node" is copied into that new node, and "newVal" is put into "node". Thus, a singly linked circularly linked list with only a firstNode variable can both insert to the front and back in O(1) time.
function insertBefore(Node node, newVal)
if node = null // assume list is empty
newNode := new Node(data:=newVal, next:=newNode)
else
newNode := new Node(data:=node.data, next:=node.next)
node.data := newVal
node.next := newNode
update firstNode variable if necessary
This function removes a non-null node from a list of size greater than 1 in O(1) time. It copies data from the next node into the node, and then sets the node's next pointer to skip over the next node.
function remove(Node node)
if node ≠ null and size of list > 1
removedData := node.data
node.data := node.next.data
node.next := node.next.next
return removedData
Linked lists using arrays of nodes
Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are also not supported, parallel arrays can often be used instead.
As an example, consider the following linked list record that uses arrays instead of pointers:
record Entry {
integer next; // index of next entry in array
integer prev; // previous entry (if double-linked)
string name;
real balance;
}
A linked list can be built by creating an array of these structures, and an integer variable to store the index of the first element.
integer listHead
Entry Records[1000]
Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example:
In the above example, listHead would be set to 2, the location of the first entry in the list. Notice that entries 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a listFree integer variable, a free list could be created to keep track of what cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list.
The following code would traverse the list and display names and account balance:
i := listHead
while i ≥ 0 // loop through the list
print i, Records[i].name, Records[i].balance // print entry
i := Records[i].next
When faced with a choice, the advantages of this approach include:
The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and directly serialized for storage on disk or transfer over a network.
Especially for a small list, array indexes can occupy significantly less space than a full pointer on many architectures.
Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging them, although this can also be done in a general store.
Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated; almost no allocation overhead is incurred per node in this approach.
Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since dynamic memory allocation typically requires a search for a free memory block of the desired size.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues:
It increases complexity of the implementation.
Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list node in a large, general memory pool may be easier.
Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of constant time (although it is still an amortized constant).
Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many nodes are freed.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
Language support
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose.
In languages that support abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records.
Internal and external storage
When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called internal storage, or merely to store a reference to the data, called external storage. Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes).
External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. Although with internal storage the same data can be placed in multiple lists by including multiple next references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data.
In general, if a set of data structures needs to be included in linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine.
Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
Example of internal and external storage
To create a linked list of families and their members, using internal storage, the structure might look like the following:
record member { // member of a family
member next;
string firstName;
integer age;
}
record family { // the family itself
family next;
string lastName;
string address;
member members // head of list of members of this family
}
To print a complete list of families and their members using internal storage, write:
aFamily := Families // start at head of families list
while aFamily ≠ null // loop through list of families
print information about family
aMember := aFamily.members // get head of list of this family's members
while aMember ≠ null // loop through list of members
print information about member
aMember := aMember.next
aFamily := aFamily.next
Using external storage, the following structures can be created:
record node { // generic link structure
node next;
pointer data // generic pointer for data at node
}
record member { // structure for family member
string firstName;
integer age
}
record family { // structure for family
string lastName;
string address;
node members // head of list of members of this family
}
To print a complete list of families and their members using external storage, write:
famNode := Families // start at head of families list
while famNode ≠ null // loop through list of families
    aFamily := (family) famNode.data // extract family from node
    print information about family
    memNode := aFamily.members // get list of family members
    while memNode ≠ null // loop through list of members
        aMember := (member) memNode.data // extract member from node
        print information about member
        memNode := memNode.next
    famNode := famNode.next
Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure (node), and this language does not have parametric types.
As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary.
Speeding up search
Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again.
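As a rough illustration of the move-to-front heuristic, the following C sketch searches a singly linked list and, on a hit, unlinks the found node and reinserts it at the head. The node layout and the function name are assumptions made for this example only.
#include <stddef.h>

struct node {
    int key;
    struct node *next;
};

/* Search for key; if found, move the node to the front of the list
 * so that recently used items are found quickly next time. */
struct node *find_move_to_front(struct node **head, int key) {
    struct node *prev = NULL;
    for (struct node *cur = *head; cur != NULL; prev = cur, cur = cur->next) {
        if (cur->key == key) {
            if (prev != NULL) {          /* not already at the front */
                prev->next = cur->next;  /* unlink from current position */
                cur->next = *head;       /* relink at the head */
                *head = cur;
            }
            return cur;
        }
    }
    return NULL; /* not found */
}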
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red–black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again).
Random-access lists
A random-access list is a list with support for fast random access to read or modify any element in the list. One possible implementation is a skew binary random-access list using the skew binary number system, which involves a list of trees with special properties; this allows worst-case constant time head/cons operations, and worst-case logarithmic time random access to an element by index. Random-access lists can be implemented as persistent data structures.
Random-access lists can be viewed as immutable linked lists in that they likewise support the same O(1) head and tail operations.
A simple extension to random-access lists is the min-list, which provides an additional operation that yields the minimum element in the entire list in constant time (without mutation complexities).
Related data structures
Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported.
The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list.
A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node.
An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list.
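For instance, an unrolled list node in C might look like the sketch below; the capacity of 16 and the field names are arbitrary choices for illustration, not values prescribed by the technique.
#define NODE_CAPACITY 16

struct unrolled_node {
    struct unrolled_node *next;  /* link to the next block of elements */
    int count;                   /* how many of the slots are in use */
    int values[NODE_CAPACITY];   /* several elements stored contiguously */
};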
A hash table may use linked lists to store the chains of items that hash to the same position in the hash table.
A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index.
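In the common 0-based array layout of a binary heap, the related positions are found by index arithmetic rather than by following references; a small C sketch of the standard formulas (the helper names are illustrative):
/* Standard 0-based array layout for a binary heap. */
int heap_parent(int i)      { return (i - 1) / 2; }
int heap_left_child(int i)  { return 2 * i + 1; }
int heap_right_child(int i) { return 2 * i + 2; }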
A self-organizing list rearranges its nodes based on some heuristic which reduces search times for data retrieval by keeping commonly accessed nodes at the head of the list.
| Mathematics | Data structures and types | null |
18168 | https://en.wikipedia.org/wiki/Logic%20gate | Logic gate | A logic gate is a device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device (see ideal and real op-amps for comparison).
The primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors). They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, acoustics, or even mechanical or thermal elements.
Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which may contain more than 100 million logic gates.
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.
History and development
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system. Leibniz established that using the binary system combined the principles of arithmetic and logic.
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Early electro-mechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously.
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS technology today. In 1957 Frosch and Derick were able to manufacture PMOS and NMOS planar gates. Later a team at Bell Labs demonstrated a working MOS with PMOS and NMOS gates. Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.
Symbols
There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates. They ranged from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor.
IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them. These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description Languages (HDL) such as Verilog or VHDL.
De Morgan equivalent symbols
By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active high output to an active low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs are brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
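The logical content of these equivalences is easy to check exhaustively. The short C sketch below walks all four input combinations and confirms that NAND(a, b) equals OR(NOT a, NOT b) and that NOR(a, b) equals AND(NOT a, NOT b); the variable names are assumptions made for this example.
#include <assert.h>

int main(void) {
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int nand = !(a && b);
            int nor  = !(a || b);
            /* De Morgan: NAND is an OR of negated inputs,
             * and NOR is an AND of negated inputs. */
            assert(nand == (!a || !b));
            assert(nor  == (!a && !b));
        }
    }
    return 0;
}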
Truth tables
Output comparison of various logic gates:
INPUT A  INPUT B  |  AND  NAND  OR  NOR  XOR  XNOR
   0        0     |   0    1    0    1    0    1
   0        1     |   0    1    1    0    1    0
   1        0     |   0    1    1    0    1    0
   1        1     |   1    0    1    0    0    1
Universal logic gates
Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
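A minimal sketch of that universality, assuming nothing beyond a two-input NAND, is shown below in C: each of the other basic functions is built purely from calls to nand_gate, and the main function checks every input combination. The function names are illustrative assumptions.
#include <assert.h>

static int nand_gate(int a, int b) { return !(a && b); }
static int not_gate(int a)         { return nand_gate(a, a); }
static int and_gate(int a, int b)  { return not_gate(nand_gate(a, b)); }
static int or_gate(int a, int b)   { return nand_gate(not_gate(a), not_gate(b)); }
static int nor_gate(int a, int b)  { return not_gate(or_gate(a, b)); }
static int xor_gate(int a, int b) {
    int n = nand_gate(a, b);                        /* classic 4-NAND XOR */
    return nand_gate(nand_gate(a, n), nand_gate(b, n));
}

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            assert(not_gate(a)    == !a);
            assert(and_gate(a, b) == (a && b));
            assert(or_gate(a, b)  == (a || b));
            assert(nor_gate(a, b) == !(a || b));
            assert(xor_gate(a, b) == (a != b));
        }
    return 0;
}
The same construction works with NOR gates alone, by the dual of each step.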
Data storage and sequential logic
Logic gates can also be used to hold a state, allowing data storage. A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system since its output can be influenced by its previous state(s), i.e. by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states.
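As a rough sketch of how cross-coupled gates hold a bit, the C fragment below models a set-reset (SR) latch built from two NOR gates; the function and variable names are illustrative assumptions, and the loop simply iterates the two gate equations until the outputs settle.
#include <stdio.h>

/* Cross-coupled NOR latch: q and q_bar each feed the other gate.
 * s = set input, r = reset input (s and r should not both be 1). */
static void sr_latch(int s, int r, int *q, int *q_bar) {
    for (int i = 0; i < 4; i++) {        /* iterate until the outputs settle */
        int new_q     = !(r || *q_bar);
        int new_q_bar = !(s || new_q);
        *q = new_q;
        *q_bar = new_q_bar;
    }
}

int main(void) {
    int q = 0, q_bar = 1;
    sr_latch(1, 0, &q, &q_bar);  /* set */
    printf("after set:   q=%d\n", q);    /* q becomes 1 ... */
    sr_latch(0, 0, &q, &q_bar);  /* hold: both inputs released */
    printf("after hold:  q=%d\n", q);    /* ... and stays 1 with both inputs low */
    sr_latch(0, 1, &q, &q_bar);  /* reset */
    printf("after reset: q=%d\n", q);
    return 0;
}
The "hold" case is the point of the circuit: with both inputs low, the feedback between the two gates preserves whichever state was last written.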
These logic circuits are used in computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.
Manufacturing
Electronic gates
A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors.
Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.
An important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
Logic families
There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL (resistor–diode logic), RTL (resistor–transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic versus advanced types that still use CMOS technology but include optimizations to avoid the loss of speed caused by slower PMOS transistors.
The simplest family of logic gates uses bipolar transistors, and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL.
As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power dissipation.
Other types of logic gates include, but are not limited to:
Three-state logic gates
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-states driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit.
Non-electronic logic gates
Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay or mechanical logic gates, including on a molecular scale. Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs. Logic gates have been made out of DNA (see DNA nanotechnology) and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum mechanical effects, see quantum logic gate. Photonic logic gates use nonlinear optical effects.
In principle any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
| Technology | Digital logic | null |
18172 | https://en.wikipedia.org/wiki/Land%20mine | Land mine | A land mine, or landmine, is an explosive weapon often concealed under or camouflaged on the ground, and designed to destroy or disable enemy targets as they pass over or near it. Land mines are divided into two types: anti-tank mines, which are designed to disable tanks or other vehicles; and anti-personnel mines, which are designed to injure or kill people.
Land mines are typically pressure activated, exploding automatically when stepped on by a person or driven over by a vehicle, though alternative detonation mechanisms are sometimes used. A land mine may cause damage by direct blast effect, by fragments that are thrown by the blast, or by both. Land mines are typically laid throughout an area, creating a minefield which is dangerous to cross.
The use of land mines is controversial because of their indiscriminate nature and their potential to remain dangerous many years after a conflict has ended, harming civilians and the economy. With pressure from a number of campaign groups organised through the International Campaign to Ban Landmines, a global movement to prohibit their use led to the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, also known as the Ottawa Treaty. To date, 164 nations have signed the treaty. However, China, the Russian Federation and the United States are not signatories.
Definition
The Anti-Personnel Mine Ban Convention (also known as the Ottawa Treaty) and the Protocol on Mines, Booby-Traps and Other Devices define a mine as a "munition designed to be placed under, on or near the ground or other surface area and to be exploded by the presence, proximity or contact of a person or vehicle". Similar in function is the booby-trap, which the protocol defines as "any device or material which is designed, constructed or adapted to kill or injure and which functions unexpectedly when a person disturbs or approaches an apparently harmless object or performs an apparently safe act". Such actions might include opening a door or picking up an object. Normally, mines are mass-produced and placed in groups, while booby traps are improvised and deployed one at a time. Booby traps can also be non-explosive devices such as punji sticks. Overlapping both categories is the improvised explosive device (IED), which is "a device placed or fabricated in an improvised manner incorporating explosive material, destructive, lethal, noxious, incendiary, pyrotechnic materials or chemicals designed to destroy, disfigure, distract or harass. They may incorporate military stores, but are normally devised from non-military components." Some meet the definition of mines or booby traps and are also referred to as "improvised", "artisanal" or "locally manufactured" mines. Other types of IED are remotely activated, so are not considered mines.
Remotely delivered mines are dropped from aircraft or carried by devices such as artillery shells or rockets. Another type of remotely delivered explosive is the cluster munition, a device that releases several submunitions ("bomblets") over a large area. The use, transfer, production, and stockpiling of cluster munitions is prohibited by the international CCM treaty. If bomblets do not explode, they are referred to as unexploded ordnance (UXO), along with unexploded artillery shells and other explosive devices that were not manually placed (that is, mines and booby traps are not UXOs). Explosive remnants of war (ERW) include UXOs and abandoned explosive ordnance (AXO), devices that were never used and were left behind after a conflict.
History
The history of land mines can be divided into three main phases: In the ancient world, buried spikes provided many of the same functions as modern mines. Mines using gunpowder as the explosive were used from the Ming dynasty to the American Civil War. Subsequently, high explosives were developed for use in land mines.
Before explosives
Some fortifications in the Roman Empire were surrounded by a series of hazards buried in the ground. These included goads, pieces of wood with iron hooks on their ends; lilia (lilies, so named after their appearance), which were pits in which sharpened logs were arranged in a five-point pattern; and abatis, fallen trees with sharpened branches facing outwards. As with modern land mines, they were "victim-operated", often concealed, and complicated attempts by the enemy to remove the obstacles by making them vulnerable to projectiles such as spears. A notable use of these defenses was by Julius Caesar in the Battle of Alesia. His forces were besieging Vercingetorix, the leader of the Gauls, but Vercingetorix managed to send for reinforcements. To maintain the siege and defend against the reinforcements, Caesar formed a line of fortifications on both sides, and they played an important role in his victory. Lilies were also used by Scots against the English at the Battle of Bannockburn in 1314, and by Germans at the Battle of Passchendaele in the First World War.
A more easily deployed defense used by the Romans was the caltrop, a weapon 12–15 cm across with four sharp spikes that are oriented so that when it is thrown on the ground, one spike always points up. As with modern antipersonnel mines, caltrops are designed to disable soldiers rather than kill them; they are also more effective in stopping mounted forces, who lack the advantage of being able to carefully scrutinize each step they take (though forcing foot-mounted forces to take the time to do so has benefits in and of itself). They were used by the Jin dynasty in China at the Battle of Zhongdu to slow down the advance of Genghis Khan's army; Joan of Arc was wounded by one in the Siege of Orléans; in Japan they are known as tetsu-bishi and were used by ninjas from the fourteenth century onward. Caltrops are still strung together and used as roadblocks in some modern conflicts.
Gunpowder
East Asia
Gunpowder, an explosive mixture of sulfur, charcoal and potassium nitrate, was invented in China by the 10th century and was used in warfare soon after. An "enormous bomb", credited to Lou Qianxia, was used in 1277 by the Chinese at the Battle of Zhongdu.
A 14th-century military treatise, the Huolongjing (Fire Dragon Manual), describes hollow cast iron cannonball shells filled with gunpowder. The wad of the mine was made of hard wood, carrying three different fuses in case of defective connection to the touch hole. These fuses were long and lit by hand, so they required carefully timed calculations of enemy movements.
The Huolongjing also describes land mines that were set off by enemy movement. A length of bamboo was waterproofed by wrapping it in cowhide and covering it with oil. It was filled with compressed gunpowder and lead or iron pellets, sealed with wax and concealed in a trench. The triggering mechanism was not fully described until the early 17th century. When the enemy stepped onto hidden boards, they dislodged a pin, causing a weight to fall. A cord attached to the weight was wrapped around a drum attached to two steel wheels; when the weight fell, the wheels struck sparks against flint, igniting a set of fuses leading to multiple mines. A similar mechanism was used in the first wheellock musket in Europe as sketched by Leonardo da Vinci around 1500 AD.
Another victim-operated device was the "underground sky-soaring thunder", which lured bounty hunters with halberds, pikes, and lances planted in the ground. If they pulled on one of these weapons, the butt end disturbed a bowl underneath and a slow-burning incandescent material in the bowl ignited the fuses.
Western world
At Augsburg in 1573, three centuries after the Chinese invented the first pressure-operated mine, a German military engineer by the name of Samuel Zimmermann invented the Fladdermine (flying mine). It consisted of a few pounds of black powder buried near the surface and was activated by stepping on it or tripping a wire that made a flintlock fire. Such mines were deployed on the slope in front of a fort. They were used during the Franco-Prussian War, but were probably not very effective because a flintlock does not work for long when left untended.
The fougasse was a precursor of modern fragmentation mines and the claymore mine. It consisted of a cone-shaped hole with gunpowder at the bottom, covered either by rocks and scrap iron (stone fougasse) or mortar shells, similar to large black powder hand grenades (shell fougasse). It was triggered by a flintlock connected to a tripwire on the surface. It could sometimes cause heavy casualties but required high maintenance due to the susceptibility of black powder to dampness. Consequently, it was mainly employed in the defenses of major fortifications, in which role it was used in several European wars of the eighteenth century and the American Revolution.
Early land mines suffered from unreliable fuses which were vulnerable to damp. This changed with the invention of the safety fuse. Later, command initiation, the ability to detonate a charge immediately instead of waiting several minutes for a fuse to burn, became possible after electricity was developed. An electric current sent down a wire could ignite the charge with a spark. The Russians claim first use of this technology in the Russo-Turkish War of 1828–1829, and with it the fougasse remained useful until it was superseded by the Claymore mine in the 1960s.
Victim-activated mines were also unreliable because they relied on a flintlock to ignite the explosive. The percussion cap, developed in the early 19th century, made them much more reliable, and pressure-operated mines were deployed on land and sea in the Crimean War (1853–1856).
During the American Civil War, the Confederate brigadier general Gabriel J. Rains deployed thousands of "torpedoes" consisting of artillery shells with pressure caps, beginning with the Battle of Yorktown in 1862. As a captain, Rains had earlier employed explosive booby traps during the Seminole Wars in Florida in 1840. Over the course of the war, mines only caused a few hundred casualties, but they had a large effect on morale and slowed down the advance of Union troops. Many on both sides considered the use of mines barbaric, and in response, generals in the Union Army forced Confederate prisoners to remove the mines.
High explosives
Starting in the 19th century, more powerful explosives than gunpowder were developed, often for non-military reasons such as blasting train tunnels in the Alps and Rockies. Guncotton, up to four times more powerful than gunpowder, was invented by Christian Schonbein in 1846. It was dangerous to make until Frederick Augustus Abel developed a safe method in 1865. From the 1870s to the First World War, it was the standard explosive used by the British military.
In 1847, Ascanio Sobrero invented nitroglycerine to treat angina pectoris and it turned out to be a much more powerful explosive than guncotton. It was very dangerous to use until Alfred Nobel formulated a solid mixture he called dynamite and paired it with a safe detonator he developed. Even then, dynamite needed to be stored carefully or it could form crystals that detonated easily. Thus, the military still preferred guncotton.
In 1863, the German chemical industry developed trinitrotoluene (TNT). This had the advantage that it was difficult to detonate, so it could withstand the shock of firing by artillery pieces. It was also advantageous for land mines for several reasons: it was not detonated by the shock of shells landing nearby; it was lightweight, unaffected by damp, and stable under a wide range of conditions; it could be melted to fill a container of any shape, and it was cheap to make. Thus, it became the standard explosive in mines after the First World War.
Between the American Civil War and the First World War
The British used mines in the Siege of Khartoum. A Sudanese Mahdist force much larger than the British strength was held off for ten months, but the town was ultimately taken and the British massacred. In the Boer War (1899–1902), they succeeded in holding Mafeking against Boer forces with the help of a mixture of real and fake minefields; and they laid mines alongside railroad tracks to discourage sabotage.
In the Russo-Japanese War of 1904–1905, both sides used land and sea mines, although the effect on land mainly affected morale. The naval mines were far more effective, destroying several battleships.
First World War
One sign of the increasing power of explosives used in land mines was that, by the First World War, they burst into about 1,000 high-velocity fragments; in the Franco-Prussian War (1870), it had only been 20 to 30 fragments. Nevertheless, anti-personnel mines were not a big factor in the war because machine guns, barbed wire and rapid-fire artillery were far more effective defenses. An exception was in Africa (now Tanzania and Namibia) where the warfare was much more mobile.
Towards the end of the war, the British started to use tanks to break through trench defenses. The Germans responded with anti-tank guns and mines. Improvised mines gave way to mass-produced mines consisting of wooden boxes filled with guncotton, and minefields were standardized to stop masses of tanks from advancing.
Between the world wars, the future Allies did little work on land mines, but the Germans developed a series of anti-tank mines, the Tellermines (plate mines). They also developed the Schrapnell mine (also known as the S-mine), the first bounding mine. When triggered, this jumped up to about waist height and exploded, sending thousands of steel balls in all directions. Triggered by pressure, trip wires or electronics, it could harm soldiers within an area of about 2,800 square feet.
Second World War
Tens of millions of mines were laid in the Second World War, particularly in the deserts of North Africa and the steppes of Eastern Europe, where the open ground favored tanks. However, the first country to use them was Finland. They were defending against a much larger Soviet force with over 6,000 tanks, twenty times the number the Finns had; but they had terrain that was broken up by lakes and forests, so tank movement was restricted to roads and tracks. Their defensive line, the Mannerheim Line, integrated these natural defenses with mines, including simple fragmentation mines mounted on stakes.
While the Germans were advancing rapidly using blitzkrieg tactics, they did not make much use of mines. After 1942, however, they were on the defensive and became the most inventive and systematic users of mines. Their production shot up and they began inventing new types of mines as the Allies found ways to counter the existing ones. To make it more difficult to remove antitank mines, they surrounded them with S-mines and added anti-handling devices that would explode when soldiers tried to lift them. They also took a formal approach to laying mines and they kept detailed records of the locations of mines.
In the Second Battle of El Alamein in 1942, the Germans prepared for an Allied attack by laying about half a million mines in two fields running across the entire battlefield and five miles deep. Nicknamed the "Devil's gardens", they were covered by 88 mm anti-tank guns and small-arms fire. The Allies prevailed, but at the cost of over half their tanks; 20 percent of the losses were caused by mines.
The Soviets learned the value of mines from their war with Finland, and when Germany invaded they made heavy use of them, manufacturing over 67 million. At the Battle of Kursk, which put an end to the German advance, they laid over a million mines in eight belts with an overall depth of 35 kilometres.
Mines forced tanks to slow down and wait for soldiers to go ahead and remove the mines. The main method of breaching minefields involved prodding the dirt with a bayonet or stick at an angle of 30 degrees to avoid pressuring the top of the mine. Since all mines at the beginning of the war had metal casings, metal detectors could be used to speed up the locating of mines. A Polish officer, Józef Kosacki, developed a portable mine detector known as the Polish mine detector. To counter the detector, Germans developed mines with wooden casings, the Schü-mine 42 (anti-personnel) and Holzmine 42 (anti-tank). Effective, cheap and easy to make, the Schü-mine became the most common mine in the war. Mine casings were also made of glass, concrete and clay. The Russians developed a mine with a pressed-cardboard casing, the PMK40, and the Italians made an anti-tank mine out of bakelite. In 1944, the Germans created the Topfmine, an entirely non-metallic mine. They ensured that they could detect their own mines by covering them with radioactive sand; the Allies did not find this out until after the war.
Several mechanical methods for clearing mines were tried. Heavy rollers were attached to tanks or cargo trucks, but they did not last long and their weight made the tanks considerably slower. Tanks and bulldozers pushed ploughs that turned aside any mines down to a depth of 30 cm. The Bangalore torpedo, a long thin tube filled with explosives, was invented in 1912 and used to clear barbed wire; larger versions such as the Snake and the Conger were developed for clearing mines, but were not very effective. One of the best options was the flail, which had weights attached by chains to rotating drums. The first version, the Scorpion, was attached to the Matilda tank and used in the Second Battle of El Alamein. The Crab, attached to the Sherman tank, was faster, at 2 kilometers per hour; it was used during D-Day and the aftermath.
Cold War
During the Cold War, the members of NATO were concerned about massive armored attacks by the Soviet Union. They planned for a minefield stretching across the entire West German border, and developed new types of mines. The British designed an anti-tank mine, the Mark 7, to defeat rollers by detonating the second time it was pressed. It also had a 0.7-second delay so the tank would be directly over the mine. They also developed the first scatterable mine, the No. 7 Dingbat. The Americans used the M6 anti-tank mine and tripwire-operated bounding anti-personnel mines such as the M2 and M16.
In the Korean War, land mine use was dictated by the steep terrain, narrow valleys, forest cover and lack of developed roads. This made tanks less effective and more easily stopped by mines. However, mines laid near roads were often easy to spot. In response to this problem, the U.S. developed the M24, a mine that was placed off to the side of the road. When triggered by a tripwire, it fired a rocket. However, the mine was not available until after the war.
The Chinese had a lot of success with massed infantry attacks. The extensive forest cover limited the range of machine guns, but anti-personnel mines were effective. However, mines were poorly recorded and marked, often becoming as much a hazard to allies as enemies. Tripwire-operated mines were not defended by pressure mines; the Chinese were often able to disable them and reuse them against U.N. forces.
Looking for more destructive mines, the Americans developed the Claymore, a directional fragmentation mine that hurls steel balls in a 60-degree arc at a speed of 1,200 m/s. They also developed a pressure-operated mine, the M14 "toe-popper". These, too, were ready too late for the Korean War.
In 1948, the British developed the No. 6 anti-personnel mine, a minimum-metal mine with a narrow diameter, making it difficult to detect with metal detectors or prodding. Its three-pronged pressure piece inspired the nickname "carrot mine". However, it was unreliable in wet conditions. In the 1960s the Canadians developed a similar, but more reliable mine, the C3A1 "Elsie" and the British army adopted it. The British also developed the L9 bar mine, a wide anti-tank mine with a rectangular shape, which covered more area, allowing a minefield to be laid four times as fast as previous mines. They also upgraded the Dingbat to the Ranger, a plastic mine that was fired from a truck-mounted discharger that could fire 72 mines at a time.
In the 1950s, the US Operation Doan Brook studied the feasibility of delivering mines by air. This led to three types of air-delivered mine. Wide Area Anti-Personnel Mines (WAAPMs) were small steel spheres that discharged tripwires when they hit the ground; each dispenser held 540 mines. The BLU-43 Dragontooth was small and had a flattened "W" shape to slow its descent, while the gravel mine was larger. Both were packed by the thousand into bombs. All three were designed to inactivate after a period of time, but any that failed to do so presented a safety hazard. Over 37 million gravel mines were produced between 1967 and 1968, and when they were dropped in places like Vietnam their locations were unmarked and unrecorded. A similar problem was presented by unexploded cluster munitions.
The next generation of scatterable mines arose in response to the increasing mobility of war. The Germans developed the Skorpion system, which scattered AT2 mines from a tracked vehicle. The Italians developed a helicopter delivery system that could rapidly switch between SB-33 anti-personnel mines and SB-81 anti-tank mines. The US developed a range of systems called the Family of Scatterable Mines (FASCAM) that could deliver mines by fast jet, artillery, helicopter and ground launcher.
Middle Eastern conflicts
The Iran–Iraq War, the Gulf War, and the Islamic State have all contributed to land mine saturation in Iraq from the 1980s through 2020. In 2019, Iraq was the most saturated country in the world with land mines. Countries that provided land mines during the Iran–Iraq War included Belgium, Canada, Chile, China, Egypt, France, Italy, Romania, Singapore, the former Soviet Union and the U.S.; these mines were concentrated in the Kurdish areas in the north of Iraq. During the Gulf War, the U.S. deployed 117,634 mines, of which 27,967 were anti-personnel mines and 89,667 were anti-vehicle mines. The U.S. did not use land mines during the Iraq War.
Landmines and other unexploded battlefield ordnance contaminate at least 724 million square meters of land in Afghanistan. Only two of Afghanistan's twenty-nine provinces are believed to be free of landmines. The most heavily mined provinces are Herat and Kandahar. Since 1989, nearly 44,000 Afghan civilians have been recorded as killed or injured by landmines and explosive remnants of war (ERW), averaging around 110 people per month. Improvised mines and ERW from armed clashes caused nearly 99 percent of the casualties recorded in 2021.
Invasion of Ukraine
During the 2022 Russian Invasion of Ukraine, both Russian and Ukrainian forces have used land mines. Ukrainian officials claim Russian forces planted thousands of land mines or other explosive devices during their withdrawal from Ukrainian cities, including in civilian areas. Russian forces have also utilized remotely delivered anti-personnel mines such as the POM-3.
Chemical and nuclear
In the First World War, the Germans developed a device, nicknamed the "Yperite Mine" by the British, that they left behind in abandoned trenches and bunkers. It was detonated by a delayed charge, spreading mustard gas ("Yperite"). In the Second World War they developed a modern chemical mine, the Sprüh-Büchse 37 (Bounding Gas Mine 37), but never used it. The United States developed the M1 chemical mine, which used mustard gas, in 1939; and the M23 chemical mine, which used the VX nerve agent, in 1960. The Soviets developed the KhF, a "bounding chemical mine". The French had chemical mines and the Iraqis were believed to have them before the invasion of Kuwait. In 1997, the Chemical Weapons Convention came into force, prohibiting the use of chemical weapons and mandating their destruction. By July 2023 all declared stockpiles of chemical weapons were destroyed.
For a few decades during the Cold War, the U.S. developed atomic demolition munitions, often referred to as nuclear land mines. These were portable nuclear bombs that could be placed by hand, and could be detonated remotely or with a timer. Some of these were deployed in Europe. Governments in West Germany, Turkey and Greece wanted to have nuclear minefields as a defense against attack from the Warsaw Pact. However, such weapons were politically and tactically infeasible, and by 1989 the last of these munitions was retired. The British also had a project, codenamed Blue Peacock, to develop nuclear mines to be buried in Germany; the project was cancelled in 1958.
Characteristics and function
A conventional land mine consists of a casing that is mostly filled with the main charge. It has a firing mechanism such as a pressure plate; this triggers a detonator or igniter, which in turn sets off a booster charge. There may be additional firing mechanisms in anti-handling devices.
Firing mechanisms and initiating actions
A land mine can be triggered by a number of things including pressure, movement, sound, magnetism and vibration. Anti-personnel mines commonly use the pressure of a person's foot as a trigger, but tripwires are also frequently employed. Because modern anti-vehicle mines usually employ magnetic triggers, they can detonate even if the vehicle's tires or tracks do not make direct contact with the mine. Advanced mines are able to sense the difference between friendly and enemy types of vehicles by way of a built-in signature catalog (an identification friend or foe system). This theoretically enables friendly forces to use the mined area while denying the enemy access.
Many mines combine the main trigger with a touch or tilt trigger to prevent enemy engineers from defusing the mine. Land mine designs tend to use as little metal as possible to make searching with a metal detector more difficult; land mines made mostly of plastic have the added advantage of being very inexpensive.
Some types of modern mines are designed to self-destruct, or chemically render themselves inert after a period of weeks or months to reduce the likelihood of civilian casualties at the conflict's end. These self-destruct mechanisms are not absolutely reliable, and most land mines laid historically are not equipped in this manner.
There is a common misconception that a landmine is armed by stepping on it and only triggered by stepping off. Almost no type of mine works this way. In virtually all cases the initial pressure trigger detonates the mine, since mines are designed to kill or maim, not to hold the victim in place until the mine can be disarmed. This misperception originated with the fictional portrayal of mines, often in movies in which the disarming of a mine is a source of narrative tension.
Anti-handling devices
Anti-handling devices detonate the mine if someone attempts to lift, shift or disarm it. The intention is to hinder deminers by discouraging any attempts to clear minefields. There is a degree of overlap between the function of a boobytrap and an anti-handling device insofar as some mines have optional fuze pockets into which standard pull or pressure-release boobytrap firing devices can be screwed. Alternatively, some mines may mimic a standard design, but actually be specifically intended to kill deminers, such as the MC-3 and PMN-3 variants of the PMN mine. Anti-handling devices can be found on both anti-personnel mines and anti-tank mines, either as an integral part of their design or as improvised add-ons. For this reason, the standard render safe procedure for mines is often to destroy them on site without attempting to lift them.
Smart mines
"Smart mines" utilize a number of advanced technologies developed in the late 20th and early 21st century. Most commonly, this includes mechanisms to deactivate or self-destruct the mine after a preset period of time. This is intended to reduce civilian casualties and simplify demining.
Other innovations include "self-healing" minefields, which detect gaps in the field and can direct the mines to rearrange their positions, eliminating the gaps.
Anti-tank mines
Anti-tank mines were created in response to the invention of the tank in the First World War. At first improvised, they were soon followed by purpose-built designs. Set off when a tank passes, they attack the tank at one of its weaker areas – the tracks. They are designed to immobilize or destroy vehicles and their occupants. In U.S. military terminology, destroying the vehicle is referred to as a catastrophic kill, while only disabling its movement is referred to as a mobility kill.
Anti-tank mines are typically larger than anti-personnel mines and require more pressure to detonate. The high trigger pressure normally prevents them from being set off by infantry or smaller vehicles of lesser importance. More modern anti-tank mines use shaped charges to focus and increase the armor penetration of the explosives.
Anti-personnel mines
Anti-personnel mines are designed primarily to kill or injure people, as opposed to vehicles. They are often designed to injure rather than kill to increase the logistical support (evacuation, medical) burden on the opposing force. Some types of anti-personnel mines can also damage the tracks or wheels of armored vehicles.
In the asymmetric warfare conflicts and civil wars of the 21st century, improvised explosives, known as IEDs, have partially supplanted conventional land mines as the source of injury to dismounted (pedestrian) soldiers and civilians. IEDs are used mainly by insurgents and terrorists against regular armed forces and civilians. Injuries from anti-personnel IEDs were recently reported in BMJ Open to be far worse than those from landmines, resulting in multiple limb amputations and lower-body mutilation.
Warfare
Land mines were designed for two main uses:
To create defensive tactical barriers, channelling attacking forces into predetermined fire zones or slowing an invading force's progress to allow reinforcements to arrive.
To act as passive area denial weapons (to deny the enemy use of valuable terrain, resources or facilities when active defense of the area is not desirable or possible).
Land mines are currently used in large quantities mostly for this first purpose, thus their widespread use in the demilitarized zones (DMZs) of likely flashpoints such as Cyprus, Afghanistan and Korea. Syria has used land mines in its civil war. Since 2021, land mine use has risen in Myanmar during its internal conflict. As of 2023, both Russia and Ukraine have deployed land mines.
In military science, minefields are considered a defensive or harassing weapon, used to slow the enemy down, to deny certain terrain to the enemy, to focus enemy movement into kill zones, or to reduce morale by randomly attacking materiel and personnel. In some engagements during World War II, anti-tank mines accounted for half of all vehicles disabled.
Since combat engineers with mine-clearing equipment can clear a path through a minefield relatively quickly, mines are usually considered effective only if covered by fire.
The extents of minefields are often marked with warning signs and cloth tape, to prevent friendly troops and non-combatants from entering them. Sometimes, terrain can also be denied using dummy minefields. Most forces carefully record the location and disposition of their own minefields, because warning signs can be destroyed or removed, and minefields should eventually be cleared. Minefields may also have marked or unmarked safe routes to allow friendly movement through them.
Placing minefields without marking and recording them for later removal is considered a war crime under Protocol II of the Convention on Certain Conventional Weapons, which is itself an annex to the Geneva Conventions.
Artillery and aircraft-scatterable mines allow minefields to be placed in front of moving formations of enemy units, including the reinforcement of minefields or other obstacles that have been breached by enemy engineers. They can also be used to cover the retreat of forces disengaging from the enemy, or for interdiction of supporting units to isolate front line units from resupply. In most cases these minefields consist of a combination of anti-tank and anti-personnel mines, with the anti-personnel mines making removal of the anti-tank mines more difficult. Mines of this type used by the United States are designed to self-destruct after a preset period of time, reducing the requirement for mine clearing to only those mines whose self-destruct system did not function. Some designs of these scatterable mines require an electrical charge (capacitor or battery) to detonate. After a certain period of time, either the charge dissipates, leaving them effectively inert or the circuitry is designed such that upon reaching a low level, the device is triggered, destroying the mine.
Guerrilla warfare
None of the conventional tactics and norms of mine warfare applies when mines are employed in a guerrilla role:
The mines are not used in defensive roles (for specific position or area).
Mined areas are not marked.
Mines are usually placed singly and not in groups covering an area.
Mines are often left unattended (not covered by fire).
Land mines were commonly deployed by insurgents during the South African Border War, leading directly to the development of the first dedicated mine-protected armoured vehicles in South Africa. Namibian insurgents used anti-tank mines to throw South African military convoys into disarray before attacking them. To discourage detection and removal efforts, they also laid anti-personnel mines directly parallel to the anti-tank mines. This initially resulted in heavy South African military and police casualties, as the vast length of road network vulnerable to insurgent sappers each day made comprehensive detection and clearance efforts impractical. The only other viable option was the adoption of mine-protected vehicles, which could remain mobile on the roads with little risk to their passengers even if a mine was detonated. South Africa is widely credited with inventing the v-hull, a vee-shaped hull for armoured vehicles which deflects mine blasts away from the passenger compartment.
During the ongoing Syrian Civil War, Iraqi Civil War (2014–2017) and Yemeni Civil War (2015–present) land mines have been used for both defensive and guerrilla purposes.
Laying mines
Minefields may be laid by several means. The preferred, but most labour-intensive, way is to have engineers bury the mines, since this will make the mines practically invisible and reduce the number of mines needed to deny the enemy an area. Mines can be laid by specialized mine-laying vehicles. Mine-scattering shells may be fired by artillery from a distance of several tens of kilometers.
Mines may be dropped from helicopters or airplanes, or ejected from cluster bombs or cruise missiles.
Anti-tank minefields can be scattered with anti-personnel mines to make clearing them manually more time-consuming; and anti-personnel minefields are scattered with anti-tank mines to prevent the use of armored vehicles to clear them quickly. Some anti-tank mine types are also able to be triggered by infantry, giving them a dual purpose even though their main and official intention is to work as anti-tank weapons.
Some minefields are specifically booby-trapped to make clearing them more dangerous. Mixed anti-personnel and anti-tank minefields, anti-personnel mines under anti-tank mines, and fuses separated from mines have all been used for this purpose. Often, single mines are backed by a secondary device, designed to kill or maim personnel tasked with clearing the mine.
Multiple anti-tank mines have been buried in stacks of two or three with the bottom mine fuzed, to multiply the penetrating power. Since the mines are buried, the ground directs the energy of the blast in a single direction—through the bottom of the target vehicle or on the track.
Another specific use is to mine an aircraft runway immediately after it has been bombed to delay or discourage repair. Some cluster bombs combine these functions. One example was the British JP233 cluster bomb which includes munitions to damage (crater) the runway as well as anti-personnel mines in the same cluster bomb. As a result of the anti-personnel mine ban it was withdrawn from British Royal Air Force service, and the last stockpiles of the mine were destroyed on October 19, 1999.
Demining
Metal detectors were first used for demining, after their invention by the Polish officer Józef Kosacki. His invention, known as the Polish mine detector, was used by the Allies alongside mechanical methods to clear the German minefields during the Second Battle of El Alamein, when 500 units were shipped to Field Marshal Montgomery's Eighth Army.
The Nazis used captured civilians who were chased across minefields to detonate the explosives. According to Laurence Rees "Curt von Gottberg, the SS-Obergruppenführer who, during 1943, conducted another huge anti-partisan action called Operation Kottbus on the eastern border of Belarus, reported that 'approximately two to three thousand local people were blown up in the clearing of the minefields'."
Whereas the placing and arming of mines is relatively inexpensive and simple, the process of detecting and removing them is typically expensive, slow, and dangerous. This is especially true of irregular warfare where mines were used on an ad hoc basis in unmarked and undocumented areas. Anti-personnel mines are most difficult to find, due to their small size and many being made almost entirely of non-metallic materials specifically to evade metal detectors.
Manual clearing remains the most effective technique for clearing mine fields, although hybrid techniques involving the use of animals and robots are being developed. Animals with a strong sense of smell capable of detecting a land mine are particularly useful; rats and dogs, for example, can be trained to detect the explosive agent.
Other techniques involve the use of geolocation technologies; a joint team of researchers at the University of New South Wales and Ohio State University has been working to develop a system based on multi-sensor integration. Furthermore, defence firms have been increasingly competing on the creation of unmanned demining systems. In addition to conventional remote-controlled mine-defusing robots that operate through precise mechanical dismantling, electronic destabilization or kinetic triggering methods, fully autonomous methods are in development. Notably, these autonomous methods utilize unmanned ground systems, or more recently subterranean systems such as the EMC Operations Termite, using either outward pressure differentials along system bodies or corkscrew mechanisms.
The laying of land mines has inadvertently led to a positive development in the Falkland Islands. Minefields laid near the sea during the Falklands War became favorite places for penguins, which do not weigh enough to detonate the mines and could therefore breed safely, free of human intrusion. These odd sanctuaries proved so popular and lucrative for ecotourism that there were efforts to prevent removal of the mines, although the area has since been demined.
International treaties
The use of land mines is controversial because they are indiscriminate weapons, harming soldier and civilian alike. They remain dangerous after the conflict in which they were deployed has ended, killing and injuring civilians and rendering land impassable and unusable for decades. To make matters worse, many factions have not kept accurate records (or any at all) of the exact locations of their minefields, making removal efforts painstakingly slow. These facts pose serious difficulties in many developing nations where the presence of mines hampers resettlement, agriculture, and tourism. The International Campaign to Ban Landmines campaigned successfully to prohibit their use, culminating in the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, known informally as the Ottawa Treaty.
The Treaty came into force on March 1, 1999. The treaty was the result of the leadership of the Governments of Canada, Norway, South Africa and Mozambique working with the International Campaign to Ban Landmines, launched in 1992. The campaign and its leader, Jody Williams, won the Nobel Peace Prize in 1997 for its efforts.
The treaty does not include anti-tank mines, cluster bombs or Claymore-type mines operated in command mode and focuses specifically on anti-personnel mines, because these pose the greatest long term (post-conflict) risk to humans and animals since they are typically designed to be triggered by any movement or pressure of only a few kilograms, whereas anti-tank mines require much more weight (or a combination of factors that would exclude humans). Existing stocks must be destroyed within four years of signing the treaty.
Signatories of the Ottawa Treaty agree that they will not use, produce, stockpile or trade in anti-personnel land mines. In 1997, there were 122 signatories; as of early 2016, 162 countries have joined the Treaty. Thirty-six countries, including the People's Republic of China, the Russian Federation and the United States, which together may hold tens of millions of stockpiled anti-personnel mines, are not party to the Convention. Another 34 have yet to sign on. The United States did not sign because the treaty lacks an exception for the Korean Demilitarized Zone.
Article 3 of the Treaty permits countries to retain land mines for use in training or development of countermeasures. Sixty-four countries have taken this option.
As an alternative to an outright ban, 10 countries follow regulations that are contained in a 1996 amendment of Protocol II of the Convention on Conventional Weapons (CCW). The countries are China, Finland, India, Israel, Morocco, Pakistan, South Korea and the United States. Sri Lanka, which had adhered to this regulation, announced in 2016 that it would join the Ottawa Treaty.
Submunitions and unexploded ordnance from cluster munitions can also function as land mines, in that they continue to kill and maim indiscriminately long after conflicts have ended. The Convention on Cluster Munitions (CCM) is an international treaty that prohibits the use, distribution, or manufacture of cluster munitions. The CCM entered into force in 2010, and has been ratified by over 100 countries.
Manufacturers
Before the Ottawa Treaty was adopted, the Arms Project of Human Rights Watch identified "almost 100 companies and government agencies in 48 countries" that had manufactured "more than 340 types of anti-personnel land mines in recent decades". Five to ten million mines were produced per year with a value of $50 to $200 million. The largest producers were probably China, Italy and the Soviet Union. The companies involved included giants such as Daimler-Benz, the Fiat Group, the Daewoo Group, RCA and General Electric.
As of 2017, the Landmine & Cluster Munition Monitor identified four countries that were "likely to be actively producing" land mines: India, Myanmar, Pakistan and South Korea. Another seven states reserved the right to make them but were probably not doing so: China, Cuba, Iran, North Korea, Russia, Singapore, and Vietnam.
In recent years, arms industry manufacturers have been utilizing non-static mines that can be specifically targeted in order to remove the imprecision of anti-personnel devices, promoting the use of movable underground systems, movable above-ground systems and systems that can be expired (automatically or manually via strategic operators).
Development of systems such as Termite, by the arms firm EMC Operations, has drawn criticism from proponents of past multilateral agreements against the placement of land mines and submunitions, particularly after it was announced that the vehicles would likely be armed to destroy static targets rather than focus purely on demining efforts; critics expect similar long-dormancy problems once such systems break or fail.
Impacts
Throughout the world there are millions of hectares that are contaminated with land mines.
Casualties
From 1999 to 2017, the Landmine Monitor has recorded over 120,000 casualties from mines, IEDs and ERW; it estimates that another 1,000 per year go unrecorded. The estimate for all time is over half a million. In 2017, at least 2,793 were killed and 4,431 injured. 87% of the casualties were civilians and 47% were children (less than 18 years old). The largest numbers of casualties were in Afghanistan (2,300), Syria (1,906), and Ukraine (429).
Environmental
Natural disasters can have a significant impact on efforts to demine areas of land. For example, the floods that occurred in Mozambique in 1999 and 2000 may have displaced hundreds of thousands of land mines left from the war. Uncertainty about their locations delayed recovery efforts.
Land degradation
From a study by Asmeret Asefaw Berhe, land degradation caused by land mines "can be classified into five groups: access denial, loss of biodiversity, micro-relief disruption, chemical composition, and loss of productivity". The effects of an explosion depend on: "(i) the objectives and methodological approaches of the investigation; (ii) concentration of mines in a unit area; (iii) chemical composition and toxicity of the mines; (iv) previous uses of the land and (v) alternatives that are available for the affected populations".
Access denial
The most prominent ecological issue associated with land mines (or fear of them) is denial of access to vital resources (where "access" refers to the ability to use resources, in contrast to "property", the right to use them). The presence and fear of presence of even a single land mine can discourage access for agriculture, water supplies and possibly conservation measures. Reconstruction and development of important structures such as schools and hospitals are likely to be delayed, and populations may shift to urban areas, increasing overcrowding and the risk of spreading diseases.
Access denial can have positive effects on the environment. When a mined area becomes a "no-man's land", plants and vegetation have a chance to grow and recover. For example, formerly arable lands in Nicaragua returned to forests and remained undisturbed after the establishment of land mines. Similarly, the penguins of the Falkland Islands have benefited because they are not heavy enough to trigger the mines present. However, these benefits can only last as long as animals, tree limbs, etc. do not detonate the mines. In addition, long idle periods could "potentially end up creating or exacerbating loss of productivity", particularly within land of low quality.
Loss of biodiversity
Land mines can threaten biodiversity by wiping out vegetation and wildlife during explosions or demining. This extra burden can push threatened and endangered species to extinction. They have also been used by poachers to target endangered species. Displaced refugees hunt animals for food and destroy habitat by making shelters.
Shrapnel, or abrasions of bark or roots caused by detonated mines, can cause the slow death of trees and provide entry sites for wood-rotting fungi. When land mines make land unavailable for farming, residents resort to the forests to meet all of their survival needs. This exploitation furthers the loss of biodiversity.
Chemical contamination
Near mines that have exploded or decayed, soils tend to be contaminated, particularly with heavy metals. Products produced from the explosives, both organic and inorganic substances, are most likely to be "long lasting, water-soluble and toxic even in small amounts". They can be implemented either "directly or indirectly into soil, water bodies, microorganisms and plants with drinking water, food products or during respiration".
Toxic compounds can also find their way into bodies of water and accumulate in land animals, fish and plants. They can act "as a nerve poison to hamper growth", with deadly effect.
| Technology | Explosive weapons | null |
18184 | https://en.wikipedia.org/wiki/Lizard | Lizard | Lizard is the common name used for all squamate reptiles other than snakes (and to a lesser extent amphisbaenians), encompassing over 7,000 species, ranging across all continents except Antarctica, as well as most oceanic island chains. The grouping is paraphyletic as some lizards are more closely related to snakes than they are to other lizards. Lizards range in size from chameleons and geckos a few centimeters long to the 3-meter-long Komodo dragon.
Most lizards are quadrupedal, running with a strong side-to-side motion. Some lineages (known as "legless lizards") have secondarily lost their legs, and have long snake-like bodies. Some lizards, such as the forest-dwelling Draco, are able to glide. They are often territorial, the males fighting off other males and signalling, often with bright colours, to attract mates and to intimidate rivals. Lizards are mainly carnivorous, often being sit-and-wait predators; many smaller species eat insects, while the Komodo eats mammals as big as water buffalo.
Lizards make use of a variety of antipredator adaptations, including venom, camouflage, reflex bleeding, and the ability to sacrifice and regrow their tails.
Anatomy
Largest and smallest
The adult length of species within the suborder ranges from a few centimeters for chameleons such as Brookesia micra and geckos such as Sphaerodactylus ariasae to nearly 3 meters in the case of the largest living varanid lizard, the Komodo dragon. Most lizards are fairly small animals.
Distinguishing features
Lizards typically have rounded torsos, elevated heads on short necks, four limbs and long tails, although some are legless. Lizards and snakes share a movable quadrate bone, distinguishing them from the rhynchocephalians, which have more rigid diapsid skulls. Some lizards such as chameleons have prehensile tails, assisting them in climbing among vegetation.
As in other reptiles, the skin of lizards is covered in overlapping scales made of keratin. This provides protection from the environment and reduces water loss through evaporation. This adaptation enables lizards to thrive in some of the driest deserts on earth. The skin is tough and leathery, and is shed (sloughed) as the animal grows. Unlike snakes which shed the skin in a single piece, lizards slough their skin in several pieces. The scales may be modified into spines for display or protection, and some species have bone osteoderms underneath the scales.
The dentitions of lizards reflect their wide range of diets, including carnivorous, insectivorous, omnivorous, herbivorous, nectivorous, and molluscivorous. Species typically have uniform teeth suited to their diet, but several species have variable teeth, such as cutting teeth in the front of the jaws and crushing teeth in the rear. Most species are pleurodont, though agamids and chameleons are acrodont.
The tongue can be extended outside the mouth, and is often long. In the beaded lizards, whiptails and monitor lizards, the tongue is forked and used mainly or exclusively to sense the environment, continually flicking out to sample the environment, and back to transfer molecules to the vomeronasal organ responsible for chemosensation, analogous to but different from smell or taste. In geckos, the tongue is used to lick the eyes clean: they have no eyelids. Chameleons have very long sticky tongues which can be extended rapidly to catch their insect prey.
Three lineages, the geckos, anoles, and chameleons, have modified the scales under their toes to form adhesive pads, highly prominent in the first two groups. The pads are composed of millions of tiny setae (hair-like structures) which fit closely to the substrate to adhere using van der Waals forces; no liquid adhesive is needed. In addition, the toes of chameleons are divided into two opposed groups on each foot (zygodactyly), enabling them to perch on branches as birds do.
Physiology
Locomotion
Aside from legless lizards, most lizards are quadrupedal and move using gaits with alternating movement of the right and left limbs with substantial body bending. This body bending prevents significant respiration during movement, limiting their endurance, in a mechanism called Carrier's constraint. Several species can run bipedally, and a few can prop themselves up on their hindlimbs and tail while stationary. Several small species such as those in the genus Draco can glide: some can attain a distance of , losing in height. Some species, like geckos and chameleons, adhere to vertical surfaces including glass and ceilings. Some species, like the common basilisk, can run across water.
Senses
Lizards make use of their senses of sight, touch, olfaction and hearing like other vertebrates. The balance of these varies with the habitat of different species; for instance, skinks that live largely covered by loose soil rely heavily on olfaction and touch, while geckos depend largely on acute vision for their ability to hunt and to evaluate the distance to their prey before striking. Monitor lizards have acute vision, hearing, and olfactory senses. Some lizards make unusual use of their sense organs: chameleons can steer their eyes in different directions, sometimes providing non-overlapping fields of view, such as forwards and backwards at once. Lizards lack external ears, having instead a circular opening in which the tympanic membrane (eardrum) can be seen. Many species rely on hearing for early warning of predators, and flee at the slightest sound.
As in snakes and many mammals, all lizards have a specialised olfactory system, the vomeronasal organ, used to detect pheromones. Monitor lizards transfer scent from the tip of their tongue to the organ; the tongue is used only for this information-gathering purpose, and is not involved in manipulating food.
Some lizards, particularly iguanas, have retained a photosensory organ on the top of their heads called the parietal eye, a basal ("primitive") feature also present in the tuatara. This "eye" has only a rudimentary retina and lens and cannot form images, but is sensitive to changes in light and dark and can detect movement. This helps them detect predators stalking them from above.
Venom
Until 2006 it was thought that the Gila monster and the Mexican beaded lizard were the only venomous lizards. However, several species of monitor lizards, including the Komodo dragon, produce powerful venom in their oral glands. Lace monitor venom, for instance, causes swift loss of consciousness and extensive bleeding through its pharmacological effects, both lowering blood pressure and preventing blood clotting. Nine classes of toxin known from snakes are produced by lizards. The range of actions provides the potential for new medicinal drugs based on lizard venom proteins.
Genes associated with venom toxins have been found in the salivary glands of a wide range of lizards, including species traditionally thought of as non-venomous, such as iguanas and bearded dragons. This suggests that these genes evolved in the common ancestor of lizards and snakes, some 200 million years ago (forming a single clade, the Toxicofera). However, most of these putative venom genes were "housekeeping genes" found in all cells and tissues, including skin and cloacal scent glands. The genes in question may thus be evolutionary precursors of venom genes.
Respiration
Recent studies (2013 and 2014) on the lung anatomy of the savannah monitor and green iguana found them to have a unidirectional airflow system, which involves the air moving in a loop through the lungs when breathing. This was previously thought to only exist in the archosaurs (crocodilians and birds). This may be evidence that unidirectional airflow is an ancestral trait in diapsids.
Reproduction and life cycle
As with all amniotes, lizards rely on internal fertilisation and copulation involves the male inserting one of his hemipenes into the female's cloaca. Female lizards also have hemiclitorises, a doubled clitoris. The majority of species are oviparous (egg laying). The female deposits the eggs in a protective structure like a nest or crevice or simply on the ground. Depending on the species, clutch size can vary from 4–5 percent of the female's body weight to 40–50 percent, and clutches range from one or a few large eggs to dozens of small ones.
In most lizards, the eggs have leathery shells to allow for the exchange of water, although more arid-living species have calcified shells to retain water. Inside the eggs, the embryos use nutrients from the yolk. Parental care is uncommon and the female usually abandons the eggs after laying them. Brooding and protection of eggs do occur in some species. The female prairie skink uses respiratory water loss to maintain the humidity of the eggs, which facilitates embryonic development. In lace monitors, the young hatch after close to 300 days, and the female returns to help them escape the termite mound where the eggs were laid.
Around 20 percent of lizard species reproduce via viviparity (live birth). This is particularly common in Anguimorphs. Viviparous species give birth to relatively developed young which look like miniature adults. Embryos are nourished via a placenta-like structure. A minority of lizards have parthenogenesis (reproduction from unfertilised eggs). These species consist of all females who reproduce asexually with no need for males. This is known to occur in various species of whiptail lizards. Parthenogenesis was also recorded in species that normally reproduce sexually. A captive female Komodo dragon produced a clutch of eggs, despite being separated from males for over two years.
Sex determination in lizards can be temperature-dependent. The temperature of the eggs' micro-environment can determine the sex of the hatched young: low temperature incubation produces more females while higher temperatures produce more males. However, some lizards have sex chromosomes and both male heterogamety (XY and XXY) and female heterogamety (ZW) occur.
Aging
A significant component of aging in the painted dragon lizard Ctenophorus pictus is the fading of breeding colors. By manipulating superoxide levels (using a superoxide dismutase mimetic), it was shown that this fading coloration is likely due to a gradual loss, with age, of an innate capacity for antioxidation, associated with increasing DNA damage.
Behaviour
Diurnality and thermoregulation
The majority of lizard species are active during the day, though some are active at night, notably geckos. As ectotherms, lizards have a limited ability to regulate their body temperature, and must seek out and bask in sunlight to gain enough heat to become fully active. Thermoregulation behavior can be beneficial in the short term for lizards, as it allows them to buffer environmental variation and endure climate warming.
At high altitudes, Podarcis hispanicus responds to higher temperatures with a darker dorsal coloration, which protects against UV radiation and improves background matching. Their thermoregulatory mechanisms also allow the lizards to maintain their ideal body temperature for optimal mobility.
Territoriality
Most social interactions among lizards are between breeding individuals. Territoriality is common and is correlated with species that use sit-and-wait hunting strategies. Males establish and maintain territories that contain resources that attract females and which they defend from other males. Important resources include basking, feeding, and nesting sites as well as refuges from predators. The habitat of a species affects the structure of territories, for example, rock lizards have territories atop rocky outcrops. Some species may aggregate in groups, enhancing vigilance and lessening the risk of predation for individuals, particularly for juveniles. Agonistic behaviour typically occurs between sexually mature males over territory or mates and may involve displays, posturing, chasing, grappling and biting.
Communication
Lizards signal both to attract mates and to intimidate rivals. Visual displays include body postures and inflation, push-ups, bright colours, mouth gaping and tail wagging. Male anoles and iguanas have dewlaps or skin flaps which come in various sizes, colours and patterns and the expansion of the dewlap as well as head-bobs and body movements add to the visual signals. Some species have deep blue dewlaps and communicate with ultraviolet signals. Blue-tongued skinks will flash their tongues as a threat display. Chameleons are known to change their complex colour patterns when communicating, particularly during agonistic encounters. They tend to show brighter colours when displaying aggression and darker colours when they submit or "give up".
Several gecko species are brightly coloured; some species tilt their bodies to display their coloration. In certain species, brightly coloured males turn dull when not in the presence of rivals or females. While it is usually males that display, in some species females also use such communication. In the bronze anole, head-bobs are a common form of communication among females, the speed and frequency varying with age and territorial status. Chemical cues or pheromones are also important in communication. Males typically direct signals at rivals, while females direct them at potential mates. Lizards may be able to recognise individuals of the same species by their scent.
Acoustic communication is less common in lizards. Hissing, a typical reptilian sound, is mostly produced by larger species as part of a threat display, accompanying gaping jaws. Some groups, particularly geckos, snake-lizards, and some iguanids, can produce more complex sounds and vocal apparatuses have independently evolved in different groups. These sounds are used for courtship, territorial defense and in distress, and include clicks, squeaks, barks and growls. The mating call of the male tokay gecko is heard as "tokay-tokay!". Tactile communication involves individuals rubbing against each other, either in courtship or in aggression. Some chameleon species communicate with one another by vibrating the substrate that they are standing on, such as a tree branch or leaf.
Ecology
Distribution and habitat
Lizards are found worldwide, excluding the far north and Antarctica, and some islands. They can be found in elevations from sea level to . They prefer warmer, tropical climates but are adaptable and can live in all but the most extreme environments. Lizards also exploit a number of habitats; most primarily live on the ground, but others may live in rocks, on trees, underground and even in water. The marine iguana is adapted for life in the sea.
Diet
The majority of lizard species are predatory and the most common prey items are small, terrestrial invertebrates, particularly insects. Many species are sit-and-wait predators though others may be more active foragers. Chameleons prey on numerous insect species, such as beetles, grasshoppers and winged termites as well as spiders. They rely on persistence and ambush to capture these prey. An individual perches on a branch and stays perfectly still, with only its eyes moving. When an insect lands, the chameleon focuses its eyes on the target and slowly moves toward it before projecting its long sticky tongue which, when hauled back, brings the attached prey with it. Geckos feed on crickets, beetles, termites and moths.
Termites are an important part of the diets of some species of Autarchoglossa, since, as social insects, they can be found in large numbers in one spot. Ants may form a prominent part of the diet of some lizards, particularly among the lacertas. Horned lizards are also well known for specializing on ants. Due to their small size and indigestible chitin, ants must be consumed in large amounts, and ant-eating lizards have larger stomachs than even herbivorous ones. Species of skink and alligator lizards eat snails, and their powerful jaws and molar-like teeth are adapted for breaking the shells.
Larger species, such as monitor lizards, can feed on larger prey including fish, frogs, birds, mammals and other reptiles. Prey may be swallowed whole or torn into smaller pieces. Both bird and reptile eggs may also be consumed. Gila monsters and beaded lizards climb trees to reach both the eggs and young of birds. Despite being venomous, these species rely on their strong jaws to kill prey. Mammalian prey typically consists of rodents and leporids; the Komodo dragon can kill prey as large as water buffalo. Dragons are prolific scavengers, and a single decaying carcass can attract several from away. A dragon is capable of consuming a carcass in 17 minutes.
Around 2 percent of lizard species, including many iguanids, are herbivores. Adults of these species eat plant parts like flowers, leaves, stems and fruit, while juveniles eat more insects. Plant parts can be hard to digest, and, as they get closer to adulthood, juvenile iguanas eat faeces from adults to acquire the microflora necessary for their transition to a plant-based diet. Perhaps the most herbivorous species is the marine iguana which dives to forage for algae, kelp and other marine plants. Some non-herbivorous species supplement their insect diet with fruit, which is easily digested.
Antipredator adaptations
Lizards have a variety of antipredator adaptations, including running and climbing, venom, camouflage, tail autotomy, and reflex bleeding.
Camouflage
Lizards exploit a variety of different camouflage methods. Many lizards are disruptively patterned. In some species, such as Aegean wall lizards, individuals vary in colour, and select rocks which best match their own colour to minimise the risk of being detected by predators. The Moorish gecko is able to change colour for camouflage: when a light-coloured gecko is placed on a dark surface, it darkens within an hour to match the environment. The chameleons in general use their ability to change their coloration for signalling rather than camouflage, but some species such as Smith's dwarf chameleon do use active colour change for camouflage purposes.
The flat-tail horned lizard's body is coloured like its desert background, and is flattened and fringed with white scales to minimise its shadow.
Autotomy
Many lizards, including geckos and skinks, are capable of shedding their tails (autotomy). The detached tail, sometimes brilliantly coloured, continues to writhe after detaching, distracting the predator's attention from the fleeing prey. Lizards partially regenerate their tails over a period of weeks. Some 326 genes are involved in regenerating lizard tails. The fish-scale gecko Geckolepis megalepis sheds patches of skin and scales if grabbed.
Escape, playing dead, reflex bleeding
Many lizards attempt to escape from danger by running to a place of safety; for example, wall lizards can run up walls and hide in holes or cracks. Horned lizards adopt differing defences for specific predators. They may play dead to deceive a predator that has caught them; attempt to outrun the rattlesnake, which does not pursue prey; but stay still, relying on their cryptic coloration, for Masticophis whip snakes which can catch even swift prey. If caught, some species such as the greater short-horned lizard puff themselves up, making their bodies hard for a narrow-mouthed predator like a whip snake to swallow. Finally, horned lizards can squirt blood at cat and dog predators from a pouch beneath their eyes, to a distance of about ; the blood tastes foul to these attackers.
Evolution
Fossil history
The closest living relatives of lizards are the rhynchocephalians, a once diverse order of reptiles of which there is now only one living species, the tuatara of New Zealand. Some reptiles from the Early and Middle Triassic, like Sophineta and Megachirella, are suggested to be stem-group squamates, more closely related to modern lizards than to rhynchocephalians; however, their position is disputed, with some studies recovering them as less closely related to squamates than rhynchocephalians are. The oldest undisputed lizards date to the Middle Jurassic, from remains found in Europe, Asia and North Africa. Lizard morphological and ecological diversity substantially increased over the course of the Cretaceous. In the Palaeogene, lizard body sizes in North America peaked during the middle of the period.
Mosasaurs likely evolved from an extinct group of aquatic lizards known as aigialosaurs in the Early Cretaceous. Dolichosauridae is a family of Late Cretaceous aquatic varanoid lizards closely related to the mosasaurs.
Phylogeny
External
The position of the lizards and other Squamata among the reptiles was studied using fossil evidence by Rainer Schoch and Hans-Dieter Sues in 2015. Lizards form about 60% of the extant non-avian reptiles.
Internal
Both the snakes and the Amphisbaenia (worm lizards) are clades deep within the Squamata (the smallest clade that contains all the lizards), so "lizard" is paraphyletic.
The cladogram is based on genomic analysis by Wiens and colleagues in 2012 and 2016. Excluded taxa are shown in upper case on the cladogram.
Taxonomy
In the 13th century, lizards were recognized in Europe as part of a broad category of reptiles that consisted of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, […], assorted amphibians, and worms", as recorded by Vincent of Beauvais in his Mirror of Nature. The seventeenth century saw changes in this loose description. The name Sauria was coined by James Macartney (1802); it was the Latinisation of the French name Sauriens, coined by Alexandre Brongniart (1800) for an order of reptiles in the classification proposed by the author, containing lizards and crocodilians, later discovered not to be each other's closest relatives. Later authors used the term "Sauria" in a more restricted sense, i.e. as a synonym of Lacertilia, a suborder of Squamata that includes all lizards but excludes snakes. This classification is rarely used today because Sauria so-defined is a paraphyletic group. It was defined as a clade by Jacques Gauthier, Arnold G. Kluge and Timothy Rowe (1988) as the group containing the most recent common ancestor of archosaurs and lepidosaurs (the groups containing crocodiles and lizards, as per Macartney's original definition) and all its descendants. A different definition was formulated by Michael deBraga and Olivier Rieppel (1997), who defined Sauria as the clade containing the most recent common ancestor of Choristodera, Archosauromorpha, Lepidosauromorpha and all their descendants. However, these uses have not gained wide acceptance among specialists.
Suborder Lacertilia (Sauria) – (lizards)
Family †Bavarisauridae
Family †Eichstaettisauridae
Infraorder Iguanomorpha
Family †Arretosauridae
Family †Euposauridae
Family Corytophanidae (casquehead lizards)
Family Iguanidae (iguanas and spinytail iguanas)
Family Phrynosomatidae (earless, spiny, tree, side-blotched and horned lizards)
Family Polychrotidae (anoles)
Family Leiosauridae (see Polychrotinae)
Family Tropiduridae (neotropical ground lizards)
Family Liolaemidae (see Tropidurinae)
Family Leiocephalidae (see Tropidurinae)
Family Crotaphytidae (collared and leopard lizards)
Family Opluridae (Madagascar iguanids)
Family Hoplocercidae (wood lizards, clubtails)
Family †Priscagamidae
Family †Isodontosauridae
Family Agamidae (agamas, frilled lizards)
Family Chamaeleonidae (chameleons)
Infraorder Gekkota
Family Gekkonidae (geckos)
Family Pygopodidae (legless geckos)
Family Dibamidae (blind lizards)
Infraorder Scincomorpha
Family †Paramacellodidae
Family †Slavoiidae
Family Scincidae (skinks)
Family Cordylidae (spinytail lizards)
Family Gerrhosauridae (plated lizards)
Family Xantusiidae (night lizards)
Family Lacertidae (wall lizards or true lizards)
Family †Mongolochamopidae
Family †Adamisauridae
Family Teiidae (tegus and whiptails)
Family Gymnophthalmidae (spectacled lizards)
Infraorder Diploglossa
Family Anguidae (slowworms, glass lizards)
Family Anniellidae (American legless lizards)
Family Xenosauridae (knob-scaled lizards)
Infraorder Platynota (Varanoidea)
Family Varanidae (monitor lizards)
Family Lanthanotidae (earless monitor lizards)
Family Helodermatidae (Gila monsters and beaded lizards)
Family †Mosasauridae (marine lizards)
Convergence
Lizards have frequently evolved convergently, with multiple groups independently developing similar morphology and ecological niches. Anolis ecomorphs have become a model system in evolutionary biology for studying convergence. Limbs have been lost or reduced independently over two dozen times across lizard evolution, including in the Anniellidae, Anguidae, Cordylidae, Dibamidae, Gymnophthalmidae, Pygopodidae, and Scincidae; snakes are just the most famous and species-rich group of Squamata to have followed this path.
Relationship with humans
Interactions and uses by humans
Most lizard species are harmless to humans. Only the largest lizard species, the Komodo dragon, which reaches in length and weighs up to , has been known to stalk, attack, and, on occasion, kill humans. An eight-year-old Indonesian boy died from blood loss after an attack in 2007.
Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko). Monitor lizards such as the savannah monitor and tegus such as the Argentine tegu and red tegu are also kept.
Green iguanas are eaten in Central America, where they are sometimes referred to as "chicken of the tree" after their habit of resting in trees and their supposedly chicken-like taste, while spiny-tailed lizards are eaten in Africa. In North Africa, Uromastyx species are considered dhaab or 'fish of the desert' and eaten by nomadic tribes. Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesized for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
In culture
Lizards appear in myths and folktales around the world. In Australian Aboriginal mythology, Tarrotarro, the lizard god, split the human race into male and female, and gave people the ability to express themselves in art. A lizard king named Mo'o features in Hawaii and other cultures in Polynesia. In the Amazon, the lizard is the king of beasts, while among the Bantu of Africa, the god UNkulunkulu sent a chameleon to tell humans they would live forever, but the chameleon was held up, and another lizard brought a different message, that the time of humanity was limited. A popular legend in Maharashtra tells the tale of how a common Indian monitor, with ropes attached, was used to scale the walls of the fort in the Battle of Sinhagad. In the Bhojpuri-speaking region of India and Nepal, there is a belief among children that touching a skink's tail three (or five) times with the shortest finger brings money.
Lizards in many cultures share the symbolism of snakes, especially as an emblem of resurrection. This may have derived from their regular molting. The motif of lizards on Christian candle holders probably alludes to the same symbolism. According to Jack Tresidder, in Egypt and the Classical world, they were beneficial emblems, linked with wisdom. In African, Aboriginal and Melanesian folklore they are linked to cultural heroes or ancestral figures.
| Biology and health sciences | Reptiles | null |
18203 | https://en.wikipedia.org/wiki/Lambda%20calculus | Lambda calculus | Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules:
x: A variable is a character or string representing a parameter.
(λx.M): A lambda abstraction is a function definition, taking as input the bound variable x (between the λ and the punctum/dot .) and returning the body M.
(M N): An application, applying a function M to an argument N. Both M and N are lambda terms.
The reduction operations include:
(λx.M[x]) → (λy.M[y]): α-conversion, renaming the bound variables in the expression. Used to avoid name collisions.
((λx.M) N) → (M[x := N]): β-reduction, replacing the bound variables with the argument expression in the body of the abstraction.
If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form.
Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations.
Explanation and applications
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science.
Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.
History
Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus. In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.
Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics and computer science.
Origin of the λ symbol
There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation “λ”? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation “x̂” used for class-abstraction by Whitehead and Russell, by first modifying “x̂” to “∧x” to distinguish function-abstraction from class-abstraction, and then changing “∧” to “λ” for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scott has also addressed this question in various public lectures.
Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
Informal description
Motivation
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple.
The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function
square_sum(x, y) = x² + y²
can be rewritten in anonymous form as
(x, y) ↦ x² + y²
(which is read as "a tuple of x and y is mapped to x² + y²"). Similarly, the function
id(x) = x
can be rewritten in anonymous form as
x ↦ x
where the input is simply mapped to itself.
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example,
(x, y) ↦ x² + y²
can be reworked into
x ↦ (y ↦ x² + y²)
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
Function application of the square_sum function to the arguments (5, 2) yields at once
((x, y) ↦ x² + y²)(5, 2) = 5² + 2² = 29,
whereas evaluation of the curried version requires one more step
((x ↦ (y ↦ x² + y²))(5))(2) = (y ↦ 5² + y²)(2) // the definition of square_sum has been used with x = 5 in the inner expression. This is like β-reduction.
= 5² + 2² = 29 // the definition of the outer function has been used with y = 2. Again, similar to β-reduction.
to arrive at the same result.
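The same reworking can be sketched in an ordinary programming language. The short Python fragment below is only an illustration (the names square_sum and square_sum_curried are hypothetical, not part of the calculus); it contrasts a two-argument function with its curried form.

```python
# Illustrative sketch of currying; the function names are hypothetical.
def square_sum(x, y):
    # ordinary two-argument function: (x, y) -> x**2 + y**2
    return x**2 + y**2

def square_sum_curried(x):
    # curried form: x -> (y -> x**2 + y**2)
    return lambda y: x**2 + y**2

assert square_sum(5, 2) == 29          # applied to the pair (5, 2) at once
assert square_sum_curried(5)(2) == 29  # applied to 5 first, then to 2
```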
The lambda calculus
The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition.
As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables.
Lambda terms
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term".
The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms:
a variable x is itself a valid lambda term;
if M is a lambda term, and x is a variable, then (λx.M) is a lambda term (called an abstraction);
if M and N are lambda terms, then (M N) is a lambda term (called an application).
Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See § Notation, below, for an explicit description of which parentheses are optional. It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such as λx.x². The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes.
An abstraction λx.M denotes an anonymous function that takes a single input x and returns M. For example, λx.(x² + 2) is an abstraction representing the function f defined by f(x) = x² + 2, using the term x² + 2 for M. The name f is superfluous when using abstraction. The syntax λx.M binds the variable x in the term M. The definition of a function with an abstraction merely "sets up" the function but does not invoke it.
An application (M N) represents the application of a function M to an input N, that is, it represents the act of calling function M on input N to produce M(N).
A lambda term may refer to a variable that has not been bound, such as the term λx.(x + y) (which represents the function definition f(x) = x + y). In this term, the variable y has not been defined and is considered an unknown. The abstraction λx.(x + y) is a syntactically valid term and represents a function that adds its input to the yet-unknown y.
Parentheses may be used and might be needed to disambiguate terms. For example,
λx.((λx.x) x) is of form λx.B and is therefore an abstraction, while
(λx.(λx.x)) x is of form M N and is therefore an application.
The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variable is a placeholder in both examples.
Here, example 1 defines a function λx.B, where B is (λx.x) x, an anonymous function (λx.x) applied to the input x; while example 2, M N, is M applied to N, where M is the abstraction λx.(λx.x) being applied to the input N, which is x. Both examples 1 and 2 would evaluate to the identity function λx.x.
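To make the three formation rules concrete, lambda terms can be represented as a small abstract syntax tree. The Python sketch below is one possible encoding, not a standard one; the constructor names Var, Abs and App are an arbitrary choice, and later sketches in this section reuse the same encoding.

```python
from collections import namedtuple

# One possible encoding of the three kinds of lambda terms (names are illustrative).
Var = namedtuple("Var", "name")        # a variable, e.g. x
Abs = namedtuple("Abs", "param body")  # an abstraction, λx.M
App = namedtuple("App", "fn arg")      # an application, (M N)

identity = Abs("x", Var("x"))             # λx.x
apply_identity = App(identity, Var("y"))  # (λx.x) y, an application of an abstraction
```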
Functions that operate on functions
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions.
For example, the lambda term λx.x represents the identity function, x ↦ x. Further, λx.y represents the constant function x ↦ y, the function that always returns y, no matter the input. As an example of a function operating on functions, the function composition can be defined as λf.λg.λx.f (g x).
There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms.
Alpha equivalence
A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter.
For instance, λx.x and λy.y are alpha-equivalent lambda terms, and they both represent the same function (the identity function).
The terms x and y are not alpha-equivalent, because they are not bound in an abstraction.
In many presentations, it is usual to identify alpha-equivalent lambda terms.
The following definitions are necessary in order to be able to define β-reduction:
Free variables
The free variables of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively:
The free variables of x are just x.
The set of free variables of λx.M is the set of free variables of M, but with x removed.
The set of free variables of M N is the union of the set of free variables of M and the set of free variables of N.
For example, the lambda term λx.x representing the identity has no free variables, but the function λx.y x has a single free variable, y.
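The inductive definition translates directly into a recursive function. The sketch below repeats the illustrative Var/Abs/App encoding so that it is self-contained; free_vars is a hypothetical helper name.

```python
from collections import namedtuple

Var = namedtuple("Var", "name")
Abs = namedtuple("Abs", "param body")
App = namedtuple("App", "fn arg")

def free_vars(term):
    """Set of free variables of a term, following the inductive definition above."""
    if isinstance(term, Var):
        return {term.name}                              # FV(x) = {x}
    if isinstance(term, Abs):
        return free_vars(term.body) - {term.param}      # FV(λx.M) = FV(M) \ {x}
    return free_vars(term.fn) | free_vars(term.arg)     # FV(M N) = FV(M) ∪ FV(N)

assert free_vars(Abs("x", Var("x"))) == set()                  # λx.x is closed
assert free_vars(Abs("x", App(Var("y"), Var("x")))) == {"y"}   # y is free in λx.y x
```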
Capture-avoiding substitutions
Suppose M, N and P are lambda terms, and x and y are variables.
The notation M[x := N] indicates substitution of N for x in M in a capture-avoiding manner. This is defined so that:
x[x := N] = N; with N substituted for x, x becomes N
y[x := N] = y, if x ≠ y; with N substituted for x, y (which is not x) remains y
(M P)[x := N] = (M[x := N]) (P[x := N]); substitution distributes to both sides of an application
(λx.M)[x := N] = λx.M; a variable bound by an abstraction is not subject to substitution; substituting such a variable leaves the abstraction unchanged
(λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y does not appear among the free variables of N (y is said to be "fresh" for N); substituting a variable which is not bound by an abstraction proceeds in the abstraction's body, provided that the abstracted variable y is "fresh" for the substitution term N.
For example, (λy.x)[x := z] = λy.z, and ((λx.y) x)[x := z] = (λx.y) z.
The freshness condition (requiring that y is not in the free variables of N) is crucial in order to ensure that substitution does not change the meaning of functions.
For example, a substitution that ignores the freshness condition could lead to errors: (λx.y)[y := x] = λx.(y[y := x]) = λx.x. This erroneous substitution would turn the constant function λx.y into the identity λx.x.
In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable.
For example, switching back to our correct notion of substitution, in (λx.y)[y := x] the bound variable x of the abstraction can be renamed with a fresh variable z, to obtain (λz.y)[y := x] = λz.(y[y := x]) = λz.x, and the meaning of the function is preserved by substitution.
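One possible implementation of these rules, including alpha-renaming when the freshness condition fails, is sketched below. It again uses the illustrative Var/Abs/App encoding, and the helper names substitute and fresh are assumptions of the sketch rather than established terminology.

```python
from collections import namedtuple
from itertools import count

Var = namedtuple("Var", "name")
Abs = namedtuple("Abs", "param body")
App = namedtuple("App", "fn arg")

def free_vars(t):
    if isinstance(t, Var): return {t.name}
    if isinstance(t, Abs): return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

def fresh(avoid):
    # choose a variable name that does not occur in `avoid`
    return next(f"v{i}" for i in count() if f"v{i}" not in avoid)

def substitute(t, x, n):
    """Capture-avoiding substitution t[x := n]."""
    if isinstance(t, Var):
        return n if t.name == x else t                 # x[x := N] = N ; y[x := N] = y
    if isinstance(t, App):                             # substitution distributes
        return App(substitute(t.fn, x, n), substitute(t.arg, x, n))
    if t.param == x:                                   # bound variable: leave unchanged
        return t
    if t.param in free_vars(n):                        # freshness fails: alpha-rename first
        z = fresh(free_vars(n) | free_vars(t.body) | {x})
        return Abs(z, substitute(substitute(t.body, t.param, Var(z)), x, n))
    return Abs(t.param, substitute(t.body, x, n))      # freshness holds: recurse into body

# (λx.y)[y := x] alpha-renames the binder instead of producing the identity:
assert substitute(Abs("x", Var("y")), "y", Var("x")) == Abs("v0", Var("x"))
```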
β-reduction
The β-reduction rule states that an application of the form (λx.M) N reduces to the term M[x := N]. The notation (λx.M) N → M[x := N] is used to indicate that (λx.M) N β-reduces to M[x := N].
For example, for every N, (λx.x) N → x[x := N] = N. This demonstrates that λx.x really is the identity.
Similarly, (λx.y) N → y[x := N] = y, which demonstrates that λx.y is a constant function.
The lambda calculus may be seen as an idealized version of a functional programming language, like Haskell or Standard ML. Under this view, β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the term Ω = (λx.x x) (λx.x x).
Here (λx.x x) (λx.x x) → (x x)[x := λx.x x] = (λx.x x) (λx.x x).
That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate.
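The behaviour of Ω can be checked mechanically with a one-step reducer. The following sketch (same illustrative encoding; a naive substitution suffices here because Ω contains no free variables) contracts the leftmost redex and confirms that Ω reduces to itself.

```python
from collections import namedtuple

Var = namedtuple("Var", "name")
Abs = namedtuple("Abs", "param body")
App = namedtuple("App", "fn arg")

def substitute(t, x, n):
    # naive (not capture-avoiding) substitution; sufficient for the closed term below
    if isinstance(t, Var): return n if t.name == x else t
    if isinstance(t, App): return App(substitute(t.fn, x, n), substitute(t.arg, x, n))
    return t if t.param == x else Abs(t.param, substitute(t.body, x, n))

def beta_step(t):
    """One leftmost (normal-order) β-reduction step, or None if t has no redex."""
    if isinstance(t, App) and isinstance(t.fn, Abs):       # (λx.M) N is a redex
        return substitute(t.fn.body, t.fn.param, t.arg)    # contract it to M[x := N]
    if isinstance(t, App):
        s = beta_step(t.fn)
        if s is not None:
            return App(s, t.arg)
        s = beta_step(t.arg)
        return None if s is None else App(t.fn, s)
    if isinstance(t, Abs):
        s = beta_step(t.body)
        return None if s is None else Abs(t.param, s)
    return None

self_apply = Abs("x", App(Var("x"), Var("x")))   # λx.x x
omega = App(self_apply, self_apply)              # (λx.x x) (λx.x x)
assert beta_step(omega) == omega                 # Ω β-reduces to itself in one step
```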
Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects.
Formal definition
Definition
Lambda expressions are composed of:
variables v1, v2, ...;
the abstraction symbols λ (lambda) and . (dot);
parentheses ().
The set of lambda expressions, Λ, can be defined inductively:
If x is a variable, then x ∈ Λ.
If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ.
If M, N ∈ Λ, then (M N) ∈ Λ.
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications. See § reducible expression
This set of rules may be written in Backus–Naur form as:
<expression> ::= <abstraction> | <application> | <variable>
<abstraction> ::= λ <variable> . <expression>
<application> ::= ( <expression> <expression> )
<variable> ::= v1 | v2 | ...
Notation
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied (a small printer applying most of them is sketched after this list):
Outermost parentheses are dropped: M N instead of (M N).
Applications are assumed to be left associative: M N P may be written instead of ((M N) P).
When all variables are single-letter, the space in applications may be omitted: MNP instead of M N P.
The body of an abstraction extends as far right as possible: λx.M N means λx.(M N) and not (λx.M) N.
A sequence of abstractions is contracted: λx.λy.λz.N is abbreviated as λxyz.N.
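As an informal illustration, the parenthesization conventions can be encoded in a small printer over the Var/Abs/App terms used in the earlier sketches. The function below is an assumption of this sketch, and it omits the final convention (contracting nested abstractions) for simplicity.

```python
from collections import namedtuple

Var = namedtuple("Var", "name")
Abs = namedtuple("Abs", "param body")
App = namedtuple("App", "fn arg")

def show(t, pos="top"):
    """Render a term, adding parentheses only where the conventions require them."""
    if isinstance(t, Var):
        return t.name
    if isinstance(t, Abs):
        s = f"λ{t.param}.{show(t.body)}"                 # the body extends as far right as possible
        return f"({s})" if pos in ("fn", "arg") else s   # parenthesize inside applications
    s = f"{show(t.fn, 'fn')} {show(t.arg, 'arg')}"
    return f"({s})" if pos == "arg" else s               # applications associate to the left

i = Abs("x", Var("x"))
assert show(App(App(Var("M"), Var("N")), Var("P"))) == "M N P"    # outer and left parentheses dropped
assert show(App(Var("M"), App(Var("N"), Var("P")))) == "M (N P)"  # right nesting kept explicit
assert show(App(i, Var("N"))) == "(λx.x) N"                       # abstraction as the function part
assert show(Abs("x", App(Var("M"), Var("N")))) == "λx.M N"        # λx.M N means λx.(M N)
```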
Free and bound variables
The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. In an expression λx.M, the part λx is often called binder, as a hint that the variable x is getting bound by prepending λx to M. All other variables are called free. For example, in the expression λy.x x y, y is a bound variable and x is a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: λx.y (λx.z x).
The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows:
FV(x) = {x}, where x is a variable.
FV(λx.M) = FV(M) \ {x}.
FV(M N) = FV(M) ∪ FV(N).
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
Reduction
The meaning of lambda expressions is defined by how expressions can be reduced.
There are three kinds of reduction:
α-conversion: changing bound variables;
β-reduction: applying functions to their arguments;
η-reduction: which captures a notion of extensionality.
We also speak of the resulting equivalences: two expressions are α-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M) N is a β-redex in expressing the substitution of N for x in M. The expression to which a redex reduces is called its reduct; the reduct of (λx.M) N is M[x := N].
If x is not free in M, λx.M x is also an η-redex, with a reduct of M.
α-conversion
α-conversion (alpha-conversion), sometimes known as α-renaming, allows bound variable names to be changed. For example, α-conversion of λx.x might yield λy.y. Terms that differ only by α-conversion are called α-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.x could result in λy.λx.x, but it could not result in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing.
Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace x with y in λx.λy.x, we get λy.λy.y, which is not at all the same.
In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial).
In the De Bruijn index notation, any two α-equivalent terms are syntactically identical.
Substitution
Substitution, written M[x := N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression):
x[x := N] = N
y[x := N] = y, if x ≠ y
(M1 M2)[x := N] = M1[x := N] M2[x := N]
(λx.M)[x := N] = λx.M
(λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N) (see the definition of FV above).
To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y := x] to result in λx.x, because the substituted x was supposed to be free but ended up being bound. The correct substitution in this case is λz.x, up to α-equivalence. Substitution is defined uniquely up to α-equivalence (see the discussion of capture-avoiding substitution above).
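Continuing the Python sketch, capture-avoiding substitution can be implemented by α-converting an abstraction whenever its bound variable would capture a free variable of the term being substituted in. The helper fresh is a hypothetical name generator introduced only for this illustration.

import itertools

def fresh(avoid):
    # Return a variable name not occurring in the set `avoid`.
    for i in itertools.count():
        name = f"v{i}"
        if name not in avoid:
            return name

def substitute(t, x, n):
    # M[x := N]: replace free occurrences of x in t with n, avoiding capture.
    if isinstance(t, Var):
        return n if t.name == x else t
    if isinstance(t, App):
        return App(substitute(t.func, x, n), substitute(t.arg, x, n))
    if isinstance(t, Abs):
        if t.param == x:                     # (λx.M)[x := N] = λx.M
            return t
        if t.param in free_vars(n):          # would capture: α-convert first
            new = fresh(free_vars(n) | free_vars(t.body) | {x})
            renamed = substitute(t.body, t.param, Var(new))
            return Abs(new, substitute(renamed, x, n))
        return Abs(t.param, substitute(t.body, x, n))
    raise TypeError("not a lambda term")

# (λx.y)[y := x] correctly yields λv0.x rather than λx.x:
print(substitute(Abs("x", Var("y")), "y", Var("x")))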
β-reduction
β-reduction (beta reduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M) N is M[x := N].
For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n × 2) 7 → 7 × 2.
β-reduction can be seen to be the same as the concept of local reducibility in natural deduction, via the Curry–Howard isomorphism.
η-reduction
η-reduction (eta reduction) expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between λx.f x and f whenever x does not appear free in f.
η-reduction can be seen to be the same as the concept of local completeness in natural deduction, via the Curry–Howard isomorphism.
Normal forms and confluence
For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising.
However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other).
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
Encoding datatypes
The basic lambda calculus may be used to model arithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sections.
Arithmetic in lambda calculus
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
3 := λf.λx.f (f (f x))
and so on. Or using the alternative syntax presented above in Notation:
0 := λfx.x
1 := λfx.f x
2 := λfx.f (f x)
3 := λfx.f (f (f x))
A Church numeral is a higher-order function—it takes a single-argument function f, and returns another single-argument function. The Church numeral n is a function that takes a function f as argument and returns the n-th composition of f, i.e. the function f composed with itself n times. This is denoted f^n and is in fact the n-th power of f (considered as an operator); f^0 is defined to be the identity function. Such repeated compositions (of a single function f) obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of 0 impossible.)
One way of thinking about the Church numeral n, which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is
λn.λx.n (PAIR x) NIL
By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved.
We can define a successor function, which takes a Church numeral n and returns n + 1 by adding another application of f, where '(mf)x' means the function 'f' is applied 'm' times on 'x':
SUCC := λn.λf.λx.f (n f x)
Because the m-th composition of f composed with the n-th composition of f gives the (m + n)-th composition of f, addition can be defined as follows:
PLUS := λm.λn.λf.λx.m f (n f x)
PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
PLUS 2 3
and
5
are β-equivalent lambda expressions. Since adding m to a number n can be accomplished by adding 1 m times, an alternative definition is:
PLUS := λm.λn.m SUCC n
Similarly, multiplication can be defined as
MULT := λm.λn.λf.m (n f)
Alternatively
MULT := λm.λn.m (PLUS n) 0
since multiplying m and n is the same as repeating the add-n function m times and then applying it to zero.
Exponentiation has a rather simple rendering in Church numerals, namely
POW := λb.λe.e b
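These encodings can be exercised directly with Python lambdas, since Python's anonymous functions behave like lambda terms for this purpose. The helper to_int, which reads a Church numeral back as a machine integer, is an illustrative convenience and not part of the calculus.

zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult  = lambda m: lambda n: lambda f: m(n(f))
power = lambda b: lambda e: e(b)

def to_int(n):
    # Apply the numeral to "+1" and 0 to read it back as a Python int.
    return n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
assert to_int(plus(two)(three)) == 5
assert to_int(mult(two)(three)) == 6
assert to_int(power(two)(three)) == 8   # 2 to the power 3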
The predecessor function, defined by PRED n = n − 1 for a positive integer n and PRED 0 = 0, is considerably more difficult. The formula
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
can be validated by showing inductively that if T denotes (λg.λh.h (g f)), then applying T to (λu.x) n times yields (λh.h (f^(n−1) x)) for n ≥ 1. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
SUB := λm.λn.n PRED m,
SUB m n yields m − n when m > n and 0 otherwise.
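Continuing the Python sketch above (zero, succ and to_int as before), the same predecessor and subtraction terms can be written and checked as follows.

pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
sub  = lambda m: lambda n: n(pred)(m)       # m − n, truncated at 0

four = succ(succ(succ(succ(zero))))
assert to_int(pred(four)) == 3
assert to_int(sub(four)(succ(zero))) == 3
assert to_int(sub(succ(zero))(four)) == 0   # truncated subtraction ("monus")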
Logic and predicates
By convention, the following two definitions (known as Church Booleans) are used for the Boolean values TRUE and FALSE:
TRUE := λx.λy.x
FALSE := λx.λy.y
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct):
AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b
We are now able to compute some logic functions, for example:
AND TRUE FALSE
≡ (λp.λq.p q p) TRUE FALSE → TRUE FALSE TRUE
≡ (λx.λy.x) FALSE TRUE → FALSE
and we see that AND TRUE FALSE is equivalent to FALSE.
A predicate is a function that returns a Boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, but FALSE if its argument were any other Church numeral:
ISZERO := λn.n (λx.FALSE) TRUE
The following predicate tests whether the first argument is less-than-or-equal-to the second:
LEQ := λm.λn.ISZERO (SUB m n),
and since m = n if m ≤ n and n ≤ m, it is straightforward to build a predicate for numerical equality.
The availability of predicates and the above definition of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) 0
which can be verified by showing inductively that n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) is the add-(n − 1) function for n > 0.
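In the running Python sketch (reusing zero, two, three and sub from the earlier examples), the Booleans and predicates look like this; to_bool is an illustrative helper for reading a Church Boolean back as a Python bool.

true  = lambda x: lambda y: x
false = lambda x: lambda y: y
and_  = lambda p: lambda q: p(q)(p)
or_   = lambda p: lambda q: p(p)(q)
not_  = lambda p: p(false)(true)
if_then_else = lambda p: lambda a: lambda b: p(a)(b)

is_zero = lambda n: n(lambda x: false)(true)
leq     = lambda m: lambda n: is_zero(sub(m)(n))

def to_bool(b):
    # Apply the Church Boolean to the two Python truth values.
    return b(True)(False)

assert to_bool(and_(true)(false)) is False
assert to_bool(is_zero(zero)) is True
assert to_bool(leq(two)(three)) is True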
Pairs
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair (x, y), FIRST returns the first element of the pair, and SECOND returns the second.
PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE
A linked list can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list, where
NIL := λx.TRUE
NULL := λp.p (λx.λy.FALSE)
The predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the need for an explicit NULL test is obviated).
As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0))
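The pair encoding and the pair-based predecessor, again in the running Python sketch (true, false, succ, zero, two, three, four and to_int from the earlier examples):

pair   = lambda x: lambda y: lambda f: f(x)(y)
first  = lambda p: p(true)
second = lambda p: p(false)

phi   = lambda x: pair(second(x))(succ(second(x)))   # (m, n) ↦ (n, n + 1)
pred2 = lambda n: first(n(phi)(pair(zero)(zero)))

assert to_int(first(pair(two)(three))) == 2
assert to_int(pred2(four)) == 3
assert to_int(pred2(zero)) == 0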
Additional programming techniques
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
Named constants
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and applying that abstraction to the intended definition. Thus to use f to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say
(λf.M) N
Authors often introduce syntactic sugar, such as let, to permit writing the above in the more intuitive order
let f = N in M
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of this let is that the name f must not be defined in N, since N is outside the scope of the abstraction binding f; this means a recursive function definition cannot be used as the N with let f = N in M. The letrec construction would allow writing recursive function definitions.
Recursion and fixed points
Recursion is the definition of a function invoking itself. A definition containing itself inside itself, by value, leads to the whole value being of infinite size. Other notations which support recursion natively overcome this by referring to the function definition by name. Lambda calculus cannot express this: all functions are anonymous in lambda calculus, so we can't refer by name to a value which is yet to be defined, inside the lambda term defining that same value. However, a lambda expression can receive itself as its own argument, for example in (λx.x x) E. Here E should be an abstraction, applying its parameter to a value to express recursion.
Consider the factorial function F(n) recursively defined by
F(n) = 1, if n = 0; else n × F(n − 1).
In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it – applying it to an argument – will amount to recursion. Thus to achieve recursion, the intended-as-self-referencing argument (called r here) must always be passed to itself within the function body, at a call point:
G := λr.λn.(1, if n = 0; else n × (r r (n − 1)))
with r r x = F x = G r x to hold, so r = G and
F := G G = (λx.x x) G
The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced and called there.
This solves it but requires re-writing each recursive call as self-application. We would like to have a generic solution, without a need for any re-writes:
G := λr.λn.(1, if n = 0; else n × (r (n − 1)))
with r x = F x = G r x to hold, so r = G r =: FIX G and
F := FIX G
where FIX g := (r where r = g r) = g (FIX g)
so that FIX G = G (FIX G) = λn.(1, if n = 0; else n × ((FIX G) (n − 1)))
Given a lambda term with first argument representing the recursive call (e.g. G here), the fixed-point combinator FIX will return a self-replicating lambda expression representing the recursive function (here, F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression (FIX G) is re-created inside itself, at call-point, achieving self-reference.
In fact, there are many possible definitions for this FIX operator, the simplest of them being:
Y := λg.(λx.g (x x)) (λx.g (x x))
In the lambda calculus, Y g is a fixed-point of g, as it expands to:
Y g = (λx.g (x x)) (λx.g (x x)) = g ((λx.g (x x)) (λx.g (x x))) = g (Y g)
Now, to perform our recursive call to the factorial function, we would simply call (Y G) n, where n is the number we are calculating the factorial of. Given n = 4, for example, this gives:
(Y G) 4 = G (Y G) 4 = 4 × ((Y G) 3) = 4 × (3 × ((Y G) 2)) = 4 × (3 × (2 × ((Y G) 1))) = 4 × (3 × (2 × (1 × ((Y G) 0)))) = 4 × (3 × (2 × (1 × 1))) = 24
Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using Y, every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively.
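The same construction can be tried in Python, with one caveat: Python evaluates arguments eagerly (call by value), so the plain Y combinator above would loop forever. The sketch below therefore uses the η-expanded call-by-value variant, usually called the Z combinator, and for brevity uses ordinary Python integers rather than Church numerals.

# Z = λf.(λx.f (λv.x x v)) (λx.f (λv.x x v)) — a call-by-value fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# G takes "the recursive call" as its first argument:
G = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

factorial = Z(G)
assert factorial(4) == 24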
Standard terms
Certain terms have commonly accepted names:
I := λx.x is the identity function. The sets {S, K} and {B, C, K, W} form complete combinator calculus systems that can express any lambda term – see
the next section. ω := λx.x x is the standard self-application term; Ω := ω ω = (λx.x x) (λx.x x) is the smallest term that has no normal form. Y I is another such term.
Y is standard and defined above, and can also be defined as Y := B U (C B U) (with U := λx.x x), so that Y g = g (Y g). TRUE and FALSE defined above are commonly abbreviated as T and F.
Abstraction elimination
If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(x,N) which is equivalent to λx.N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(x,N) removes all occurrences of x from N, while still allowing argument values to be substituted into the positions where N contains an x. The conversion function T can be defined by:
T(x, x) := I
T(x, N) := K N if x is not free in N.
T(x, M N) := S T(x, M) T(x, N)
In either case, a term of the form T(x,N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of (λx.N) P would do. I returns that argument. K throws the argument away, just like (λx.N) would do if x has no free occurrence in N. S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of x in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus.
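A sketch of the conversion T over the Var/Abs/App terms introduced earlier. Here the combinators are represented simply as free variables named S, K and I (an illustrative choice), and nested abstractions are handled by eliminating the innermost bound variable first.

S, K, I = Var("S"), Var("K"), Var("I")

def eliminate(x, n):
    # T(x, n): a term equivalent to λx.n containing no abstraction over x.
    if isinstance(n, Var) and n.name == x:
        return I                                   # T(x, x) = I
    if x not in free_vars(n):
        return App(K, n)                           # T(x, N) = K N  when x is not free in N
    if isinstance(n, App):
        return App(App(S, eliminate(x, n.func)),   # T(x, M N) = S T(x, M) T(x, N)
                   eliminate(x, n.arg))
    if isinstance(n, Abs):                         # eliminate the inner abstraction first
        return eliminate(x, eliminate(n.param, n.body))
    raise ValueError("cannot eliminate abstraction")

# λx.λy.x becomes a term built only from S, K, I and application:
print(eliminate("x", Abs("y", Var("x"))))          # S (K K) I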
Typed lambda calculus
A typed lambda calculus is a typed formalism that uses the lambda-symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation.
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g., the simply typed lambda calculus is the language of a Cartesian closed category (CCC).
Reduction strategies
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include:
Normal order The leftmost outermost redex is reduced first. That is, whenever possible, arguments are substituted into the body of an abstraction before the arguments are reduced. If a term has a beta-normal form, normal order reduction will always reach that normal form.
Applicative order The leftmost innermost redex is reduced first. As a consequence, a function's arguments are always reduced before they are substituted into the function. Unlike normal order reduction, applicative order reduction may fail to find the beta-normal form of an expression, even if such a normal form exists. For example, the term (λx.y) ((λx.x x) (λx.x x)) is reduced to itself by applicative order, while normal order reduces it to its beta-normal form y.
Full β-reductions Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off".
Weak reduction strategies do not reduce under lambda abstractions:
Call by value Like applicative order, but no reductions are performed inside abstractions. This is similar to the evaluation order of strict languages like C: the arguments to a function are evaluated before calling the function, and function bodies are not even partially evaluated until the arguments are substituted in.
Call by name Like normal order, but no reductions are performed inside abstractions. For example, λx.(λy.y) x is in normal form according to this strategy, although it contains the redex (λy.y) x.
Strategies with sharing reduce computations that are "the same" in parallel:
Optimal reduction As normal order, but computations that have the same label are reduced simultaneously.
Call by need As call by name (hence weak), but function applications that would duplicate terms instead name the argument. The argument may be evaluated "when needed," at which point the name binding is updated with the reduced value. This can save time compared to normal order evaluation.
Computability
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other. More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F: N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x) = y if and only if f n_x =β n_y, where n_x and n_y are the Church numerals corresponding to x and y, respectively, and =β means equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence.
Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
Complexity
The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.
To be precise, one must somehow find the location of all of the occurrences of the bound variable x in the expression M, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of x in M is O(n) in the length n of M. Director strings were an early approach that traded this time cost for a quadratic space usage. More generally this has led to the study of systems that use explicit substitution.
In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps. This was a long-standing open problem, due to size explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.
An unreasonable model does not necessarily mean inefficient. Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction. It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost. In addition the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms.
Lambda calculus and programming languages
As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation", sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
Anonymous functions
For example, in Python the "square" function can be expressed as a lambda expression as follows:
(lambda x: x**2)
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, x – just a single argument in this case, and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.
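Because the lambda expression evaluates to an ordinary first-class function object, it can be bound to a name or passed directly to a higher-order function; for example:

square = lambda x: x**2
print(square(7))                         # 49
print(list(map(square, [1, 2, 3])))      # [1, 4, 9]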
For example, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are not a sufficient condition for functions to be first-class datatypes, because a function is a first-class datatype if and only if new instances of the function can be created at runtime. Such runtime creation of functions is supported in Smalltalk, JavaScript, Wolfram Language, and more recently in Scala, Eiffel (as agents), C# (as delegates) and C++11, among others.
Parallelism and concurrency
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.
Semantics
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D, of functions on itself. However, no nontrivial such D can exist, by cardinality constraints because the set of all functions from D to D has greater cardinality than D, unless D is a singleton set.
In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus.
This work also formed the basis for the denotational semantics of programming languages.
Variations and extensions
These extensions are in the lambda cube:
Typed lambda calculus – Lambda calculus with typed variables (and functions)
System F – A typed lambda calculus with type-variables
Calculus of constructions – A typed lambda calculus with types as first-class values
These formal systems are extensions of lambda calculus that are not in the lambda cube:
Binary lambda calculus – A version of lambda calculus with binary input/output (I/O), a binary encoding of terms, and a designated universal machine.
Lambda-mu calculus – An extension of the lambda calculus for treating classical logic
These formal systems are variations of lambda calculus:
Kappa calculus – A first-order analogue of lambda calculus
These formal systems are related to lambda calculus:
Combinatory logic – A notation for mathematical logic without variables
SKI combinator calculus – A computational system based on the S, K and I combinators, equivalent to lambda calculus, but reducible without variable substitutions
| Mathematics | Computability theory | null |
18232 | https://en.wikipedia.org/wiki/Little%20penguin | Little penguin | The little penguin (Eudyptula minor) is the smallest species of penguin. It originates from New Zealand. It is commonly known as the fairy penguin, little blue penguin, or blue penguin, owing to its slate-blue plumage, and is also known by its Māori name, kororā. It is a marine neritic species that dives for food throughout the day and returns to burrows on the shore at dusk, making it the only nocturnal penguin species on land.
The Australian little penguin (Eudyptula novaehollandiae), from Australia and the Otago region of New Zealand, is considered a separate species.
Eudyptula minor feathers are dense in melanosomes, which increase water resistance and give them their unique blue colour.
Taxonomy
The little penguin was first described by German naturalist Johann Reinhold Forster in 1781. Several subspecies are known, but a precise classification of these is still a matter of dispute. The holotypes of the subspecies E. m. variabilis and Eudyptula minor chathamensis are in the collection of the Museum of New Zealand Te Papa Tongarewa. The white-flippered penguin (E. m. albosignata or E. m. minor morpha albosignata) is currently considered by most taxonomists to be a colour morph or subspecies of Eudyptula minor. In 2008, Shirihai treated the little penguin and white-flippered penguin as allospecies. However, as of 2012, the IUCN and BirdLife International consider the white-flippered penguin to be a subspecies or morph of the little penguin.
Little penguins from New Zealand and Australia were once considered to be the same species, called Eudyptula minor. Analysis of mtDNA in 2002 revealed two clades in Eudyptula: one containing little penguins of New Zealand's North Island, Cook Strait and Chatham Island, as well as the white-flippered penguin, and a second containing little penguins of Australia and the Otago region of New Zealand. Preliminary analysis of braying calls and cluster analysis of morphometrics partially supported these results. A 2016 study described the Australian little penguin as a new and separate species, Eudyptula novaehollandiae. E. minor is endemic to New Zealand, while E. novaehollandiae is found in Australia and Otago. A 2019 study supported the recognition of E. minor and E. novaehollandiae as separate species.
The IUCN assessment for Eudyptula minor uses Eudyptula minor and Eudyptula novaehollandiae interchangeably throughout the report to specify location, but considers them as two genetically distinct clades within the same species.
Description
Like those of all penguins, the wings of Eudyptula species have developed into flippers used for swimming.
Eudyptula species typically grow to between tall and on average weigh 1.5 kg (3.3 lb). The head and upper parts are blue in colour, with slate-grey ear coverts fading to white underneath, from the chin to the belly. Their flippers are blue in colour. The dark grey-black beak is 3–4 cm long, the irises pale silvery- or bluish-grey or hazel, and the feet pink above with black soles and webbing. An immature individual will have a shorter bill and lighter upperparts.
Like most seabirds, the Eudyptula species have a long lifespan. The average for the species is 6.5 years, but flipper ringing experiments show that in very exceptional cases they may live up to 25 years in captivity.
Eudyptula minor does not have the distinct bright blue feathers that distinguish Eudyptula novaehollandiae. In addition, the vocalisation patterns of the New Zealand lineage located on Tiritiri Matangi Island vary from the Australian lineage located in Oamaru. Females are known to prefer the local call of the New Zealand lineage.
There are also behavioural differences that help differentiate these penguins. Those of the Australian lineage will swim together in a large group after dusk and walk along the shore to reach their nesting sites. This may be an effective predator avoidance strategy by traveling in a large group simultaneously. This has not been seen by those of the New Zealand lineage. Eudyptula minor only recently encountered terrestrial vertebrate predators, while Eudyptula novaehollandiae would have had to deal with carnivorous marsupials.
Distribution and habitat
Eudyptula minor breeds along most of the coastline of New Zealand, including the Chatham Islands. The chicks are raised in nests constructed in burrows along the shoreline, both dug by Eudyptula minor and by other animals. Eudyptula minor does not occur in Otago, which is located on the east coast of New Zealand's South Island. The Australian species Eudyptula novaehollandiae occurs in Otago. E. novaehollandiae was originally endemic to Australia. Using ancient-DNA analysis and radiocarbon dating using historical, pre-human, as well as archaeological Eudyptula remains, the arrival of the Australian species in New Zealand was determined to have occurred roughly between AD 1500 and 1900. When the E. minor population declined in New Zealand, it left a genetic opening for E. novaehollandiae. The decrease of E. minor was most likely due to anthropogenic effects, such as being hunted by humans as well as introduced predators, including dogs brought from overseas.
It has been determined using multilocus coalescent analyses that the population of Eudyptula novaehollandiae in Otago arrived less than 750 years ago, more recently than previously estimated.
Outside of the Otago region, all New Zealand colonies are expected to belong to the species Eudyptula minor. Many of these colonies are smaller and more patchily distributed than the larger Eudyptula novaehollandiae colonies that exist in Australia and Otago. Extensive research exists on the Phillip Island and Oamaru colonies, as they are sites of large colonies which attract large groups of tourists.
Population size and trends of colonies in New Zealand remain poorly documented. What is known is that New Zealand colonies commonly consist of smaller, fragmented groups, some with fewer than 10 breeding pairs, in comparison to Australia's larger colonies; this is largely attributed to New Zealand's fragmented coastline separating the larger colonies. This is commonly seen in Kaikōura, where 6–7 smaller colonies have been found along 1.7% of the coastline.
Behaviour
Feeding
Little penguins are central place foragers, meaning they will travel distances to forage but always return to the same nest or colony. They are also a species in which both parents are required to raise the chicks, and the pair alternate foraging trips, one foraging while the other guards and incubates the nest during the guard and post-guard stages. These stints can last anywhere from 1 to 10 days during incubation. Despite nesting on the shore, little penguins forage at sea and feed on a diet ranging from small schooling fish to cephalopods, krill, and microzooplankton. As the species is widely distributed across a range of habitats in New Zealand and Australia, variation in diet and foraging choice has also arisen. Important little penguin prey items include arrow squid, slender sprat, Graham's gudgeon, red cod, and ahuru.
Little penguins feed by hunting small clupeoid fish, cephalopods, and crustaceans, for which they travel and dive quite extensively, including to the sea floor. Foraging efficiency has been found to be significantly influenced by age. Foraging success appears highest in middle-aged penguins, suggesting stabilising selection, as feeding is a learnt behaviour but also requires good physical condition.
For the Phillip Island and other southern Australian colonies, Australian anchovies are the primary food source. Although the diet of the Phillip Island colony has diversified to include cephalopods and krill during the post-guard stage of the breeding cycle, when greater amounts of energy are required for chick development and egg production, resident penguins still rely predominantly on anchovies when more energy is required.
The nature of their diet also impacts foraging methods, which may vary by colony depending on what food is available. When prey is larger and individuals are only catching 1–2 items at a time, they are more likely to hunt alone to reduce competition, whereas smaller and more mobile prey, or schooling prey species, promote group hunting to enable efficient encirclement. The Oamaru colony predominantly feeds on smaller schooling species such as sprat and gudgeon, while penguins from the Stewart/Codfish Island colonies more often hunt alone. The latter is likely linked to a predominantly cephalopod diet (58% of prey items, at less than 10 g each).
Prey availability
Rising ocean temperatures have seen a trend towards earlier onset of breeding in Eudyptula minor which does not always align with the availability of their prey. This is because higher sea surface temperatures are associated with early onset of nesting, but also associated with lower nutrients and oxygen availability. During the breeding season, parents are restricted to a short foraging area close to their nest and are therefore vulnerable to small regional changes. La Niña events increasing the sea surface temperature along the New Zealand coastline cause prey such as schooling fish and krill to either become more regionally scarce or migrate to new habitats. Graham's Gudgeon once dominated the diet of the Oamaru colony of Eudyptula minor, however in 1995 the availability of the species dropped from 20% in December to 0% in January the following year. Penguins were able to successfully adapt their diet to consist of slender sprat and pigfish.
Double brooding
If penguins produce a second clutch of eggs in a season once the first chicks have fledged, this is known as double brooding. Thus far this behaviour has only been observed in Eudyptula novaehollandiae, the lineage of little blue penguins which inhabits the Australian and Otago regions. There is no evidence to suggest this is an established behaviour within Eudyptula minor; however, double broods are occasionally noticed among the colonies on the Kaikōura coastline. It is as yet unclear whether this means double brooding is a genetically mediated behaviour. A study carried out on the Oamaru penguin colony found double broods to increase breeding success by up to 75% per season. Double brooding is more likely to occur in individuals who lay their first clutch prior to mid-September. While there is some interannual variability, the most common period for little penguins to lay their first clutch is in spring; mid-September is considered early and gives individuals time left in the season to lay a second clutch of eggs after the first have fledged.
The onset of double brooding can be strongly influenced by sea surface temperature, age and food availability. Warmer sea surface temperatures in summer and autumn correlate with earlier laying of the first clutch of eggs, increasing the chances of double brooding. In contrast, in New Zealand it was observed that during the La Niña phase of the El Niño–Southern Oscillation, when colder water was brought to the surface, there was a delay in the onset of breeding for Eudyptula novaehollandiae, resulting in a lower incidence of double brooding in the Otago colonies. Age is also believed to be a factor affecting double brooding, because the pairs successfully able to double brood were most commonly strategic in reclaiming successful nests and pair-bonds. Little penguins show a high nest fidelity, and the ability to reclaim successful nests early suggests it is likely that successful double brooding is a behaviour that improves with age. Another influencing factor is the availability of food: for larger colonies such as Phillip Island, competition for food can increase significantly during the breeding season, particularly if there is variability in the amount of prey available. If this competition results in aggression between adults, it can also influence the ability to successfully raise chicks and to breed successfully in the next season.
Foraging behaviour
During the breeding season, Eudyptula minor are central place foragers. They travel within their home range to find food, but will return to their nest to feed both themselves and their chicks. Their foraging range is limited by how long chicks can fast and by the high energetic costs of constant travelling for individuals. This behaviour results in a small foraging range, and therefore a higher probability of competition when prey availability is more scarce. In order to survive, Eudyptula minor adapts to these constraints by increasing the plasticity and variability of its foraging behaviour, such as spatial, age, or diet-based segregation, particularly during the breeding season when energy demands for both parents and chicks are at their highest. During chick rearing, parents will make on average one-day-long foraging trips within a 30 km radius of their nest.
Research conducted on the Phillip Island colony found that the spatial segregation of foraging behaviour was primarily determined by age rather than biological sex. Middle-aged individuals foraged at greater distances from their nests and were able to dive greater distances, whereas older penguins were found to forage closer to the shore than middle-aged adults.
When foraging in groups for small schooling prey, they were also observed to all be of a similar age cohort. If the groups are segregated by age, this is likely because they are at the same foraging ability and occupy the same approximate range.
Threats
Introduced predators
Introduced mammalian predators present the greatest terrestrial risk to little penguins and include cats, dogs, rats, and particularly ferrets and stoats. Significant dog attacks have been recorded at the colony at Little Kaiteriteri Beach, and a suspected stoat or ferret attack at Doctor's Point near Dunedin, New Zealand, claimed the lives of 29 little blue penguins in November 2014.
Oil spills
Little penguin populations were significantly affected by a major oil spill with the grounding of the Rena off New Zealand in 2011, which killed 2,000 seabirds (including little penguins) directly, and killed an estimated 20,000 in total based on wider ecosystem impacts. Oil spills are the most common cause of little penguins being admitted to the rehabilitation facilities at Phillip Island Nature Park (PINP). These recurring oil spills endanger not just individual little penguins but the wider penguin population, and can further reduce numbers, potentially leading to extinction.
Fire
Increased frequency of drought and extreme temperatures in southern Australia has led to an increased fire risk. Being flightless birds that nest on land, little blue penguins are especially vulnerable to fire. Behavioural traits such as a reluctance to abandon nests and emerging mostly during daylight hours are thought to be some of the main reasons for increased vulnerability in the future. The threats fire poses include nest and habitat disruption, and it is deadly to eggs and individuals; despite this, Eudyptula minor appears to show no fear towards fire when directly exposed. When observed, they have been found to remain around or under vegetation until severely burnt or injured. Some have even been observed preening their feathers close to open flames.
Fires can also significantly alter the composition of vegetation in Eudyptula minor habitats. A large fire in Marion Bay, South Australia, in 1994 saw the loss of two key plant species: introduced marram grass (Ammophila arenaria) and coastal wattle (Acacia sophorae). Following the fire, these were replaced by invasive plants, and A. sophorae grew back in dense thickets. The habitat was no longer suitable for Eudyptula minor, and the colony relocated.
Conservation
Eudyptula species are classified as "at risk - declining" under New Zealand's Wildlife Act 1953.
Overall, little penguin populations in New Zealand have been decreasing. Some colonies have become extinct, and others continue to be at risk. Some new colonies have been established in urban areas. The species is not considered endangered in New Zealand, with the exception of the white-flippered subspecies found only on Banks Peninsula and nearby Motunau Island. Since the 1960s, the mainland population has declined by 60–70%, though a small increase has occurred on Motunau Island. A colony exists in Wellington Harbour on Matiu / Somes Island.
Protestors have opposed the development of a marina at Kennedy Point, Waiheke Island in New Zealand for the risk it poses to little penguins and their habitat. Protesters claimed that they exhausted all legal means to oppose the project and have had to resort to occupation and non-violent resistance. Several arrests were made for trespassing. Construction of the marina was upheld by the Environment Court in 2012 and completed in 2022. Since the completion of the construction, little penguin burrows have still been found in the area, as well as one dead little penguin on the boat ramp there.
The West Coast Penguin Trust and DOC have worked in collaboration to maintain data on penguin mortality; the West Coast South Island colonies are highlighted as some of the Eudyptula minor colonies currently facing decline. The data show that the highest level of penguin mortality is caused by roadkill, likely because many of the colonies are close to the coastal highway. To mitigate this issue, a penguin-proof fence was erected in 2019 across 3.3 km of highway where roadkill was most prevalent; no roadkill deaths have been recorded since its implementation.
The risk of fire damage to habitats on Phillip Island has been partially mitigated through the planting of fire-resistant indigenous vegetation in and around the nesting sites. Thus far this planting has occurred primarily in the less than 10% of the colony most visible from tourist look-out points.
In 1997, Eudyptula minor was listed in New South Wales as an endangered species under the state's endangered species legislation of 1995. Since then, conservation efforts such as public education, nest monitoring and the designation of 'critical habitat' have been implemented. Despite these efforts, this mainland colony has faced additional challenges, from threats from wild dogs and foxes to a lack of available local prey. The species is now listed as at-risk declining under the same act.
Zoological exhibits
Zoological exhibits featuring purpose-built enclosures for Eudyptula species can be seen in Australia at the Adelaide Zoo, Melbourne Zoo, the National Zoo & Aquarium in Canberra, Perth Zoo, Caversham Wildlife Park (Perth), Ballarat Wildlife Park, Sea Life Sydney Aquarium, and the Taronga Zoo in Sydney. Enclosures include nesting boxes or similar structures for the animals to retire into, a reconstruction of a pool and in some cases, a transparent aquarium wall to allow patrons to view the animals underwater while they swim.
A Eudyptula penguin exhibit exists at Sea World, on the Gold Coast, Queensland, Australia. In early March 2007, 25 of the 37 penguins died from an unknown toxin following a change of gravel in their enclosure. It is still not known what caused the deaths of the penguins, and it was decided not to return the 12 surviving penguins to the same enclosure where they became ill. A new enclosure for the little penguin colony was opened at Sea World in 2008.
In New Zealand, Eudyptula penguin exhibits exist at the Auckland Zoo, the Wellington Zoo, the International Antarctic Centre and the National Aquarium of New Zealand. Since 2017, the National Aquarium of New Zealand, has featured a monthly "Penguin of the Month" board, declaring two of their resident animals the "Naughty" and "Nice" penguin for that month. Photos of the board have gone viral and gained the aquarium a large worldwide social media following.
In the United States, Eudyptula penguins can be seen at the Louisville Zoo, the Bronx Zoo, and the Cincinnati Zoo.
| Biology and health sciences | Sphenisciformes | Animals |
18285 | https://en.wikipedia.org/wiki/Lagrange%20point | Lagrange point | In celestial mechanics, the Lagrange points (; also Lagrangian points or libration points) are points of equilibrium for small-mass objects under the gravitational influence of two massive orbiting bodies. Mathematically, this involves the solution of the restricted three-body problem.
Normally, the two massive bodies exert an unbalanced gravitational force at a point, altering the orbit of whatever is at that point. At the Lagrange points, the gravitational forces of the two large bodies and the centrifugal force balance each other. This can make Lagrange points an excellent location for satellites, as orbit corrections, and hence fuel requirements, needed to maintain the desired orbit are kept at a minimum.
For any combination of two orbital bodies, there are five Lagrange points, L1 to L5, all in the orbital plane of the two large bodies. There are five Lagrange points for the Sun–Earth system, and five different Lagrange points for the Earth–Moon system. L1, L2, and L3 are on the line through the centers of the two large bodies, while L4 and L5 each act as the third vertex of an equilateral triangle formed with the centers of the two large bodies.
When the mass ratio of the two bodies is large enough, the L4 and L5 points are stable points, meaning that objects can orbit them and that they have a tendency to pull objects into them. Several planets have trojan asteroids near their L4 and L5 points with respect to the Sun; Jupiter has more than one million of these trojans.
Some Lagrange points are being used for space exploration. Two important Lagrange points in the Sun-Earth system are L1, between the Sun and Earth, and L2, on the same line at the opposite side of the Earth; both are well outside the Moon's orbit. Currently, an artificial satellite called the Deep Space Climate Observatory (DSCOVR) is located at L1 to study solar wind coming toward Earth from the Sun and to monitor Earth's climate, by taking images and sending them back. The James Webb Space Telescope, a powerful infrared space observatory, is located at L2. This allows the satellite's large sunshield to protect the telescope from the light and heat of the Sun, Earth and Moon. The L1 and L2 Lagrange points are located about 1.5 million kilometers (0.01 au) from Earth.
The European Space Agency's earlier Gaia telescope, and its newly launched Euclid, also occupy orbits around L2. Gaia keeps a tighter Lissajous orbit around L2, while Euclid follows a halo orbit similar to JWST. Each of the space observatories benefit from being far enough from Earth's shadow to utilize solar panels for power, from not needing much power or propellant for station-keeping, from not being subjected to the Earth's magnetospheric effects, and from having direct line-of-sight to Earth for data transfer.
History
The three collinear Lagrange points (L1, L2, L3) were discovered by the Swiss mathematician Leonhard Euler around 1750, a decade before the Italian-born Joseph-Louis Lagrange discovered the remaining two.
In 1772, Lagrange published an "Essay on the three-body problem". In the first chapter he considered the general three-body problem. From that, in the second chapter, he demonstrated two special constant-pattern solutions, the collinear and the equilateral, for any three masses, with circular orbits.
Lagrange points
The five Lagrange points are labelled and defined as follows:
L1 point
The L1 point lies on the line defined between the two large masses M1 and M2. It is the point where the gravitational attraction of M2 and that of M1 combine to produce an equilibrium. An object that orbits the Sun more closely than Earth would typically have a shorter orbital period than Earth, but that ignores the effect of Earth's gravitational pull. If the object is directly between Earth and the Sun, then Earth's gravity counteracts some of the Sun's pull on the object, increasing the object's orbital period. The closer to Earth the object is, the greater this effect is. At the L1 point, the object's orbital period becomes exactly equal to Earth's orbital period. L1 is about 1.5 million kilometers, or 0.01 au, from Earth in the direction of the Sun.
L2 point
The L2 point lies on the line through the two large masses beyond the smaller of the two. Here, the combined gravitational forces of the two large masses balance the centrifugal force on a body at L2. On the opposite side of Earth from the Sun, the orbital period of an object would normally be greater than Earth's. The extra pull of Earth's gravity decreases the object's orbital period, and at the L2 point, that orbital period becomes equal to Earth's. Like L1, L2 is about 1.5 million kilometers or 0.01 au from Earth (away from the sun). An example of a spacecraft designed to operate near the Earth–Sun L2 is the James Webb Space Telescope. Earlier examples include the Wilkinson Microwave Anisotropy Probe and its successor, Planck.
L3 point
The L3 point lies on the line defined by the two large masses, beyond the larger of the two. Within the Sun–Earth system, the L3 point exists on the opposite side of the Sun, a little outside Earth's orbit and slightly farther from the center of the Sun than Earth is. This placement occurs because the Sun is also affected by Earth's gravity and so orbits around the two bodies' barycenter, which is well inside the body of the Sun. An object at Earth's distance from the Sun would have an orbital period of one year if only the Sun's gravity is considered. But an object on the opposite side of the Sun from Earth and directly in line with both "feels" Earth's gravity adding slightly to the Sun's and therefore must orbit a little farther from the barycenter of Earth and Sun in order to have the same 1-year period. It is at the L3 point that the combined pull of Earth and Sun causes the object to orbit with the same period as Earth, in effect orbiting an Earth+Sun mass with the Earth-Sun barycenter at one focus of its orbit.
L4 and L5 points
The L4 and L5 points lie at the third vertices of the two equilateral triangles in the plane of orbit whose common base is the line between the centers of the two masses, such that the point lies 60° ahead of (L4) or behind (L5) the smaller mass with regard to its orbit around the larger mass.
Stability
The triangular points (L4 and L5) are stable equilibria, provided that the ratio of M1/M2 is greater than 24.96. This is the case for the Sun–Earth system, the Sun–Jupiter system, and, by a smaller margin, the Earth–Moon system. When a body at these points is perturbed, it moves away from the point, but the factor opposite of that which is increased or decreased by the perturbation (either gravity or angular momentum-induced speed) will also increase or decrease, bending the object's path into a stable, kidney bean-shaped orbit around the point (as seen in the corotating frame of reference).
The points L1, L2, and L3 are positions of unstable equilibrium. Any object orbiting at L1, L2, or L3 will tend to fall out of orbit; it is therefore rare to find natural objects there, and spacecraft inhabiting these areas must employ a small but critical amount of station keeping in order to maintain their position.
Natural objects at Lagrange points
Due to the natural stability of L4 and L5, it is common for natural objects to be found orbiting in those Lagrange points of planetary systems. Objects that inhabit those points are generically referred to as 'trojans' or 'trojan asteroids'. The name derives from the names that were given to asteroids discovered orbiting at the Sun–Jupiter L4 and L5 points, which were taken from mythological characters appearing in Homer's Iliad, an epic poem set during the Trojan War. Asteroids at the L4 point, ahead of Jupiter, are named after Greek characters in the Iliad and referred to as the "Greek camp". Those at the L5 point are named after Trojan characters and referred to as the "Trojan camp". Both camps are considered to be types of trojan bodies.
As the Sun and Jupiter are the two most massive objects in the Solar System, there are more known Sun–Jupiter trojans than for any other pair of bodies. However, smaller numbers of objects are known at the Lagrange points of other orbital systems:
The Sun–Earth L4 and L5 points contain interplanetary dust and at least two asteroids, 2010 TK7 and 2020 XL5.
The Earth–Moon L4 and L5 points contain concentrations of interplanetary dust, known as Kordylewski clouds. Stability at these specific points is greatly complicated by solar gravitational influence.
The Sun–Neptune L4 and L5 points contain several dozen known objects, the Neptune trojans.
Mars has four accepted Mars trojans: 5261 Eureka, 1999 UJ7, 1998 VF31, and 2007 NS2.
Saturn's moon Tethys has two smaller moons of Saturn in its L4 and L5 points, Telesto and Calypso. Another Saturn moon, Dione, also has two Lagrange co-orbitals, Helene at its L4 point and Polydeuces at L5. The moons wander azimuthally about the Lagrange points, with Polydeuces describing the largest deviations, moving up to 32° away from the Saturn–Dione L5 point.
One version of the giant impact hypothesis postulates that an object named Theia formed at the Sun–Earth L4 or L5 point and crashed into Earth after its orbit destabilized, forming the Moon.
In binary stars, the Roche lobe has its apex located at L1; if one of the stars expands past its Roche lobe, then it will lose matter to its companion star, known as Roche lobe overflow.
Objects which are on horseshoe orbits are sometimes erroneously described as trojans, but do not occupy Lagrange points. Known objects on horseshoe orbits include 3753 Cruithne with Earth, and Saturn's moons Epimetheus and Janus.
Physical and mathematical details
Lagrange points are the constant-pattern solutions of the restricted three-body problem. For example, given two massive bodies in orbits around their common barycenter, there are five positions in space where a third body, of comparatively negligible mass, could be placed so as to maintain its position relative to the two massive bodies. This occurs because the combined gravitational forces of the two massive bodies provide the exact centripetal force required to maintain the circular motion that matches their orbital motion.
Alternatively, when seen in a rotating reference frame that matches the angular velocity of the two co-orbiting bodies, at the Lagrange points the combined gravitational fields of two massive bodies balance the centrifugal pseudo-force, allowing the smaller third body to remain stationary (in this frame) with respect to the first two.
The location of L1 is the solution to the following equation, gravitation providing the centripetal force:
M1/(R − r)² = M2/r² + ((M1 R/(M1 + M2)) − r)(M1 + M2)/R³
where r is the distance of the L1 point from the smaller object, R is the distance between the two main objects, and M1 and M2 are the masses of the large and small object, respectively. The quantity in parentheses on the right is the distance of L1 from the center of mass. The solution for r is the only real root of the following quintic function
where
μ = M2/(M1 + M2) is the mass fraction of M2 and
x = r/R is the normalised distance. If the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1) then L1 and L2 are at approximately equal distances r from the smaller object, equal to the radius of the Hill sphere, given by:
r ≈ R (M2/(3 M1))^(1/3)
We may also write this as:
M2/r³ ≈ 3 M1/R³
Since the tidal effect of a body is proportional to its mass divided by the distance cubed, this means that the tidal effect of the smaller body at the L1 or at the L2 point is about three times that of the larger body. We may also write:
ρ2 (d2/r)³ ≈ 3 ρ1 (d1/R)³
where ρ and ρ are the average densities of the two bodies and d and d are their diameters. The ratio of diameter to distance gives the angle subtended by the body, showing that viewed from these two Lagrange points, the apparent sizes of the two bodies will be similar, especially if the density of the smaller one is about thrice that of the larger, as in the case of the earth and the sun.
This distance can be described as being such that the orbital period, corresponding to a circular orbit with this distance as radius around M2 in the absence of M1, is that of M2 around M1, divided by ≈ 1.73:
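As a quick numerical illustration (not part of the original text), the normalised quintic above can be solved for the Earth–Moon pair; the masses and separation below are assumed round values, and the result can be compared with the Hill-sphere estimate and with the 15.1% figure quoted later for the Moon's L1 point.

```python
# Hypothetical check: solve the L1 quintic x^5 - (3-mu)x^4 + (3-2mu)x^3 - mu x^2 + 2mu x - mu = 0
# for the Earth-Moon system and compare with the Hill-sphere approximation.
import numpy as np

M1 = 5.972e24    # kg, Earth (assumed value)
M2 = 7.342e22    # kg, Moon (assumed value)
R = 384_400e3    # m, mean Earth-Moon separation (assumed value)

mu = M2 / (M1 + M2)
coeffs = [1.0, -(3.0 - mu), 3.0 - 2.0 * mu, -mu, 2.0 * mu, -mu]
roots = np.roots(coeffs)

# keep the physically meaningful root: real, between the Moon and the Earth (0 < x < 1)
x = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0)
hill = (M2 / (3.0 * M1)) ** (1.0 / 3.0)   # normalised Hill-sphere radius

print(f"L1 lies {x * R / 1e3:,.0f} km from the Moon ({x:.1%} of the separation)")
print(f"Hill-sphere estimate: {hill:.1%} of the separation")
```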
L2
The location of L2 is the solution to the following equation, gravitation providing the centripetal force:
$$\frac{M_1}{(R+r)^2} + \frac{M_2}{r^2} = \left(\frac{M_1}{M_1+M_2}R + r\right)\frac{M_1+M_2}{R^3}$$
with parameters defined as for the L1 case. The corresponding quintic equation is
$$x^5 + (3-\mu)x^4 + (3-2\mu)x^3 - \mu x^2 - 2\mu x - \mu = 0$$
Again, if the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1), then L2 is at approximately the radius of the Hill sphere, given by:
$$r \approx R \sqrt[3]{\frac{M_2}{3 M_1}}$$
The same remarks about tidal influence and apparent size apply as for the L1 point. For example, the angular radius of the Sun as viewed from L2 is about 0.264°, whereas that of the Earth is about 0.242°. Looking toward the Sun from L2 one therefore sees an annular eclipse. It is necessary for a spacecraft, like Gaia, to follow a Lissajous orbit or a halo orbit around L2 in order for its solar panels to get full sun.
L3
The location of L3 is the solution to the following equation, gravitation providing the centripetal force:
$$\frac{M_1}{(R-r)^2} + \frac{M_2}{(2R-r)^2} = \left(\frac{M_2}{M_1+M_2}R + R - r\right)\frac{M_1+M_2}{R^3}$$
with parameters M1, M2, and R defined as for the L1 and L2 cases, and r being defined such that the distance of L3 from the centre of the larger object is R − r. If the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1), then:
$$r \approx R\,\frac{7 M_2}{12 M_1}$$
Thus the distance from L3 to the larger object is less than the separation of the two objects (although the distance between L3 and the barycentre is greater than the distance between the smaller object and the barycentre).
L4 and L5
The reason these points are in balance is that at L4 and L5 the distances to the two masses are equal. Accordingly, the gravitational forces from the two massive bodies are in the same ratio as the masses of the two bodies, and so the resultant force acts through the barycenter of the system. Additionally, the geometry of the triangle ensures that the resultant acceleration is related to the distance from the barycenter in the same ratio as it is for the two massive bodies. The barycenter being both the center of mass and center of rotation of the three-body system, this resultant force is exactly that required to keep the smaller body at the Lagrange point in orbital equilibrium with the other two larger bodies of the system (indeed, the third body needs to have negligible mass). The general triangular configuration was discovered by Lagrange working on the three-body problem.
Radial acceleration
The radial acceleration a of an object in orbit at a point along the line passing through both bodies is given by:
$$a = -\frac{G M_1}{r^2}\operatorname{sgn}(r) + \frac{G M_2}{(R-r)^2}\operatorname{sgn}(R-r) + \frac{G\big((M_1+M_2)\,r - M_2 R\big)}{R^3}$$
where r is the distance from the large body M1, R is the distance between the two main objects, and sgn(x) is the sign function of x. The terms in this function represent respectively: the force from M1, the force from M2, and the centripetal force. The points L3, L1, L2 occur where the acceleration is zero. On a plot of a against r, positive acceleration is acceleration towards the right and negative acceleration is towards the left; that is why acceleration has opposite signs on opposite sides of the gravity wells.
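As a rough sanity check (again using assumed Earth–Moon values rather than anything from the article), the expression above can be scanned along the axis; the three sign changes it finds fall near 84.9%, 116.8% and −99.3% of the separation, consistent with the Moon values quoted in the Solar System values section below.

```python
# Hypothetical sketch: locate the axial Lagrange points as zero crossings of the
# radial acceleration a(r) for the Earth-Moon system (assumed values).
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
M1 = 5.972e24    # kg, Earth
M2 = 7.342e22    # kg, Moon
R = 384_400e3    # m

def radial_acceleration(r):
    """Net acceleration along the Earth-Moon line at distance r from Earth."""
    pull_1 = -G * M1 * np.sign(r) / r**2            # gravity of M1
    pull_2 = G * M2 * np.sign(R - r) / (R - r)**2   # gravity of M2
    centrifugal = G * ((M1 + M2) * r - M2 * R) / R**3
    return pull_1 + pull_2 + centrifugal

r = np.linspace(-1.5 * R, 1.5 * R, 600_001)
r = r[(np.abs(r) > 1e3) & (np.abs(r - R) > 1e3)]   # avoid the two singularities
a = radial_acceleration(r)

# genuine zero crossings are smooth, so require a small |a| on both sides
smooth = (np.abs(a[:-1]) < 1.0) & (np.abs(a[1:]) < 1.0)
flips = np.sign(a[:-1]) != np.sign(a[1:])
for r0 in r[:-1][smooth & flips]:
    print(f"zero crossing near {r0 / R:+.3f} R")   # expect about -0.993, +0.849, +1.168
```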
Stability
Although the L1, L2, and L3 points are nominally unstable, there are quasi-stable periodic orbits called halo orbits around these points in a three-body system. A full n-body dynamical system such as the Solar System does not contain these periodic orbits, but does contain quasi-periodic (i.e. bounded but not precisely repeating) orbits following Lissajous-curve trajectories. These quasi-periodic Lissajous orbits are what most Lagrangian-point space missions have used until now. Although they are not perfectly stable, a modest effort of station-keeping keeps a spacecraft in a desired Lissajous orbit for a long time.
For Sun–Earth L1 missions, it is preferable for the spacecraft to be in a large-amplitude Lissajous orbit around L1 than to stay at L1 itself, because an object sitting exactly on the Sun–Earth line suffers increased solar interference on Earth–spacecraft communications. Similarly, a large-amplitude Lissajous orbit around L2 keeps a probe out of Earth's shadow and therefore ensures continuous illumination of its solar panels.
The L4 and L5 points are stable provided that the mass of the primary body (e.g. the Earth) is at least 25 times the mass of the secondary body (e.g. the Moon); the Earth is over 81 times the mass of the Moon (the Moon is 1.23% of the mass of the Earth). Although the L4 and L5 points are found at the top of a "hill", as in the effective potential contour plot above, they are nonetheless stable. The reason for the stability is a second-order effect: as a body moves away from the exact Lagrange position, Coriolis acceleration (which depends on the velocity of an orbiting object and cannot be modeled as a contour map) curves the trajectory into a path around (rather than away from) the point. Because the source of stability is the Coriolis force, the resulting orbits can be stable, but generally are not planar; they are "three-dimensional", lying on a warped surface intersecting the ecliptic plane. The kidney-shaped orbits typically shown nested around L4 and L5 are the projections of the orbits on a plane (e.g. the ecliptic) and not the full 3-D orbits.
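For reference, the mass-ratio threshold quoted above has an exact closed form from the linearised stability analysis of the circular restricted problem (a standard result, stated here for convenience rather than taken from the text):
$$\frac{M_1}{M_2} \;\ge\; \frac{25 + \sqrt{621}}{2} \;\approx\; 24.96$$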
Solar System values
This table lists sample values of L1, L2, and L3 within the Solar System. Calculations assume the two bodies orbit in a perfect circle with separation equal to the semimajor axis and no other bodies are nearby. Distances are measured from the larger body's center of mass (but see barycenter, especially in the case of the Moon and Jupiter) with L3 showing a negative direction. The percentage columns show the distance from the orbit compared to the semimajor axis. E.g. for the Moon, L1 is located at 84.9% of the Earth–Moon distance from Earth's center, or 15.1% "in front of" (Earthwards from) the Moon; L2 is located at 116.8% of the Earth–Moon distance from Earth's center, or 16.8% beyond the Moon; and L3 is located at 99.3% of the Earth–Moon distance from Earth's center on the opposite side, or 0.7084% inside (Earthward of) the Moon's 'negative' position.
Spaceflight applications
Sun–Earth
Sun–Earth L1 is suited for making observations of the Sun–Earth system. Objects here are never shadowed by Earth or the Moon and, if observing Earth, always view the sunlit hemisphere. The first mission of this type was the 1978 International Sun Earth Explorer 3 (ISEE-3) mission, used as an interplanetary early-warning storm monitor for solar disturbances. Since June 2015, DSCOVR has orbited the L1 point. The point is also useful for space-based solar telescopes, because it provides an uninterrupted view of the Sun and any space weather (including the solar wind and coronal mass ejections) reaches L1 up to an hour before Earth. Solar and heliospheric missions currently located around L1 include the Solar and Heliospheric Observatory, Wind, the Aditya-L1 mission and the Advanced Composition Explorer. Planned missions include the Interstellar Mapping and Acceleration Probe (IMAP) and the NEO Surveyor.
Sun–Earth L2 is a good spot for space-based observatories. Because an object around L2 will maintain the same relative position with respect to the Sun and Earth, shielding and calibration are much simpler. It is, however, slightly beyond the reach of Earth's umbra, so solar radiation is not completely blocked at L2. Spacecraft generally orbit around L2, avoiding partial eclipses of the Sun to maintain a constant temperature. From locations near L2, the Sun, Earth and Moon are relatively close together in the sky; this means that a large sunshade with the telescope on the dark side can allow the telescope to cool passively to around 50 K – this is especially helpful for infrared astronomy and observations of the cosmic microwave background. The James Webb Space Telescope was positioned in a halo orbit about L2 on January 24, 2022.
Sun–Earth L1 and L2 are saddle points and exponentially unstable with a time constant of roughly 23 days. Satellites at these points will wander off in a few months unless course corrections are made.
Sun–Earth L3 was a popular place to put a "Counter-Earth" in pulp science fiction and comic books, despite the fact that the existence of a planetary body in this location had been understood as an impossibility once orbital mechanics and the perturbations of planets upon each other's orbits came to be understood, long before the Space Age; the influence of an Earth-sized body on other planets would not have gone undetected, nor would the fact that the foci of Earth's orbital ellipse would not have been in their expected places, due to the mass of the counter-Earth. The Sun–Earth L3, however, is a weak saddle point and exponentially unstable with a time constant of roughly 150 years. Moreover, it could not contain a natural object, large or small, for very long because the gravitational forces of the other planets are stronger than that of Earth (for example, Venus comes within 0.3 AU of this point every 20 months).
A spacecraft orbiting near Sun–Earth L5 would be able to closely monitor the evolution of active sunspot regions before they rotate into a geoeffective position, so that a seven-day early warning could be issued by the NOAA Space Weather Prediction Center. Moreover, a satellite near Sun–Earth L5 would provide very important observations not only for Earth forecasts, but also for deep-space support (Mars predictions and crewed missions to near-Earth asteroids). In 2010, spacecraft transfer trajectories to Sun–Earth L5 were studied and several designs were considered.
Earth–Moon
Earth–Moon L1 allows comparatively easy access to lunar and Earth orbits with minimal change in velocity, an advantage when positioning a habitable space station intended to help transport cargo and personnel to the Moon and back. The SMART-1 mission passed through the L1 Lagrange point on 11 November 2004 and passed into the area dominated by the Moon's gravitational influence.
Earth–Moon L2 has been used for a communications satellite covering the Moon's far side, for example, Queqiao, launched in 2018, and would be "an ideal location" for a propellant depot as part of the proposed depot-based space transportation architecture.
Earth–Moon L4 and L5 are the locations of the Kordylewski dust clouds. The L5 Society's name comes from the L4 and L5 Lagrangian points in the Earth–Moon system, proposed as locations for their huge rotating space habitats. Both positions are also proposed for communication satellites covering the Moon, in the same way that communication satellites in geosynchronous orbit cover the Earth.
Sun–Venus
Scientists at the B612 Foundation were planning to use Venus's L3 point to position their planned Sentinel telescope, which aimed to look back towards Earth's orbit and compile a catalogue of near-Earth asteroids.
Sun–Mars
In 2017, the idea of positioning a magnetic dipole shield at the Sun–Mars L1 point for use as an artificial magnetosphere for Mars was discussed at a NASA conference. The idea is that this would protect the planet's atmosphere from the Sun's radiation and the solar wind.
| Physical sciences | Celestial mechanics | null |
18290 | https://en.wikipedia.org/wiki/Light-emitting%20diode | Light-emitting diode | A light-emitting diode (LED) is a semiconductor device that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons. The color of the light (corresponding to the energy of the photons) is determined by the energy required for electrons to cross the band gap of the semiconductor. White light is obtained by using multiple semiconductors or a layer of light-emitting phosphor on the semiconductor device.
Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared (IR) light. Infrared LEDs are used in remote-control circuits, such as those used with a wide variety of consumer electronics. The first visible-light LEDs were of low intensity and limited to red.
Early LEDs were often used as indicator lamps, replacing small incandescent bulbs, and in seven-segment displays. Later developments produced LEDs available in visible, ultraviolet (UV), and infrared wavelengths with high, low, or intermediate light output, for instance, white LEDs suitable for room and outdoor lighting. LEDs have also given rise to new types of displays and sensors, while their high switching rates are useful in advanced communications technology. LEDs have been used in diverse applications such as aviation lighting, fairy lights, strip lights, automotive headlamps, advertising, general lighting, traffic signals, camera flashes, lighted wallpaper, horticultural grow lights, and medical devices.
LEDs have many advantages over incandescent light sources, including lower power consumption, a longer lifetime, improved physical robustness, smaller size, and faster switching. In exchange for these generally favorable attributes, disadvantages of LEDs include electrical limitations to low voltage and generally to DC (not AC) power, the inability to provide steady illumination from a pulsing DC or an AC electrical supply source, and lower maximum operating and storage temperatures.
LEDs are transducers of electricity into light. They operate in reverse of photodiodes, which convert light into electricity.
History
The first LED was created by Soviet inventor Oleg Losev in 1927 using a silicon carbide diode; electroluminescence itself had already been known for about 20 years.
Commercially viable LEDs only became available after Texas Instruments engineers patented efficient near-infrared emission from a diode based on GaAs in 1962.
Until 1968, commercial LEDs were extremely costly and saw little practical use. Monsanto and Hewlett-Packard then led the development of LEDs to the point where, in the 1970s, a unit cost less than five cents.
Physics of light production and emission
In a light-emitting diode, the recombination of electrons and electron holes in a semiconductor produces light (be it infrared, visible or UV), a process called "electroluminescence". The wavelength of the light depends on the energy band gap of the semiconductors used. Since these materials have a high index of refraction, design features of the devices such as special optical coatings and die shape are required to efficiently emit light.
Unlike a laser, the light emitted from an LED is neither spectrally coherent nor even highly monochromatic. Its spectrum is sufficiently narrow that it appears to the human eye as a pure (saturated) color. Also unlike most lasers, its radiation is not spatially coherent, so it cannot approach the very high intensity characteristic of lasers.
Single-color LEDs
By selection of different semiconductor materials, single-color LEDs can be made that emit light in a narrow band of wavelengths from near-infrared through the visible spectrum and into the ultraviolet range. The required operating voltages of LEDs increase as the emitted wavelengths become shorter (higher energy, red to blue), because of their increasing semiconductor band gap.
Blue LEDs have an active region consisting of one or more InGaN quantum wells sandwiched between thicker layers of GaN, called cladding layers. By varying the relative In/Ga fraction in the InGaN quantum wells, the light emission can in theory be varied from violet to amber.
Aluminium gallium nitride (AlGaN) of varying Al/Ga fraction can be used to manufacture the cladding and quantum well layers for ultraviolet LEDs, but these devices have not yet reached the level of efficiency and technological maturity of InGaN/GaN blue/green devices. If unalloyed GaN is used in this case to form the active quantum well layers, the device emits near-ultraviolet light with a peak wavelength centred around 365 nm. Green LEDs manufactured from the InGaN/GaN system are far more efficient and brighter than green LEDs produced with non-nitride material systems, but practical devices still exhibit efficiency too low for high-brightness applications.
With AlGaN and AlGaInN, even shorter wavelengths are achievable. Near-UV emitters at wavelengths around 360–395 nm are already cheap and often encountered, for example, as black light lamp replacements for inspection of anti-counterfeiting UV watermarks in documents and bank notes, and for UV curing. Substantially more expensive, shorter-wavelength diodes are commercially available for wavelengths down to 240 nm. As the photosensitivity of microorganisms approximately matches the absorption spectrum of DNA, with a peak at about 260 nm, UV LEDs emitting at 250–270 nm are expected to be used in prospective disinfection and sterilization devices. Recent research has shown that commercially available UVA LEDs (365 nm) are already effective disinfection and sterilization devices.
UV-C wavelengths were obtained in laboratories using aluminium nitride (210 nm), boron nitride (215 nm) and diamond (235 nm).
White LEDs
There are two primary ways of producing white light-emitting diodes. One is to use individual LEDs that emit three primary colors—red, green and blue—and then mix all the colors to form white light. The other is to use a phosphor material to convert monochromatic light from a blue or UV LED to broad-spectrum white light, similar to a fluorescent lamp. The yellow phosphor consists of cerium-doped YAG crystals suspended in the package or coated on the LED. This YAG phosphor causes white LEDs to appear yellow when off, and the space between the crystals allows some blue light to pass through in LEDs with partial phosphor conversion. Alternatively, white LEDs may use other phosphors like manganese(IV)-doped potassium fluorosilicate (PFS) or other engineered phosphors. PFS assists in red light generation, and is used in conjunction with conventional Ce:YAG phosphor.
In LEDs with PFS phosphor, some blue light passes through the phosphors, the Ce:YAG phosphor converts blue light to green and red (yellow) light, and the PFS phosphor converts blue light to red light. The color, emission spectrum or color temperature of white phosphor converted and other phosphor converted LEDs can be controlled by changing the concentration of several phosphors that form a phosphor blend used in an LED package.
The 'whiteness' of the light produced is engineered to suit the human eye. Because of metamerism, it is possible to have quite different spectra that appear white. The appearance of objects illuminated by that light may vary as the spectrum varies. This is the issue of color rendition, quite separate from color temperature. An orange or cyan object could appear with the wrong color and much darker if the LED or phosphor does not emit the wavelength it reflects. The best color-rendition LEDs use a mix of phosphors, resulting in lower efficiency but better color rendering.
The first white light-emitting diodes (LEDs) were offered for sale in the autumn of 1996. Nichia made some of the first white LEDs which were based on blue LEDs with Ce:YAG phosphor. Ce:YAG is often grown using the Czochralski method.
RGB systems
Mixing red, green, and blue sources to produce white light needs electronic circuits to control the blending of the colors. Since LEDs have slightly different emission patterns, the color balance may change depending on the angle of view, even if the RGB sources are in a single package, so RGB diodes are seldom used to produce white lighting. Nonetheless, this method has many applications because of the flexibility of mixing different colors, and in principle, this mechanism also has higher quantum efficiency in producing white light.
There are several types of multicolor white LEDs: di-, tri-, and tetrachromatic white LEDs. Several key factors that play among these different methods include color stability, color rendering capability, and luminous efficacy. Often, higher efficiency means lower color rendering, presenting a trade-off between the luminous efficacy and color rendering. For example, the dichromatic white LEDs have the best luminous efficacy (120 lm/W), but the lowest color rendering capability. Although tetrachromatic white LEDs have excellent color rendering capability, they often have poor luminous efficacy. Trichromatic white LEDs are in between, having both good luminous efficacy (>70 lm/W) and fair color rendering capability.
One of the challenges is the development of more efficient green LEDs. The theoretical maximum for green LEDs is 683 lumens per watt but as of 2010 few green LEDs exceed even 100 lumens per watt. The blue and red LEDs approach their theoretical limits.
Multicolor LEDs offer a means to form light of different colors. Most perceivable colors can be formed by mixing different amounts of three primary colors. This allows precise dynamic color control. Their emission power decays exponentially with rising temperature, resulting in a substantial change in color stability. Such problems inhibit industrial use. Multicolor LEDs without phosphors cannot provide good color rendering because each LED is a narrowband source. LEDs without phosphor, while a poorer solution for general lighting, are the best solution for displays, either backlight of LCD, or direct LED based pixels.
Dimming a multicolor LED source to match the characteristics of incandescent lamps is difficult because manufacturing variations, age, and temperature change the actual color value output. To emulate the appearance of dimming incandescent lamps may require a feedback system with color sensor to actively monitor and control the color.
Phosphor-based LEDs
This method involves coating LEDs of one color (mostly blue LEDs made of InGaN) with phosphors of different colors to form white light; the resultant LEDs are called phosphor-based or phosphor-converted white LEDs (pcLEDs). A fraction of the blue light undergoes the Stokes shift, which transforms it from shorter wavelengths to longer. Depending on the original LED's color, various color phosphors are used. Using several phosphor layers of distinct colors broadens the emitted spectrum, effectively raising the color rendering index (CRI).
Phosphor-based LEDs have efficiency losses due to heat loss from the Stokes shift and also other phosphor-related issues. Their luminous efficacies compared to normal LEDs depend on the spectral distribution of the resultant light output and the original wavelength of the LED itself. For example, the luminous efficacy of a typical YAG yellow phosphor based white LED ranges from 3 to 5 times the luminous efficacy of the original blue LED because of the human eye's greater sensitivity to yellow than to blue (as modeled in the luminosity function).
Due to the simplicity of manufacturing, the phosphor method is still the most popular method for making high-intensity white LEDs. The design and production of a light source or light fixture using a monochrome emitter with phosphor conversion is simpler and cheaper than a complex RGB system, and the majority of high-intensity white LEDs presently on the market are manufactured using phosphor light conversion.
Among the challenges being faced to improve the efficiency of LED-based white light sources is the development of more efficient phosphors. As of 2010, the most efficient yellow phosphor is still the YAG phosphor, with less than 10% Stokes shift loss. Losses attributable to internal optical losses due to re-absorption in the LED chip and in the LED packaging itself account typically for another 10% to 30% of efficiency loss. Currently, in the area of phosphor LED development, much effort is being spent on optimizing these devices to higher light output and higher operation temperatures. For instance, the efficiency can be raised by adapting better package design or by using a more suitable type of phosphor. Conformal coating process is frequently used to address the issue of varying phosphor thickness.
Some phosphor-based white LEDs encapsulate InGaN blue LEDs inside phosphor-coated epoxy. Alternatively, the LED might be paired with a remote phosphor, a preformed polycarbonate piece coated with the phosphor material. Remote phosphors provide more diffuse light, which is desirable for many applications. Remote phosphor designs are also more tolerant of variations in the LED emissions spectrum. A common yellow phosphor material is cerium-doped yttrium aluminium garnet (Ce3+:YAG).
White LEDs can also be made by coating near-ultraviolet (NUV) LEDs with a mixture of high-efficiency europium-based phosphors that emit red and blue, plus copper and aluminium-doped zinc sulfide (ZnS:Cu, Al) that emits green. This is a method analogous to the way fluorescent lamps work. This method is less efficient than blue LEDs with YAG:Ce phosphor, as the Stokes shift is larger, so more energy is converted to heat, but yields light with better spectral characteristics, which render color better. Due to the higher radiative output of the ultraviolet LEDs than of the blue ones, both methods offer comparable brightness. A concern is that UV light may leak from a malfunctioning light source and cause harm to human eyes or skin.
A new style of wafers composed of gallium-nitride-on-silicon (GaN-on-Si) is being used to produce white LEDs using 200-mm silicon wafers. This avoids the typical costly sapphire substrate in relatively small 100- or 150-mm wafer sizes. The sapphire apparatus must be coupled with a mirror-like collector to reflect light that would otherwise be wasted. It was predicted that by 2020, 40% of all GaN LEDs would be made with GaN-on-Si. Manufacturing large sapphire material is difficult, while large silicon material is cheaper and more abundant. For LED companies, shifting from sapphire to silicon should require minimal investment.
Mixed white LEDs
There are RGBW LEDs that combine RGB units with a phosphor white LED on the market. Doing so retains the extremely tunable color of RGB LED, but allows color rendering and efficiency to be optimized when a color close to white is selected.
Some phosphor white LED units are "tunable white", blending two extremes of color temperatures (commonly 2700K and 6500K) to produce intermediate values. This feature allows users to change the lighting to suit the current use of a multifunction room. As illustrated by a straight line on the chromaticity diagram, simple two-white blends will have a pink bias, becoming most severe in the middle. A small amount of green light, provided by another LED, could correct the problem. Some products are RGBWW, i.e. RGBW with tunable white.
A final class of white LED with mixed light is dim-to-warm. These are ordinary 2700K white LED bulbs with a small red LED that turns on when the bulb is dimmed. Doing so makes the color warmer, emulating an incandescent light bulb.
Other white LEDs
Another method used to produce experimental white light LEDs used no phosphors at all and was based on homoepitaxially grown zinc selenide (ZnSe) on a ZnSe substrate that simultaneously emitted blue light from its active region and yellow light from the substrate.
Organic light-emitting diodes (OLEDs)
In an organic light-emitting diode (OLED), the electroluminescent material composing the emissive layer of the diode is an organic compound. The organic material is electrically conductive due to the delocalization of pi electrons caused by conjugation over all or part of the molecule, and the material therefore functions as an organic semiconductor. The organic materials can be small organic molecules in a crystalline phase, or polymers.
The potential advantages of OLEDs include thin, low-cost displays with a low driving voltage, wide viewing angle, and high contrast and color gamut. Polymer LEDs have the added benefit of printable and flexible displays. OLEDs have been used to make visual displays for portable electronic devices such as cellphones, digital cameras, lighting and televisions.
Types
LEDs are made in different packages for different applications. A single or a few LED junctions may be packed in one miniature device for use as an indicator or pilot lamp. An LED array may include controlling circuits within the same package, which may range from a simple resistor, blinking or color changing control, or an addressable controller for RGB devices. Higher-powered white-emitting devices will be mounted on heat sinks and will be used for illumination. Alphanumeric displays in dot matrix or bar formats are widely available. Special packages permit connection of LEDs to optical fibers for high-speed data communication links.
Miniature
These are mostly single-die LEDs used as indicators, and they come in various sizes from 1.8 mm to 10 mm, in through-hole and surface-mount packages. Typical current ratings range from around 1 mA to above 20 mA. LEDs can be soldered to a flexible PCB strip to form LED tape, popularly used for decoration.
Common package shapes include round, with a domed or flat top, rectangular with a flat top (as used in bar-graph displays), and triangular or square with a flat top. The encapsulation may also be clear or tinted to improve contrast and viewing angle. Infrared devices may have a black tint to block visible light while passing infrared radiation, such as the Osram SFH 4546.
5 V and 12 V LEDs are ordinary miniature LEDs that have a series resistor for direct connection to a 5 V or 12 V supply.
High-power
High-power LEDs (HP-LEDs) or high-output LEDs (HO-LEDs) can be driven at currents from hundreds of mA to more than an ampere, compared with the tens of mA for other LEDs. Some can emit over a thousand lumens. LED power densities up to 300 W/cm2 have been achieved. Since overheating is destructive, the HP-LEDs must be mounted on a heat sink to allow for heat dissipation. If the heat from an HP-LED is not removed, the device fails in seconds. One HP-LED can often replace an incandescent bulb in a flashlight, or be set in an array to form a powerful LED lamp.
Some HP-LEDs in this category are the Nichia 19 series, Lumileds Rebel LED, Osram Opto Semiconductors Golden Dragon, and Cree X-lamp. As of September 2009, some HP-LEDs manufactured by Cree exceed 105 lm/W.
Examples of Haitz's law—which predicts an exponential rise in light output and efficacy of LEDs over time—are the CREE XP-G series LED, which achieved 105 lm/W in 2009, and the Nichia 19 series with a typical efficacy of 140 lm/W, released in 2010.
AC-driven
LEDs developed by Seoul Semiconductor can operate on AC power without a DC converter. For each half-cycle, part of the LED emits light and part is dark, and this is reversed during the next half-cycle. The efficiency of this type of HP-LED is typically 40 lm/W. A large number of LED elements in series may be able to operate directly from line voltage. In 2009, Seoul Semiconductor released a high DC voltage LED, named 'Acrich MJT', capable of being driven from AC power with a simple controlling circuit. The low power dissipation of these LEDs affords them more flexibility than the original AC LED design.
Strip
Application-specific
Flashing Flashing LEDs are used as attention seeking indicators without requiring external electronics. Flashing LEDs resemble standard LEDs but they contain an integrated voltage regulator and a multivibrator circuit that causes the LED to flash with a typical period of one second. In diffused lens LEDs, this circuit is visible as a small black dot. Most flashing LEDs emit light of one color, but more sophisticated devices can flash between multiple colors and even fade through a color sequence using RGB color mixing. Flashing SMD LEDs in the 0805 and other size formats have been available since early 2019.
Flickering Flickering LEDs contain simple electronic circuits integrated into the LED package (available since at least 2011) that produce a random intensity pattern reminiscent of a flickering candle. Reverse engineering in 2024 suggested that some flickering LEDs with automatic sleep and wake modes might use an integrated 8-bit microcontroller for such functionality.
Bi-color Bi-color LEDs contain two different LED emitters in one case. There are two types of these. One type consists of two dies connected to the same two leads antiparallel to each other. Current flow in one direction emits one color, and current in the opposite direction emits the other color. The other type consists of two dies with separate leads for both dies and another lead for common anode or cathode so that they can be controlled independently. The most common bi-color combination is red/traditional green. Others include amber/traditional green, red/pure green, red/blue, and blue/pure green.
RGB tri-color Tri-color LEDs contain three different LED emitters in one case. Each emitter is connected to a separate lead so they can be controlled independently. A four-lead arrangement is typical with one common lead (anode or cathode) and an additional lead for each color. Others have only two leads (positive and negative) and have a built-in electronic controller. RGB LEDs consist of one red, one green, and one blue LED. By independently adjusting each of the three, RGB LEDs are capable of producing a wide color gamut. Unlike dedicated-color LEDs, these do not produce pure wavelengths. Modules may not be optimized for smooth color mixing.
Decorative-multicolor Decorative-multicolor LEDs incorporate several emitters of different colors supplied by only two lead-out wires. Colors are switched internally by varying the supply voltage.
Alphanumeric Alphanumeric LEDs are available in seven-segment, starburst, and dot-matrix format. Seven-segment displays handle all numbers and a limited set of letters. Starburst displays can display all letters. Dot-matrix displays typically use 5×7 pixels per character. Seven-segment LED displays were in widespread use in the 1970s and 1980s, but rising use of liquid crystal displays, with their lower power needs and greater display flexibility, has reduced the popularity of numeric and alphanumeric LED displays.
Digital RGB Digital RGB addressable LEDs contain their own "smart" control electronics. In addition to power and ground, these provide connections for data-in, data-out, clock and sometimes a strobe signal. These are connected in a daisy chain, which allows individual LEDs in a long LED strip light to be easily controlled by a microcontroller; a minimal byte-ordering sketch follows this list. Data sent to the first LED of the chain can control the brightness and color of each LED independently of the others. They are used where a combination of maximum control and minimum visible electronics are needed, such as strings for Christmas and LED matrices. Some even have refresh rates in the kHz range, allowing for basic video applications. These devices are known by their part number (WS2812 being common) or a brand name such as NeoPixel.
Filament An LED filament consists of multiple LED chips connected in series on a common longitudinal substrate that forms a thin rod reminiscent of a traditional incandescent filament. These are being used as a low-cost decorative alternative for traditional light bulbs that are being phased out in many countries. The filaments use a rather high voltage, allowing them to work efficiently with mains voltages. Often a simple rectifier and capacitive current limiting are employed to create a low-cost replacement for a traditional light bulb without the complexity of the low-voltage, high-current converter that single-die LEDs need. Usually, they are packaged in a bulb similar to the lamps they were designed to replace, and filled with inert gas at slightly lower than ambient pressure to remove heat efficiently and prevent corrosion.
Chip-on-board arrays Surface-mounted LEDs are frequently produced in chip on board (COB) arrays, allowing better heat dissipation than with a single LED of comparable luminous output. The LEDs can be arranged around a cylinder, and are called "corn cob lights" because of the rows of yellow LEDs.
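The following minimal sketch expands on the Digital RGB entry above. It only shows how the per-LED colour frames are ordered for a WS2812-style chain; the assumption that WS2812B parts expect green-red-blue (GRB) byte order, most significant bit first, reflects common usage rather than anything stated in the text, and the actual wire protocol additionally encodes each bit as a fixed-width pulse at roughly 800 kHz, which a microcontroller peripheral or library normally handles.

```python
# Illustrative sketch (assumptions noted above): pack one 24-bit frame per LED for
# a WS2812-style daisy chain; the first tuple controls the first LED in the chain.
def pack_ws2812(colors):
    """colors: iterable of (r, g, b) tuples with 0-255 components."""
    frame = bytearray()
    for r, g, b in colors:
        frame += bytes((g, r, b))   # assumed GRB byte order for WS2812B parts
    return bytes(frame)

# three LEDs: full red, dim white, off
data = pack_ws2812([(255, 0, 0), (32, 32, 32), (0, 0, 0)])
print(data.hex())   # 00ff00202020000000
```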
Considerations for use
Efficiency: LEDs emit more lumens per watt than incandescent light bulbs. The efficiency of LED lighting fixtures is not affected by shape and size, unlike fluorescent light bulbs or tubes.
Size: LEDs can be very small (smaller than 2 mm²) and are easily attached to printed circuit boards.
Power sources
The current in an LED or other diodes rises exponentially with the applied voltage (see Shockley diode equation), so a small change in voltage can cause a large change in current. Current through the LED must be regulated by an external circuit such as a constant current source to prevent damage. Since most common power supplies are (nearly) constant-voltage sources, LED fixtures must include a power converter, or at least a current-limiting resistor. In some applications, the internal resistance of small batteries is sufficient to keep current within the LED rating.
LEDs are sensitive to voltage. They must be supplied with a voltage above their threshold voltage and a current below their rating. Current and lifetime change greatly with a small change in applied voltage. They thus require a current-regulated supply (usually just a series resistor for indicator LEDs).
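As a concrete illustration of the series-resistor approach described above (the supply voltage, forward voltage and target current below are assumed example values, not figures from the text), the resistor simply drops the difference between the supply and the LED's forward voltage at the chosen current:

```python
# Minimal sketch with assumed example values: size a series resistor for an indicator LED.
def series_resistor(v_supply, v_forward, i_target):
    """Resistance (ohms) that drops the excess voltage at the target current."""
    if v_supply <= v_forward:
        raise ValueError("supply voltage must exceed the LED forward voltage")
    return (v_supply - v_forward) / i_target

# e.g. a red indicator LED (~2.0 V drop) run at 10 mA from a 5 V rail
r_ohms = series_resistor(5.0, 2.0, 0.010)    # 300 ohms
p_watts = (5.0 - 2.0) * 0.010                # 30 mW dissipated in the resistor
print(f"~{r_ohms:.0f} ohm resistor, dissipating {p_watts * 1000:.0f} mW")
```

In practice the nearest standard resistor value would be chosen, and because the forward voltage itself varies with current and temperature, a true constant-current source is preferred for anything beyond simple indicators, as noted above.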
Efficiency droop: The efficiency of LEDs decreases as the electric current increases. Heating also increases with higher currents, which compromises LED lifetime. These effects put practical limits on the current through an LED in high power applications.
Electrical polarity
Unlike a traditional incandescent lamp, an LED will light only when voltage is applied in the forward direction of the diode. No current flows and no light is emitted if voltage is applied in the reverse direction. If the reverse voltage exceeds the breakdown voltage, which is typically about five volts, a large current flows and the LED will be damaged. If the reverse current is sufficiently limited to avoid damage, the reverse-conducting LED is a useful noise diode.
The photon energy, and therefore the emission color, is set by the band gap of the junction, and the radiative recombination that produces this light occurs only when the diode is forward-biased and carriers are injected across the junction. Under reverse breakdown the diode conducts by avalanche or Zener mechanisms rather than by radiative recombination, so it does not produce its characteristic visible emission. Dual-LED packages exist that contain a different-color LED wired in each direction, but an individual LED element is not designed to conduct, or to emit its visible light, when connected backwards.
Appearance
Color: LEDs can emit light of an intended color without using any color filters as traditional lighting methods need. This is more efficient and can lower initial costs.
Cool light: In contrast to most light sources, LEDs radiate very little heat in the form of IR that can cause damage to sensitive objects or fabrics. Wasted energy is dispersed as heat through the base of the LED.
Color rendition: Most cool-white LEDs have spectra that differ significantly from a black-body radiator like the sun or an incandescent light. The spike at 460 nm and dip at 500 nm can make the color of objects appear different under cool-white LED illumination than under sunlight or incandescent sources, due to metamerism; red surfaces are rendered particularly poorly by typical phosphor-based cool-white LEDs. The same is true of green surfaces. The quality of color rendition of an LED is measured by the Color Rendering Index (CRI).
Dimming: LEDs can be dimmed either by pulse-width modulation or lowering the forward current. This pulse-width modulation is why LED lights, particularly headlights on cars, when viewed on camera or by some people, seem to flash or flicker. This is a type of stroboscopic effect.
Light properties
Switch on time: LEDs light up extremely quickly. A typical red indicator LED achieves full brightness in under a microsecond. LEDs used in communications devices can have even faster response times.
Focus: The solid package of the LED can be designed to focus its light. Incandescent and fluorescent sources often require an external reflector to collect light and direct it in a usable manner. For larger LED packages total internal reflection (TIR) lenses are often used to the same effect. When large quantities of light are needed, many light sources such as LED chips are usually deployed, which are difficult to focus or collimate on the same target.
Area light source: Single LEDs do not approximate a point source of light giving a spherical light distribution, but rather a lambertian distribution. So, LEDs are difficult to apply to uses needing a spherical light field. Different fields of light can be manipulated by the application of different optics or "lenses". LEDs cannot provide divergence below a few degrees.
Reliability
Shock resistance: LEDs, being solid-state components, are difficult to damage with external shock, unlike fluorescent and incandescent bulbs, which are fragile.
Thermal runaway: Parallel strings of LEDs will not share current evenly due to the manufacturing tolerances in their forward voltage. Running two or more strings from a single current source may result in LED failure as the devices warm up. If forward voltage binning is not possible, a circuit is required to ensure even distribution of current between parallel strands.
Slow failure: LEDs mainly fail by dimming over time, rather than the abrupt failure of incandescent bulbs.
Lifetime: LEDs can have a relatively long useful life. One report estimates 35,000 to 50,000 hours of useful life, though time to complete failure may be shorter or longer. Fluorescent tubes typically are rated at about 10,000 to 25,000 hours, depending partly on the conditions of use, and incandescent light bulbs at 1,000 to 2,000 hours. Several DOE demonstrations have shown that reduced maintenance costs from this extended lifetime, rather than energy savings, is the primary factor in determining the payback period for an LED product.
Cycling: LEDs are ideal for uses subject to frequent on-off cycling, unlike incandescent and fluorescent lamps that fail faster when cycled often, or high-intensity discharge lamps (HID lamps) that require a long time to warm up to full output and to cool down before they can be lighted again if they are being restarted.
Temperature dependence: LED performance largely depends on the ambient temperature of the operating environment – or thermal management properties. Overdriving an LED in high ambient temperatures may result in overheating the LED package, eventually leading to device failure. An adequate heat sink is needed to maintain long life. This is especially important in automotive, medical, and military uses where devices must operate over a wide range of temperatures, and require low failure rates.
Manufacturing
LED manufacturing involves multiple steps, including epitaxy, chip processing, chip separation, and packaging.
In a typical LED manufacturing process, encapsulation is performed after probing, dicing, die transfer from wafer to package, and wire bonding or flip-chip mounting, perhaps using indium tin oxide (ITO), a transparent electrical conductor. In this case, the bond wire(s) are attached to the ITO film that has been deposited on the LEDs.
Flip chip circuit on board (COB) is a technique that can be used to manufacture LEDs.
Colors and materials
Conventional LEDs are made from a variety of inorganic semiconductor materials. The following table shows the available colors with wavelength range, voltage drop and material:
Applications
LED uses fall into five major categories:
Visual signals where light goes more or less directly from the source to the human eye, to convey a message or meaning
Illumination where light is reflected from objects to give visual response of these objects
Measuring and interacting with processes involving no human vision
Narrow band light sensors where LEDs operate in a reverse-bias mode and respond to incident light, instead of emitting light
Indoor cultivation, including cannabis.
The application of LEDs in horticulture has revolutionized plant cultivation by providing energy-efficient, customizable lighting solutions that optimize plant growth and development. LEDs offer precise control over light spectra, intensity, and photoperiods, enabling growers to tailor lighting conditions to the specific needs of different plant species and growth stages. This technology enhances photosynthesis, improves crop yields, and reduces energy costs compared to traditional lighting systems. Additionally, LEDs generate less heat, allowing closer placement to plants without risking thermal damage, and contribute to sustainable farming practices by lowering carbon footprints and extending growing seasons in controlled environments. Light spectrum affects growth, metabolite profile, and resistance against fungal phytopathogens of Solanum lycopersicum seedlings. LEDs can also be used in micropropagation.
Indicators and signs
The low energy consumption, low maintenance and small size of LEDs has led to uses as status indicators and displays on a variety of equipment and installations. Large-area LED displays are used as stadium displays, dynamic decorative displays, and dynamic message signs on freeways. Thin, lightweight message displays are used at airports and railway stations, and as destination displays for trains, buses, trams, and ferries.
One-color light is well suited for traffic lights and signals, exit signs, emergency vehicle lighting, ships' navigation lights, and LED-based Christmas lights.
Because of their long life, fast switching times, and visibility in broad daylight due to their high output and focus, LEDs have been used in automotive brake lights and turn signals. Their use in brake lights improves safety because of their much shorter rise time: they reach full brightness about 0.1 second faster than an incandescent bulb. This gives drivers behind more time to react. In a dual-intensity circuit (rear markers and brakes), if the LEDs are not pulsed at a fast enough frequency, they can create a phantom array, where ghost images of the LED appear if the eyes quickly scan across the array. White LED headlamps are beginning to appear. Using LEDs has styling advantages because LEDs can form much thinner lights than incandescent lamps with parabolic reflectors.
Due to the relative cheapness of low output LEDs, they are also used in many temporary uses such as glowsticks and throwies. Artists have also used LEDs for LED art.
Lighting
With the development of high-efficiency and high-power LEDs, it has become possible to use LEDs in lighting and illumination. To encourage the shift to LED lamps and other high-efficiency lighting, in 2008 the US Department of Energy created the L Prize competition. The Philips Lighting North America LED bulb won the first competition on August 3, 2011, after successfully completing 18 months of intensive field, lab, and product testing.
Efficient lighting is needed for sustainable architecture. As of 2011, some LED bulbs provide up to 150 lm/W and even inexpensive low-end models typically exceed 50 lm/W, so that a 6-watt LED could achieve the same results as a standard 40-watt incandescent bulb. The lower heat output of LEDs also reduces demand on air conditioning systems. Worldwide, LEDs are rapidly adopted to displace less effective sources such as incandescent lamps and CFLs and reduce electrical energy consumption and its associated emissions. Solar powered LEDs are used as street lights and in architectural lighting.
The mechanical robustness and long lifetime are used in automotive lighting on cars, motorcycles, and bicycle lights. LED street lights are employed on poles and in parking garages. In 2007, the Italian village of Torraca was the first place to convert its street lighting to LEDs.
Cabin lighting on recent Airbus and Boeing jetliners uses LED lighting. LEDs are also being used in airport and heliport lighting. LED airport fixtures currently include medium-intensity runway lights, runway centerline lights, taxiway centerline and edge lights, guidance signs, and obstruction lighting.
LEDs are also used as a light source for DLP projectors, and to backlight newer LCD televisions (referred to as LED TVs), computer monitors (including laptops) and handheld device LCDs, succeeding older CCFL-backlit LCDs, though now being superseded by OLED screens. RGB LEDs raise the color gamut by as much as 45%. Screens for TV and computer displays can be made thinner using LEDs for backlighting.
LEDs are small, durable and need little power, so they are used in handheld devices such as flashlights. LED strobe lights or camera flashes operate at a safe, low voltage, instead of the 250+ volts commonly found in xenon flashlamp-based lighting. This is especially useful in cameras on mobile phones, where space is at a premium and bulky voltage-raising circuitry is undesirable.
LEDs are used for infrared illumination in night vision uses including security cameras. A ring of LEDs around a video camera, aimed forward into a retroreflective background, allows chroma keying in video productions.
LEDs are used in mining operations, as cap lamps to provide light for miners. Research has been done to improve LEDs for mining, to reduce glare and to increase illumination, reducing risk of injury to the miners.
LEDs are increasingly finding uses in medical and educational applications, for example as mood enhancement. NASA has even sponsored research for the use of LEDs to promote health for astronauts.
Data communication and other signalling
Light can be used to transmit data and analog signals. For example, white lighting LEDs can be used in systems that assist people in navigating enclosed spaces while searching for particular rooms or objects.
Assistive listening devices in many theaters and similar spaces use arrays of infrared LEDs to send sound to listeners' receivers. Light-emitting diodes (as well as semiconductor lasers) are used to send data over many types of fiber optic cable, from digital audio over TOSLINK cables to the very high bandwidth fiber links that form the Internet backbone. For some time, computers were commonly equipped with IrDA interfaces, which allowed them to send and receive data to nearby machines via infrared.
Because LEDs can cycle on and off millions of times per second, very high data bandwidth can be achieved. For that reason, visible light communication (VLC) has been proposed as an alternative to the increasingly competitive radio bandwidth. VLC operates in the visible part of the electromagnetic spectrum, so data can be transmitted without occupying the frequencies of radio communications.
Machine vision systems
Machine vision systems often require bright and homogeneous illumination, so features of interest are easier to process. LEDs are often used.
Barcode scanners are the most common example of machine vision applications, and many of those scanners use red LEDs instead of lasers. Optical computer mice use LEDs as a light source for the miniature camera within the mouse.
LEDs are useful for machine vision because they provide a compact, reliable source of light. LED lamps can be turned on and off to suit the needs of the vision system, and the shape of the beam produced can be tailored to match the system's requirements.
Biological detection
The discovery of radiative recombination in aluminum gallium nitride (AlGaN) alloys by U.S. Army Research Laboratory (ARL) led to the conceptualization of UV light-emitting diodes (LEDs) to be incorporated in light-induced fluorescence sensors used for biological agent detection. In 2004, the Edgewood Chemical Biological Center (ECBC) initiated the effort to create a biological detector named TAC-BIO. The program capitalized on semiconductor UV optical sources (SUVOS) developed by the Defense Advanced Research Projects Agency (DARPA).
UV-induced fluorescence is one of the most robust techniques used for rapid real-time detection of biological aerosols. The first UV sensors used lasers and lacked practicality for field use. In order to address this, DARPA incorporated SUVOS technology to create a low-cost, small, lightweight, low-power device. The TAC-BIO detector's response time was one minute from when it sensed a biological agent. It was also demonstrated that the detector could be operated unattended indoors and outdoors for weeks at a time.
Aerosolized biological particles fluoresce and scatter light under a UV light beam. The observed fluorescence depends on the applied wavelength and on the biochemical fluorophores within the biological agent. UV-induced fluorescence offers a rapid, accurate, efficient and logistically practical way to detect biological agents, because the technique is reagentless: it requires no added chemical to produce a reaction, uses no consumables, and produces no chemical byproducts.
Additionally, TAC-BIO can reliably discriminate between threat and non-threat aerosols. It was claimed to be sensitive enough to detect low concentrations, but not so sensitive that it would cause false positives. The particle-counting algorithm used in the device converted raw data into information by counting the photon pulses per unit of time from the fluorescence and scattering detectors, and comparing the value to a set threshold.
The original TAC-BIO was introduced in 2010, while the second-generation TAC-BIO GEN II, was designed in 2015 to be more cost-efficient, as plastic parts were used. Its small, light-weight design allows it to be mounted to vehicles, robots, and unmanned aerial vehicles. The second-generation device could also be utilized as an environmental detector to monitor air quality in hospitals, airplanes, or even in households to detect fungus and mold.
Other applications
The light from LEDs can be modulated very quickly so they are used extensively in optical fiber and free space optics communications. This includes remote controls, such as for television sets, where infrared LEDs are often used. Opto-isolators use an LED combined with a photodiode or phototransistor to provide a signal path with electrical isolation between two circuits. This is especially useful in medical equipment where the signals from a low-voltage sensor circuit (usually battery-powered) in contact with a living organism must be electrically isolated from any possible electrical failure in a recording or monitoring device operating at potentially dangerous voltages. An optoisolator also lets information be transferred between circuits that do not share a common ground potential.
Many sensor systems rely on light as the signal source. LEDs are often ideal as a light source due to the requirements of the sensors. The Nintendo Wii's sensor bar uses infrared LEDs. Pulse oximeters use them for measuring oxygen saturation. Some flatbed scanners use arrays of RGB LEDs rather than the typical cold-cathode fluorescent lamp as the light source. Having independent control of three illuminated colors allows the scanner to calibrate itself for more accurate color balance, and there is no need for warm-up. Further, its sensors only need be monochromatic, since at any one time the page being scanned is only lit by one color of light.
Since LEDs can also be used as photodiodes, they can be used for both photo emission and detection. This could be used, for example, in a touchscreen that registers reflected light from a finger or stylus. Many materials and biological systems are sensitive to, or dependent on, light. Grow lights use LEDs to increase photosynthesis in plants, and bacteria and viruses can be removed from water and other substances using UV LEDs for sterilization. LEDs of certain wavelengths have also been used for light therapy treatment of neonatal jaundice and acne.
UV LEDs, with a spectral range of 220 nm to 395 nm, have other applications, such as water/air purification, surface disinfection, glue curing, free-space non-line-of-sight communication, high-performance liquid chromatography, UV-curing dye printing, phototherapy (295 nm for vitamin D synthesis, 308 nm as an excimer lamp or laser replacement), medical/analytical instrumentation, and DNA absorption.
LEDs have also been used as a medium-quality voltage reference in electronic circuits. The forward voltage drop (about 1.7 V for a red LED or 1.2 V for an infrared) can be used instead of a Zener diode in low-voltage regulators. Red LEDs have the flattest I/V curve above the knee. Nitride-based LEDs have a fairly steep I/V curve and are useless for this purpose. Although LED forward voltage is far more current-dependent than that of a Zener diode, Zener diodes with breakdown voltages below 3 V are not widely available.
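To give a feel for the current dependence mentioned above, the Shockley diode equation implies that the forward drop changes by roughly n·Vt·ln(I2/I1) when the bias current changes; the ideality factor and the decade of current below are assumed illustrative values, not data from the text.

```python
# Back-of-the-envelope sketch (assumed values): shift of an LED "reference" voltage
# when the bias current changes, per dV = n * Vt * ln(I2 / I1).
import math

n = 2.0          # assumed ideality factor typical of LEDs
vt = 0.02585     # thermal voltage in volts at about 300 K
shift = n * vt * math.log(10.0)   # a tenfold (one decade) change in current
print(f"about {shift * 1000:.0f} mV shift per decade of current")   # ~120 mV
```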
The progressive miniaturization of low-voltage lighting technology, such as LEDs and OLEDs, suitable to incorporate into low-thickness materials has fostered experimentation in combining light sources and wall covering surfaces for interior walls in the form of LED wallpaper.
Research and development
Key challenges
Further gains in LED efficiency hinge on ongoing improvements in materials such as phosphors and quantum dots.
The process of down-conversion (the method by which materials convert more-energetic photons to different, less energetic colors) also needs improvement. For example, the red phosphors used today are thermally sensitive and need to be improved so that they do not shift in color or suffer an efficiency drop-off with temperature. Red phosphors could also benefit from a narrower spectral width, allowing them to emit more lumens and convert photons more efficiently.
In addition, work remains to be done in the realms of current efficiency droop, color shift, system reliability, light distribution, dimming, thermal management, and power supply performance.
Early suspicions were that LED droop was caused by elevated temperatures, but scientists showed that temperature was not the root cause of efficiency droop. The mechanism causing efficiency droop was identified in 2007 as Auger recombination, a finding initially met with mixed reactions. A 2013 study conclusively identified Auger recombination as the cause.
Potential technology
A new family of LEDs is based on semiconductors called perovskites. In 2018, less than four years after their discovery, the ability of perovskite LEDs (PLEDs) to produce light from electrons already rivaled that of the best-performing OLEDs. They have potential for cost-effectiveness, since they can be processed from solution, a low-cost and low-tech method that might allow large-area perovskite-based devices to be made very cheaply. Their efficiency can be improved by eliminating non-radiative losses, in other words by eliminating recombination pathways that do not produce photons; by solving the outcoupling problem common to thin-film LEDs; or by balancing charge-carrier injection to increase the external quantum efficiency (EQE). The most recent PLED devices have pushed the EQE above 20%.
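EQE is simply the ratio of photons emitted out of the device to electrons injected into it. The sketch below illustrates that ratio; the photon rate and drive current are illustrative numbers, not data from the cited papers.

```python
# External quantum efficiency (EQE) = photons emitted out of the device
# per electron injected. The photon and electron rates below are illustrative.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs

def eqe(photons_out_per_s, current_A):
    electrons_in_per_s = current_A / ELEMENTARY_CHARGE
    return photons_out_per_s / electrons_in_per_s

# Example: 1.3e16 photons/s escaping the device at 10 mA drive current.
print(f"EQE = {eqe(1.3e16, 0.010):.1%}")  # ~20.8%
```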
In 2018, Cao et al. and Lin et al. independently published two papers on perovskite LEDs with EQE greater than 20%, a milestone in PLED development. Their devices have a similar planar structure, i.e., the active perovskite layer is sandwiched between two electrodes. To achieve a high EQE, both groups reduced non-radiative recombination, but each used its own, subtly different method to improve the EQE.
In the work of Cao et al., the researchers targeted the outcoupling problem: the optical physics of thin-film LEDs causes the majority of the light generated in the semiconductor to be trapped in the device. They demonstrated that solution-processed perovskites can spontaneously form submicrometre-scale crystal platelets that efficiently extract light from the device. These perovskites are formed by introducing amino-acid additives into the perovskite precursor solutions. In addition, their method passivates perovskite surface defects and reduces non-radiative recombination. By improving light outcoupling and reducing non-radiative losses, Cao and his colleagues achieved a PLED with an EQE of up to 20.7%.
Lin and his colleagues used a different approach to achieve a high EQE. Instead of modifying the microstructure of the perovskite layer, they adopted a new strategy for managing the compositional distribution in the device, an approach that simultaneously provides high luminescence and balanced charge injection. In other words, they kept a flat emissive layer but optimized the balance of electrons and holes injected into the perovskite, so as to make the most efficient use of the charge carriers. Moreover, in the perovskite layer the crystals are enclosed by an MABr additive shell (where MA is CH3NH3). The MABr shell passivates the non-radiative defects that would otherwise be present in the perovskite crystals, reducing non-radiative recombination. By balancing charge injection and decreasing non-radiative losses, Lin and his colleagues developed a PLED with an EQE of up to 20.3%.
Health and safety
Certain blue LEDs and cool-white LEDs can exceed safe limits of the so-called blue-light hazard as defined in eye safety specifications such as "ANSI/IESNA RP-27.1–05: Recommended Practice for Photobiological Safety for Lamp and Lamp Systems". One study showed no evidence of a risk in normal use at domestic illuminance, and that caution is only needed for particular occupational situations or for specific populations. In 2006, the International Electrotechnical Commission published IEC 62471 Photobiological safety of lamps and lamp systems, replacing the application of early laser-oriented standards for classification of LED sources.
While LEDs have the advantage over fluorescent lamps of not containing mercury, they may contain other hazardous metals such as lead and arsenic.
In 2016 the American Medical Association (AMA) issued a statement concerning the possible adverse influence of blueish street lighting on the sleep-wake cycle of city-dwellers. Critics in the industry claim exposure levels are not high enough to have a noticeable effect.
Environmental issues
Light pollution: Because white LEDs emit more short wavelength light than sources such as high-pressure sodium vapor lamps, the increased blue and green sensitivity of scotopic vision means that white LEDs used in outdoor lighting cause substantially more sky glow.
Impact on wildlife: LEDs are much more attractive to insects than sodium-vapor lights, so much so that there has been speculative concern about the possibility of disruption to food webs. LED lighting near beaches, particularly intense blue and white colors, can disorient turtle hatchlings and make them wander inland instead of toward the sea. The use of "turtle-safe lighting" LEDs that emit only at narrow portions of the visible spectrum is encouraged by conservancy groups in order to reduce harm.
Use in winter conditions: Since they give off much less heat than incandescent lights, LED lights used for traffic control can become obscured by snow, leading to accidents.
| Technology | Components | null |
18298 | https://en.wikipedia.org/wiki/Lunar%20eclipse | Lunar eclipse | A lunar eclipse is an astronomical event that occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. Such an alignment occurs during an eclipse season, approximately every six months, during the full moon phase, when the Moon's orbital plane is closest to the plane of the Earth's orbit.
This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy) with Earth between the other two, which can happen only on the night of a full moon when the Moon is near either lunar node. The type and length of a lunar eclipse depend on the Moon's proximity to the lunar node.
When the Moon is totally eclipsed by the Earth (a "deep eclipse"), it takes on a reddish color because the Earth completely blocks direct sunlight from reaching the Moon's surface; the only light reaching the lunar surface is light that has been refracted by the Earth's atmosphere. This light appears reddish due to the Rayleigh scattering of blue light, the same reason sunrises and sunsets are more orange than during the day.
Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours (while a total solar eclipse lasts only a few minutes at any given place) because the Moon's shadow is smaller. Also unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions.
The symbol for a lunar eclipse (or any body in the shadow of another) is 🝶 (U+1F776).
Types of lunar eclipse
Earth's shadow can be divided into two distinctive parts: the umbra and penumbra. Earth totally occludes direct solar radiation within the umbra, the central region of the shadow. However, since the Sun's diameter appears to be about one-quarter of Earth's in the lunar sky, the planet only partially blocks direct sunlight within the penumbra, the outer portion of the shadow.
Penumbral lunar eclipse
A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. No part of the Moon is in the Earth's umbra during this event, meaning that on all or a part of the Moon's surface facing Earth, the Sun is partially blocked. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when the majority of the Moon's diameter is immersed in Earth's penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the entire Moon lies exclusively within Earth's penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.
Partial lunar eclipse
When the Moon's near side penetrates partially into the Earth's umbra, it is known as a partial lunar eclipse, while a total lunar eclipse occurs when the entire Moon enters the Earth's umbra. During this event, one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. The Moon's average orbital speed is about 1.03 km/s, or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and last contacts of the Moon's limb with Earth's shadow is much longer and could last up to 236 minutes.
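A back-of-the-envelope estimate shows where figures of this order come from. The umbra diameter at the Moon's distance, the Moon's diameter, and the Moon's speed relative to the shadow (slightly less than its orbital speed, since the shadow itself drifts as Earth orbits the Sun) are rounded, assumed values; the result lands near the quoted maximum, which additionally assumes a slow-moving Moon near apogee and a wide umbra.

```python
# Rough estimate of the maximum length of totality for a central eclipse.
# All numbers are rounded illustrative values, not data for a specific eclipse.
umbra_diameter_km = 9000.0      # Earth's umbra at the Moon's distance (approx.)
moon_diameter_km = 3474.0       # Moon's diameter
relative_speed_km_s = 0.95      # Moon's speed relative to the shadow (approx.)

# The Moon is fully inside the umbra while its centre crosses a path of
# roughly (umbra diameter - Moon diameter).
totality_s = (umbra_diameter_km - moon_diameter_km) / relative_speed_km_s
print(f"~{totality_s / 60:.0f} minutes of totality")  # ~97 minutes
```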
Total lunar eclipse
When the Moon's near side entirely passes into the Earth's umbral shadow, a total lunar eclipse occurs. Just prior to complete entry, the brightness of the lunar limb—the curved edge of the Moon still being hit by direct sunlight—will cause the rest of the Moon to appear comparatively dim. The moment the Moon enters a complete eclipse, the entire surface will become more or less uniformly bright, and surrounding stars may become visible. Later, as the Moon's opposite limb is struck by sunlight, the overall disk will again become obscured. This is because, as viewed from the Earth, the brightness of a lunar limb is generally greater than that of the rest of the surface due to reflections from the many surface irregularities within the limb: sunlight striking these irregularities is always reflected back in greater quantities than that striking more central parts, which is why the edges of full moons generally appear brighter than the rest of the lunar surface. This is similar to the effect of velvet fabric over a convex curved surface, which, to an observer, will appear darkest at the center of the curve. The same is true of any planetary body with little or no atmosphere and an irregular cratered surface (e.g., Mercury) when viewed opposite the Sun.
Central lunar eclipse
Central lunar eclipse is a total lunar eclipse during which the Moon passes near and through the centre of Earth's shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.
The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse's duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is the slowest. The diameter of Earth's umbra does not decrease appreciably over the range of the Moon's orbital distances. Thus, the concurrence of a totally eclipsed Moon near apogee will lengthen the duration of totality.
Selenelion
A selenelion or selenehelion, also called a horizontal eclipse, occurs where and when both the Sun and an eclipsed Moon can be observed at the same time. The event can only be observed just before sunset or just after sunrise, when both bodies will appear just above opposite horizons at nearly opposite points in the sky. A selenelion occurs during every total lunar eclipse—it is an experience of the observer, not a planetary event separate from the lunar eclipse itself. Typically, observers on Earth located on high mountain ridges undergoing false sunrise or false sunset at the same moment of a total lunar eclipse will be able to experience it. Although during selenelion the Moon is completely within the Earth's umbra, both it and the Sun can be observed in the sky because atmospheric refraction causes each body to appear higher (i.e., more central) in the sky than its true geometric planetary position.
Timing
The timing of total lunar eclipses is determined by what are known as its "contacts" (moments of contact with Earth's shadow); a short sketch computing phase durations from these contact times follows the list:
P1 (First contact): Beginning of the penumbral eclipse. Earth's penumbra touches the Moon's outer limb.
U1 (Second contact): Beginning of the partial eclipse. Earth's umbra touches the Moon's outer limb.
U2 (Third contact): Beginning of the total eclipse. The Moon's surface is entirely within Earth's umbra.
Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth's umbra.
U3 (Fourth contact): End of the total eclipse. The Moon's outer limb exits Earth's umbra.
U4 (Fifth contact): End of the partial eclipse. Earth's umbra leaves the Moon's surface.
P4 (Sixth contact): End of the penumbral eclipse. Earth's penumbra no longer makes contact with the Moon.
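The contact times above translate directly into phase durations: the penumbral phase runs from P1 to P4, the umbral phases from U1 to U4, and totality from U2 to U3. Below is a minimal sketch of that arithmetic; the timestamps are illustrative examples, not values taken from a published ephemeris.

```python
# Phase durations computed from the eclipse contact times defined above.
# The timestamps are illustrative, not from a published ephemeris.
from datetime import datetime

contacts = {
    "P1": datetime(2025, 3, 14, 3, 57),
    "U1": datetime(2025, 3, 14, 5, 9),
    "U2": datetime(2025, 3, 14, 6, 26),
    "U3": datetime(2025, 3, 14, 7, 31),
    "U4": datetime(2025, 3, 14, 8, 47),
    "P4": datetime(2025, 3, 14, 10, 0),
}

def minutes(a, b):
    """Minutes elapsed between two named contacts."""
    return (contacts[b] - contacts[a]).total_seconds() / 60

print(f"Penumbral eclipse (P1-P4): {minutes('P1', 'P4'):.0f} min")
print(f"Umbral eclipse (U1-U4):    {minutes('U1', 'U4'):.0f} min")
print(f"Totality (U2-U3):          {minutes('U2', 'U3'):.0f} min")
```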
Danjon scale
The following scale (the Danjon scale) was devised by André Danjon for rating the overall darkness of lunar eclipses:
L = 0: Very dark eclipse. Moon almost invisible, especially at mid-totality.
L = 1: Dark eclipse, gray or brownish in coloration. Details distinguishable only with difficulty.
L = 2: Deep red or rust-colored eclipse. Very dark central shadow, while outer edge of umbra is relatively bright.
L = 3: Brick-red eclipse. Umbral shadow usually has a bright or yellow rim.
L = 4: Very bright copper-red or orange eclipse. Umbral shadow is bluish and has a very bright rim.
Lunar versus solar eclipse
There is often confusion between a solar eclipse and a lunar eclipse. While both involve interactions between the Sun, Earth, and the Moon, they are very different in their interactions.
The Moon does not completely darken as it passes through the umbra because of the refraction of sunlight by Earth's atmosphere into the shadow cone; if Earth had no atmosphere, the Moon would be completely dark during the eclipse. The reddish coloration arises because sunlight reaching the Moon must pass through a long and dense layer of Earth's atmosphere, where it is scattered. Shorter wavelengths are more likely to be scattered by the air molecules and small particles; thus, the longer wavelengths predominate by the time the light rays have penetrated the atmosphere. Human vision perceives this resulting light as red. This is the same effect that causes sunsets and sunrises to turn the sky a reddish color. An alternative way of conceiving this scenario is to realize that, as viewed from the Moon, the Sun would appear to be setting (or rising) behind Earth.
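The wavelength dependence described above is strong: Rayleigh scattering scales roughly as the inverse fourth power of wavelength. The short sketch below quantifies the blue-versus-red difference using typical wavelengths for blue and red light.

```python
# Rayleigh scattering strength scales roughly as 1/wavelength**4, which is why
# the sunlight filtered through Earth's atmosphere onto the eclipsed Moon is red.
blue_nm, red_nm = 450.0, 650.0   # typical wavelengths for blue and red light
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```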
The amount of refracted light depends on the amount of dust or clouds in the atmosphere; this also controls how much light is scattered. In general, the dustier the atmosphere, the more that other wavelengths of light will be removed (compared to red light), leaving the resulting light a deeper red color. This causes the resulting coppery-red hue of the Moon to vary from one eclipse to the next. Volcanoes are notable for expelling large quantities of dust into the atmosphere, and a large eruption shortly before an eclipse can have a large effect on the resulting color.
In culture
Several cultures have myths related to lunar eclipses or allude to the lunar eclipse as being a good or bad omen. The Egyptians saw the eclipse as a sow swallowing the Moon for a short time; other cultures view the eclipse as the Moon being swallowed by other animals, such as a jaguar in Mayan tradition, or a mythical three-legged toad known as Chan Chu in China. Some societies thought it was a demon swallowing the Moon, and that they could chase it away by throwing stones and curses at it. The Ancient Greeks correctly believed the Earth was round and used the shadow from the lunar eclipse as evidence. Some Hindus believe in the importance of bathing in the Ganges River following an eclipse because it will help to achieve salvation.
Inca
Similarly to the Mayans, the Incans believed that lunar eclipses occurred when a jaguar ate the Moon, which is why a blood moon looks red. The Incans also believed that once the jaguar finished eating the Moon, it could come down and devour all the animals on Earth, so they would take spears and shout at the Moon to keep it away.
Mesopotamians
The ancient Mesopotamians believed that a lunar eclipse was when the Moon was being attacked by seven demons. This attack was more than just one on the Moon, however, for the Mesopotamians linked what happened in the sky with what happened on the land, and because the king of Mesopotamia represented the land, the seven demons were thought to be also attacking the king. In order to prevent this attack on the king, the Mesopotamians made someone pretend to be the king so they would be attacked instead of the true king. After the lunar eclipse was over, the substitute king was made to disappear (possibly by poisoning).
Chinese
In some Chinese cultures, people would ring bells to prevent a dragon or other wild animals from biting the Moon. In the 19th century, during a lunar eclipse, the Chinese navy fired its artillery because of this belief. During the Zhou Dynasty (c. 1046–256 BC), the Book of Songs records that the sight of a red Moon engulfed in darkness was believed to foreshadow famine or disease.
Blood moon
Certain lunar eclipses have been referred to as "blood moons" in popular articles but this is not a scientifically recognized term. This term has been given two separate, but overlapping, meanings.
The meaning usually relates to the reddish color a totally eclipsed Moon takes on to observers on Earth. As sunlight penetrates the atmosphere of Earth, the gaseous layer filters and refracts the rays in such a way that the green to violet wavelengths of the visible spectrum scatter more strongly than the red, thus giving the Moon a reddish cast. This is possible because refraction in Earth's atmosphere bends some sunlight around the Earth, and that light then reflects off the Moon.
Occurrence
At least two lunar eclipses and as many as five occur every year, although total lunar eclipses are significantly less common than partial lunar eclipses. If the date and time of an eclipse are known, the occurrences of upcoming eclipses can be predicted using an eclipse cycle, such as the saros. Eclipses occur only during an eclipse season, when the Sun appears to pass near either node of the Moon's orbit.
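As a simple illustration of cycle-based prediction, eclipses separated by one saros period (about 6,585.32 days, i.e. 18 years, 11 days and roughly 8 hours) belong to the same series and have very similar geometry. The starting timestamp below is only an example.

```python
# Predicting a follow-on eclipse in the same saros series by adding one
# saros period. The starting timestamp is an example value.
from datetime import datetime, timedelta

SAROS = timedelta(days=6585, hours=8)   # ~6585.32 days

known_eclipse = datetime(2022, 5, 16, 4, 11)   # sample starting date/time (UTC)
next_in_series = known_eclipse + SAROS
print(next_in_series)  # 18 years 11 days later, shifted ~8 h (about 1/3 of a rotation)
```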
View from the Moon
Viewed from the Moon, a lunar eclipse is a solar eclipse: the Sun is hidden behind the Earth. The event makes Earth's atmosphere appear as a red ring around the dark disk of the Earth. Because lunar eclipses take place at full moon, the side of the Earth facing the Moon is its night side, faintly illuminated by moonlight.
| Physical sciences | Celestial mechanics | null |
18308 | https://en.wikipedia.org/wiki/Lanthanide | Lanthanide | The lanthanide () or lanthanoid () series of chemical elements comprises at least the 14 metallic chemical elements with atomic numbers 57–70, from lanthanum through ytterbium. In the periodic table, they fill the 4f orbitals. Lutetium (element 71) is also sometimes considered a lanthanide, despite being a d-block element and a transition metal.
The informal chemical symbol Ln is used in general discussions of lanthanide chemistry to refer to any lanthanide. All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell. Lutetium is a d-block element (thus also a transition metal), and on this basis its inclusion has been questioned; however, like its congeners scandium and yttrium in group 3, it behaves similarly to the other 14. The term rare-earth element or rare-earth metal is often used to include the stable group 3 elements Sc, Y, and Lu in addition to the 4f elements. All lanthanide elements form trivalent cations, Ln3+, whose chemistry is largely determined by the ionic radius, which decreases steadily from lanthanum (La) to lutetium (Lu).
These elements are called lanthanides because the elements in the series are chemically similar to lanthanum. Because "lanthanide" means "like lanthanum", it has been argued that lanthanum cannot logically be a lanthanide, but the International Union of Pure and Applied Chemistry (IUPAC) acknowledges its inclusion based on common usage.
In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods), respectively.
The 1985 IUPAC "Red Book" (p. 45) recommends using lanthanoid instead of lanthanide, as the ending normally indicates a negative ion. However, owing to widespread current use, lanthanide is still allowed.
Etymology
The term "lanthanide" was introduced by Victor Goldschmidt in 1925. Despite their abundance, the technical term "lanthanides" is interpreted to reflect a sense of elusiveness on the part of these elements, as it comes from the Greek λανθανειν (lanthanein), "to lie hidden".
Rather than referring to their natural abundance, the word reflects their property of "hiding" behind each other in minerals. The term derives from lanthanum, first discovered in 1838, at that time a so-called new rare-earth element "lying hidden" or "escaping notice" in a cerium mineral, and it is an irony that lanthanum was later identified as the first in an entire series of chemically similar elements and gave its name to the whole series.
Together with the stable elements of group 3, scandium, yttrium, and lutetium, the trivial name "rare earths" is sometimes used to describe the set of lanthanides. The "earth" in the name "rare earths" arises from the minerals from which they were isolated, which were uncommon oxide-type minerals. However, these elements are neither rare in abundance nor "earths" (an obsolete term for water-insoluble strongly basic oxides of electropositive metals incapable of being smelted into metal using late 18th century technology). Group 2 is known as the alkaline earth elements for much the same reason.
The "rare" in the name "rare earths" has more to do with the difficulty of separating the individual elements than with the scarcity of any of them. Element 66, dysprosium, was similarly named, from the Greek dysprositos, "hard to get at". The elements 57 (La) to 71 (Lu) are very similar chemically to one another and frequently occur together in nature. Often a mixture of three to all 15 of the lanthanides (along with yttrium as a 16th) occurs in minerals, such as monazite and samarskite (for which samarium is named). These minerals can also contain group 3 elements, and actinides such as uranium and thorium. A majority of the rare earths were discovered at the same mine in Ytterby, Sweden, and four of them are named (yttrium, ytterbium, erbium, terbium) after the village and a fifth (holmium) after Stockholm; scandium is named after Scandinavia, thulium after the old name Thule, and the immediately-following group 4 element (number 72) hafnium is named for the Latin name of the city of Copenhagen.
The properties of the lanthanides arise from the order in which the electron shells of these elements are filled—the outermost (6s) has the same configuration for all of them, and a deeper (4f) shell is progressively filled with electrons as the atomic number increases from 57 towards 71. For many years, mixtures of more than one rare earth were considered to be single elements, such as neodymium and praseodymium being thought to be the single element didymium. Very small differences in solubility are used in solvent and ion-exchange purification methods for these elements, which require repeated application to obtain a purified metal. The diverse applications of refined metals and their compounds can be attributed to the subtle and pronounced variations in their electronic, electrical, optical, and magnetic properties.
By way of example of the term meaning "hidden" rather than "scarce", cerium is almost as abundant as copper; on the other hand promethium, with no stable or long-lived isotopes, is truly rare.
Physical properties of the elements
(The table of physical properties is not reproduced here. Its notes state that electron counts are given between the initial Xe core and the final 6s2 shell, and that Sm has a close-packed structure like most of the lanthanides but with an unusual nine-layer repeat.)
Gschneider and Daane (1988) attribute the trend in melting point, which increases across the series from lanthanum (920 °C) to lutetium (1622 °C), to the extent of hybridization of the 6s, 5d, and 4f orbitals. The hybridization is believed to be at its greatest for cerium, which has the lowest melting point of all, 795 °C.
The lanthanide metals are soft; their hardness increases across the series. Europium stands out, as it has the lowest density in the series at 5.24 g/cm3 and the largest metallic radius in the series at 208.4 pm. It can be compared to barium, which has a metallic radius of 222 pm. It is believed that the metal contains the larger Eu2+ ion and that there are only two electrons in the conduction band. Ytterbium also has a large metallic radius, and a similar explanation is suggested.
The resistivities of the lanthanide metals are relatively high, ranging from 29 to 134 μΩ·cm. These values can be compared to a good conductor such as aluminium, which has a resistivity of 2.655 μΩ·cm.
With the exceptions of La, Yb, and Lu (which have no unpaired f electrons), the lanthanides are strongly paramagnetic, and this is reflected in their magnetic susceptibilities. Gadolinium becomes ferromagnetic at below 16 °C (Curie point). The other heavier lanthanides – terbium, dysprosium, holmium, erbium, thulium, and ytterbium – become ferromagnetic at much lower temperatures.
Chemistry and compounds
f → f transitions are symmetry-forbidden (Laporte-forbidden), which is also true of the d → d transitions of transition metals. However, transition metals are able to use vibronic coupling to break this rule. The valence orbitals in lanthanides are almost entirely non-bonding, so little effective vibronic coupling takes place; hence the spectra from f → f transitions are much weaker and narrower than those from d → d transitions. In general this makes the colors of lanthanide complexes far fainter than those of transition metal complexes.
Effect of 4f orbitals
Viewing the lanthanides from left to right in the periodic table, the seven 4f atomic orbitals become progressively more filled (see above). The electronic configuration of most neutral gas-phase lanthanide atoms is [Xe]6s24fn, where n is 56 less than the atomic number Z. Exceptions are La, Ce, Gd, and Lu, which have 4fn−15d1 (though even then 4fn is a low-lying excited state for La, Ce, and Gd; for Lu, the 4f shell is already full, and the fifteenth electron has no choice but to enter 5d). With the exception of lutetium, the 4f orbitals are chemically active in all lanthanides and produce profound differences between lanthanide chemistry and transition metal chemistry. The 4f orbitals penetrate the [Xe] core and are isolated, and thus they do not participate much in bonding. This explains why crystal field effects are small and why they do not form π bonds. As there are seven 4f orbitals, the number of unpaired electrons can be as high as 7, which gives rise to the large magnetic moments observed for lanthanide compounds.
Measuring the magnetic moment can be used to investigate the 4f electron configuration, and this is a useful tool in providing an insight into the chemical bonding. The lanthanide contraction, i.e. the reduction in size of the Ln3+ ion from La3+ (103 pm) to Lu3+ (86.1 pm), is often explained by the poor shielding of the 5s and 5p electrons by the 4f electrons.
The chemistry of the lanthanides is dominated by the +3 oxidation state, and in LnIII compounds the 6s electrons and (usually) one 4f electron are lost and the ions have the configuration [Xe]4f(n−1). All the lanthanide elements exhibit the oxidation state +3. In addition, Ce3+ can lose its single f electron to form Ce4+ with the stable electronic configuration of xenon. Also, Eu3+ can gain an electron to form Eu2+ with the f7 configuration that has the extra stability of a half-filled shell. Other than Ce(IV) and Eu(II), none of the lanthanides are stable in oxidation states other than +3 in aqueous solution.
In terms of reduction potentials, the Ln0/3+ couples are nearly the same for all lanthanides, ranging from −1.99 (for Eu) to −2.35 V (for Pr). Thus these metals are highly reducing, with reducing power similar to alkaline earth metals such as Mg (−2.36 V).
Lanthanide oxidation states
The ionization energies for the lanthanides can be compared with aluminium. In aluminium the sum of the first three ionization energies is 5139 kJ·mol−1, whereas the lanthanides fall in the range 3455 – 4186 kJ·mol−1. This correlates with the highly reactive nature of the lanthanides.
The sum of the first two ionization energies for europium, 1632 kJ·mol−1, can be compared with that of barium, 1468.1 kJ·mol−1, and europium's third ionization energy is the highest of the lanthanides. The sum of the first two ionization energies for ytterbium is the second lowest in the series, and its third ionization energy is the second highest. The high third ionization energies for Eu and Yb correlate with the half-filling (4f7) and complete filling (4f14) of the 4f subshell, and the stability afforded by such configurations due to exchange energy. Europium and ytterbium form salt-like compounds with Eu2+ and Yb2+, for example the salt-like dihydrides. Both europium and ytterbium dissolve in liquid ammonia forming solutions of Ln2+(NH3)x, again demonstrating their similarities to the alkaline earth metals.
The relative ease with which the fourth electron can be removed in cerium and (to a lesser extent) praseodymium explains why Ce(IV) and Pr(IV) compounds can be formed; for example, CeO2 is formed rather than Ce2O3 when cerium reacts with oxygen. Tb also has a well-known +4 state, as removing the fourth electron in this case produces a half-filled 4f7 configuration.
The additional stable valences for Ce and Eu mean that their abundances in rocks sometimes vary significantly relative to the other rare earth elements: see cerium anomaly and europium anomaly.
Separation of lanthanides
The similarity in ionic radius between adjacent lanthanide elements makes it difficult to separate them from each other in naturally occurring ores and other mixtures. Historically, the very laborious processes of cascading and fractional crystallization were used. Because the lanthanide ions have slightly different radii, the lattice energy of their salts and hydration energies of the ions will be slightly different, leading to a small difference in solubility. Salts of the formula Ln(NO3)3·2NH4NO3·4H2O can be used. Industrially, the elements are separated from each other by solvent extraction. Typically an aqueous solution of nitrates is extracted into kerosene containing tri-n-butylphosphate. The strength of the complexes formed increases as the ionic radius decreases, so solubility in the organic phase increases. Complete separation can be achieved continuously by use of countercurrent exchange methods. The elements can also be separated by ion-exchange chromatography, making use of the fact that the stability constant for formation of EDTA complexes increases from log K ≈ 15.5 for [La(EDTA)]− to log K ≈ 19.8 for [Lu(EDTA)]−.
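The stability constants quoted above give a feel for why chromatographic separation works at all, and why it still needs many stages. The sketch below only rearranges those quoted numbers; the per-element step is simply the overall spread averaged over the fourteen La-to-Lu intervals.

```python
# Relative preference of EDTA for Lu3+ over La3+, from the stability
# constants quoted above (log K ~ 19.8 vs ~ 15.5).
log_K_Lu, log_K_La = 19.8, 15.5

overall_selectivity = 10 ** (log_K_Lu - log_K_La)
print(f"K(Lu-EDTA)/K(La-EDTA) ~ {overall_selectivity:.0e}")   # ~2e4

# Averaged over the 14 steps from La to Lu, adjacent lanthanides differ by
# only ~0.3 log units, i.e. roughly a factor of 2, which is why many
# repeated ion-exchange (or extraction) stages are needed.
step = (log_K_Lu - log_K_La) / 14
print(f"Average adjacent-pair selectivity ~ {10**step:.1f}x")
```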
Coordination chemistry and catalysis
When in the form of coordination complexes, lanthanides exist overwhelmingly in their +3 oxidation state, although particularly stable 4f configurations can also give +4 (Ce, Pr, Tb) or +2 (Sm, Eu, Yb) ions. All of these forms are strongly electropositive and thus lanthanide ions are hard Lewis acids. The oxidation states are also very stable; with the exceptions of SmI2 and cerium(IV) salts, lanthanides are not used for redox chemistry. 4f electrons have a high probability of being found close to the nucleus and are thus strongly affected as the nuclear charge increases across the series; this results in a corresponding decrease in ionic radii referred to as the lanthanide contraction.
The low probability of the 4f electrons existing at the outer region of the atom or ion permits little effective overlap between the orbitals of a lanthanide ion and any binding ligand. Thus lanthanide complexes typically have little or no covalent character and are not influenced by orbital geometries. The lack of orbital interaction also means that varying the metal typically has little effect on the complex (other than size), especially when compared to transition metals. Complexes are held together by weaker electrostatic forces which are omni-directional and thus the ligands alone dictate the symmetry and coordination of complexes. Steric factors therefore dominate, with coordinative saturation of the metal being balanced against inter-ligand repulsion. This results in a diverse range of coordination geometries, many of which are irregular, and also manifests itself in the highly fluxional nature of the complexes. As there is no energetic reason to be locked into a single geometry, rapid intramolecular and intermolecular ligand exchange will take place. This typically results in complexes that rapidly fluctuate between all possible configurations.
Many of these features make lanthanide complexes effective catalysts. Hard Lewis acids are able to polarise bonds upon coordination and thus alter the electrophilicity of compounds, with a classic example being the Luche reduction. The large size of the ions coupled with their labile ionic bonding allows even bulky coordinating species to bind and dissociate rapidly, resulting in very high turnover rates; thus excellent yields can often be achieved with loadings of only a few mol%. The lack of orbital interactions combined with the lanthanide contraction means that the lanthanides change in size across the series but that their chemistry remains much the same. This allows for easy tuning of the steric environments and examples exist where this has been used to improve the catalytic activity of the complex and change the nuclearity of metal clusters.
Despite this, the use of lanthanide coordination complexes as homogeneous catalysts is largely restricted to the laboratory, and there are currently few examples of them being used on an industrial scale. Lanthanides exist in many forms other than coordination complexes, and many of these are industrially useful. In particular, lanthanide metal oxides are used as heterogeneous catalysts in various industrial processes.
Ln(III) compounds
The trivalent lanthanides mostly form ionic salts. The trivalent ions are hard acceptors and form more stable complexes with oxygen-donor ligands than with nitrogen-donor ligands. The larger ions are 9-coordinate in aqueous solution, [Ln(H2O)9]3+ but the smaller ions are 8-coordinate, [Ln(H2O)8]3+. There is some evidence that the later lanthanides have more water molecules in the second coordination sphere. Complexation with monodentate ligands is generally weak because it is difficult to displace water molecules from the first coordination sphere. Stronger complexes are formed with chelating ligands because of the chelate effect, such as the tetra-anion derived from 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA).
Ln(II) and Ln(IV) compounds
The most common divalent derivatives of the lanthanides are those of Eu(II), which achieves a favorable f7 configuration. Divalent halide derivatives are known for all of the lanthanides. They are either conventional salts or are Ln(III) "electride"-like salts. The simple salts include YbI2, EuI2, and SmI2. The electride-like salts, described as Ln3+, 2I−, e−, include LaI2, CeI2 and GdI2. Many of the iodides form soluble complexes with ethers, e.g. TmI2(dimethoxyethane)3. Samarium(II) iodide is a useful reducing agent. Ln(II) complexes can be synthesized by transmetalation reactions. The normal range of oxidation states can be expanded via the use of sterically bulky cyclopentadienyl ligands; in this way many lanthanides can be isolated as Ln(II) compounds.
Ce(IV) in ceric ammonium nitrate is a useful oxidizing agent. Ce(IV) is the exception, owing to its tendency to attain an empty 4f shell. Otherwise tetravalent lanthanides are rare; however, Tb(IV) and Pr(IV) complexes have recently been shown to exist.
Hydrides
Lanthanide metals react exothermically with hydrogen to form dihydrides, LnH2. With the exception of Eu and Yb, which resemble the Ba and Ca hydrides (non-conducting, transparent, salt-like compounds), they form black, pyrophoric, conducting compounds in which the metal sub-lattice is face-centred cubic and the H atoms occupy tetrahedral sites. Further hydrogenation produces a trihydride which is non-stoichiometric, non-conducting, and more salt-like. The formation of the trihydride is associated with an 8–10% increase in volume, which is linked to greater localization of charge on the hydrogen atoms, which become more anionic (H− hydride anion) in character.
Halides
The only tetrahalides known are the tetrafluorides of cerium, praseodymium, terbium, neodymium and dysprosium, the last two known only under matrix isolation conditions.
All of the lanthanides form trihalides with fluorine, chlorine, bromine and iodine. They are all high melting and predominantly ionic in nature. The fluorides are only slightly soluble in water and are not sensitive to air, and this contrasts with the other halides which are air sensitive, readily soluble in water and react at high temperature to form oxohalides.
The trihalides were important because pure metal can be prepared from them. In the gas phase the trihalides are planar or approximately planar; the lighter lanthanides have a lower proportion of dimers, the heavier lanthanides a higher proportion. The dimers have a similar structure to Al2Cl6.
Some of the dihalides are conducting while the rest are insulators. The conducting forms can be considered as LnIII electride compounds where the electron is delocalised into a conduction band, Ln3+ (X−)2(e−). All of the diiodides have relatively short metal-metal separations. The CuTi2 structure of the lanthanum, cerium and praseodymium diiodides, along with HP-NdI2, contains square (4⁴) nets of metal and iodine atoms with short metal-metal bonds (393–386 pm, La–Pr). These compounds should be considered to be two-dimensional metals (two-dimensional in the same way that graphite is). The salt-like dihalides include those of Eu, Dy, Tm, and Yb. The formation of a relatively stable +2 oxidation state for Eu and Yb is usually explained by the stability (exchange energy) of the half-filled (f7) and fully filled (f14) subshells. GdI2 possesses the layered MoS2 structure, is ferromagnetic and exhibits colossal magnetoresistance.
The sesquihalides Ln2X3 and the Ln7I12 compounds contain metal clusters: discrete Ln6I12 clusters in Ln7I12, and condensed clusters forming chains in the sesquihalides. Scandium forms a similar cluster compound with chlorine, Sc7Cl12. Unlike many transition metal clusters, these lanthanide clusters do not have strong metal-metal interactions, owing to the low number of valence electrons involved; instead they are stabilised by the surrounding halogen atoms.
LaI and TmI are the only known monohalides. LaI, prepared from the reaction of LaI3 and La metal, has a NiAs-type structure and can be formulated La3+ (I−)(e−)2. TmI is a true Tm(I) compound; however, it has not been isolated in a pure state.
Oxides and hydroxides
All of the lanthanides form sesquioxides, Ln2O3. The lighter/larger lanthanides adopt a hexagonal 7-coordinate structure while the heavier/smaller ones adopt a cubic 6-coordinate "C-M2O3" structure. All of the sesquioxides are basic, and absorb water and carbon dioxide from air to form carbonates, hydroxides and hydroxycarbonates. They dissolve in acids to form salts.
Cerium forms a stoichiometric dioxide, CeO2, where cerium has an oxidation state of +4. CeO2 is basic and dissolves with difficulty in acid to form Ce4+ solutions, from which CeIV salts can be isolated, for example the hydrated nitrate Ce(NO3)4·5H2O. CeO2 is used as an oxidation catalyst in catalytic converters. Praseodymium and terbium form non-stoichiometric oxides containing LnIV, although more extreme reaction conditions can produce stoichiometric (or near stoichiometric) PrO2 and TbO2.
Europium and ytterbium form salt-like monoxides, EuO and YbO, which have a rock salt structure. EuO is ferromagnetic at low temperatures, and is a semiconductor with possible applications in spintronics. A mixed EuII/EuIII oxide Eu3O4 can be produced by reducing Eu2O3 in a stream of hydrogen. Neodymium and samarium also form monoxides, but these are shiny conducting solids, although the existence of samarium monoxide is considered dubious.
All of the lanthanides form hydroxides, Ln(OH)3. With the exception of lutetium hydroxide, which has a cubic structure, they have the hexagonal UCl3 structure. The hydroxides can be precipitated from solutions of LnIII. They can also be formed by the reaction of the sesquioxide, Ln2O3, with water, but although this reaction is thermodynamically favorable it is kinetically slow for the heavier members of the series. Fajans' rules indicate that the smaller Ln3+ ions will be more polarizing and their salts correspondingly less ionic. The hydroxides of the heavier lanthanides become less basic, for example Yb(OH)3 and Lu(OH)3 are still basic hydroxides but will dissolve in hot concentrated NaOH.
Chalcogenides (S, Se, Te)
All of the lanthanides form Ln2Q3 (Q = S, Se, Te). The sesquisulfides can be produced by reaction of the elements or (with the exception of Eu2S3) by sulfidizing the oxide (Ln2O3) with H2S. The sesquisulfides, Ln2S3, generally lose sulfur when heated and can form a range of compositions between Ln2S3 and Ln3S4. The sesquisulfides are insulators, but some of the Ln3S4 are metallic conductors (e.g. Ce3S4), formulated (Ln3+)3 (S2−)4 (e−), while others (e.g. Eu3S4 and Sm3S4) are semiconductors. Structurally, the sesquisulfides adopt structures that vary according to the size of the Ln metal: the lighter and larger lanthanides favor 7-coordinate metal atoms, the heaviest and smallest (Yb and Lu) favor 6-coordination, and the rest adopt structures with a mixture of 6- and 7-coordination.
Polymorphism is common amongst the sesquisulfides. The colors of the sesquisulfides vary metal to metal and depend on the polymorphic form. The colors of the γ-sesquisulfides are La2S3, white/yellow; Ce2S3, dark red; Pr2S3, green; Nd2S3, light green; Gd2S3, sand; Tb2S3, light yellow and Dy2S3, orange. The shade of γ-Ce2S3 can be varied by doping with Na or Ca with hues ranging from dark red to yellow, and Ce2S3 based pigments are used commercially and are seen as low toxicity substitutes for cadmium based pigments.
All of the lanthanides form monochalcogenides, LnQ (Q = S, Se, Te). The majority of the monochalcogenides are conducting, indicating a formulation LnIIIQ2−(e−), where the electron is in a conduction band. The exceptions are SmQ, EuQ and YbQ, which are semiconductors or insulators but exhibit a pressure-induced transition to a conducting state.
Compounds LnQ2 are known but these do not contain LnIV but are LnIII compounds containing polychalcogenide anions.
Oxysulfides Ln2O2S are well known; they all have the same structure with 7-coordinate Ln atoms, with 3 sulfur and 4 oxygen atoms as near neighbours.
Doping these with other lanthanide elements produces phosphors. As an example, gadolinium oxysulfide, Gd2O2S doped with Tb3+ produces visible photons when irradiated with high energy X-rays and is used as a scintillator in flat panel detectors.
When mischmetal, an alloy of lanthanide metals, is added to molten steel to remove oxygen and sulfur, stable oxysulfides are produced that form an immiscible solid.
Pnictides (group 15)
All of the lanthanides form a mononitride, LnN, with the rock salt structure. The mononitrides have attracted interest because of their unusual physical properties. SmN and EuN are reported as being "half metals". NdN, GdN, TbN and DyN are ferromagnetic, SmN is antiferromagnetic. Applications in the field of spintronics are being investigated.
CeN is unusual as it is a metallic conductor, contrasting with the other lanthanide nitrides and also with the other cerium pnictides. A simple description is Ce4+N3− (e−), but the interatomic distances are a better match for the trivalent state than for the tetravalent state. A number of different explanations have been offered.
The nitrides can be prepared by the reaction of lanthanide metals with nitrogen. Some nitride is produced along with the oxide when lanthanide metals are ignited in air. Alternative methods of synthesis are a high-temperature reaction of lanthanide metals with ammonia, or the decomposition of lanthanide amides, Ln(NH2)3. Achieving pure stoichiometric compounds, and crystals with low defect density, has proved difficult. The lanthanide nitrides are sensitive to air and hydrolyse, producing ammonia.
The other pnictogens (phosphorus, arsenic, antimony and bismuth) also react with the lanthanide metals to form monopnictides, LnQ, where Q = P, As, Sb or Bi. Additionally, a range of other compounds can be produced with varying stoichiometries, such as LnP2, LnP5, LnP7, Ln3As, Ln5As3 and LnAs2.
Carbides
Carbides of varying stoichiometries are known for the lanthanides. Non-stoichiometry is common. All of the lanthanides form LnC2 and Ln2C3, which both contain C2 units. The dicarbides, with the exception of EuC2, are metallic conductors with the calcium carbide structure and can be formulated as Ln3+C22−(e−). The C-C bond length is longer than that in CaC2, which contains the C22− anion, indicating that the antibonding orbitals of the C22− anion are involved in the conduction band. These dicarbides hydrolyse to form hydrogen and a mixture of hydrocarbons. EuC2, and to a lesser extent YbC2, hydrolyse differently, producing a higher percentage of acetylene (ethyne). The sesquicarbides, Ln2C3, can be formulated as Ln4(C2)3.
These compounds adopt the Pu2C3 structure which has been described as having C22− anions in bisphenoid holes formed by eight near Ln neighbours. The lengthening of the C-C bond is less marked in the sesquicarbides than in the dicarbides, with the exception of Ce2C3.
Other carbon-rich stoichiometries are known for some lanthanides: Ln3C4 (Ho–Lu), containing C, C2 and C3 units; Ln4C7 (Ho–Lu), containing C atoms and C3 units; and Ln4C5 (Gd–Ho), containing C and C2 units.
Metal-rich carbides contain interstitial C atoms and no C2 or C3 units. These are Ln4C3 (Tb and Lu), Ln2C (Dy, Ho, Tm) and Ln3C (Sm–Lu).
Borides
All of the lanthanides form a number of borides. The "higher" borides (LnBx where x > 12) are insulators/semiconductors, whereas the lower borides are typically conducting. The lower borides have stoichiometries of LnB2, LnB4, LnB6 and LnB12. Applications in the field of spintronics are being investigated. The range of borides formed by the lanthanides can be compared to those formed by the transition metals. The boron-rich borides are typical of the lanthanides (and groups 1–3), whereas the transition metals tend to form metal-rich, "lower" borides. The lanthanide borides are typically grouped together with the group 3 metals, with which they share many similarities of reactivity, stoichiometry and structure. Collectively these are then termed the rare-earth borides.
Many methods of producing lanthanide borides have been used, amongst them are direct reaction of the elements; the reduction of Ln2O3 with boron; reduction of boron oxide, B2O3, and Ln2O3 together with carbon; reduction of metal oxide with boron carbide, B4C. Producing high purity samples has proved to be difficult. Single crystals of the higher borides have been grown in a low melting metal (e.g. Sn, Cu, Al).
Diborides, LnB2, have been reported for Sm, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu. All have the same AlB2 structure, containing a graphite-like layer of boron atoms. Low-temperature ferromagnetic transitions occur for Tb, Dy, Ho and Er. TmB2 is ferromagnetic at 7.2 K.
Tetraborides, LnB4, have been reported for all of the lanthanides except Eu; all have the same UB4 structure. The structure has a boron sub-lattice consisting of chains of octahedral B6 clusters linked by boron atoms. The unit cell decreases in size successively from LaB4 to LuB4. The tetraborides of the lighter lanthanides melt with decomposition to LnB6. Attempts to make EuB4 have failed. The LnB4 compounds are good conductors and typically antiferromagnetic.
Hexaborides, LnB6, have been reported for all of the lanthanides. They all have the CaB6 structure, containing B6 clusters. They are non-stoichiometric due to cation defects. The hexaborides of the lighter lanthanides (La–Sm) melt without decomposition, EuB6 decomposes to boron and metal, and the heavier lanthanides decompose to LnB4, with the exception of YbB6, which decomposes to form YbB12. The stability has in part been correlated to differences in volatility between the lanthanide metals. In EuB6 and YbB6 the metals have an oxidation state of +2, whereas in the rest of the lanthanide hexaborides it is +3. This rationalises the differences in conductivity, the extra electrons in the LnIII hexaborides entering conduction bands. EuB6 is a semiconductor and the rest are good conductors. LaB6 and CeB6 are thermionic emitters, used, for example, in scanning electron microscopes.
Dodecaborides, LnB12, are formed by the heavier, smaller lanthanides, but not by the lighter, larger metals La–Eu. With the exception of YbB12 (where Yb takes an intermediate valence and is a Kondo insulator), the dodecaborides are all metallic compounds. They all have the UB12 structure, containing a three-dimensional framework of cuboctahedral B12 clusters.
The higher boride LnB66 is known for all lanthanide metals. The composition is approximate, as the compounds are non-stoichiometric. They all have a similar complex structure with over 1600 atoms in the unit cell. The boron cubic sub-lattice contains super-icosahedra made up of a central B12 icosahedron surrounded by 12 others, B12(B12)12. Other complex higher borides, LnB50 (Tb, Dy, Ho, Er, Tm, Lu) and LnB25 (Gd, Tb, Dy, Ho, Er), are known, and these contain boron icosahedra in the boron framework.
Organometallic compounds
Lanthanide-carbon σ bonds are well known; however as the 4f electrons have a low probability of existing at the outer region of the atom there is little effective orbital overlap, resulting in bonds with significant ionic character. As such organo-lanthanide compounds exhibit carbanion-like behavior, unlike the behavior in transition metal organometallic compounds. Because of their large size, lanthanides tend to form more stable organometallic derivatives with bulky ligands to give compounds such as Ln[CH(SiMe3)3]. Analogues of uranocene are derived from dilithiocyclooctatetraene, Li2C8H8. Organic lanthanide(II) compounds are also known, such as Cp*2Eu.
Physical properties
Magnetic and spectroscopic
All the trivalent lanthanide ions, except lanthanum and lutetium, have unpaired f electrons. (Ligand-to-metal charge transfer can nonetheless produce a nonzero f-occupancy even in La(III) compounds.) However, the magnetic moments deviate considerably from the spin-only values because of strong spin–orbit coupling. The maximum number of unpaired electrons is 7, in Gd3+, with a magnetic moment of 7.94 B.M., but the largest magnetic moments, at 10.4–10.7 B.M., are exhibited by Dy3+ and Ho3+. However, in Gd3+ all the electrons have parallel spin and this property is important for the use of gadolinium complexes as contrast reagent in MRI scans.
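Because spin-orbit coupling is strong, the free-ion moments follow the Landé formula, μ_eff = g_J √(J(J+1)) Bohr magnetons, rather than the spin-only value. The sketch below evaluates that formula for the two cases quoted above (Gd3+ and Dy3+); the S, L, J quantum numbers are the standard ground-term values for those ions.

```python
# Effective magnetic moment of a free Ln3+ ion from the Lande formula,
# mu_eff = g_J * sqrt(J(J+1)) Bohr magnetons, reproducing the values quoted above.
from math import sqrt

def lande_g(S, L, J):
    """Lande g-factor for a term with spin S, orbital L, total J."""
    return 1 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2*J*(J+1))

def mu_eff(S, L, J):
    """Effective moment in Bohr magnetons."""
    return lande_g(S, L, J) * sqrt(J*(J+1))

print(f"Gd3+ (f7, 8S7/2):  {mu_eff(3.5, 0, 3.5):.2f} B.M.")   # 7.94
print(f"Dy3+ (f9, 6H15/2): {mu_eff(2.5, 5, 7.5):.2f} B.M.")   # ~10.6
```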
Crystal field splitting is rather small for the lanthanide ions and is less important than spin–orbit coupling in regard to energy levels. Transitions of electrons between f orbitals are forbidden by the Laporte rule. Furthermore, because of the "buried" nature of the f orbitals, coupling with molecular vibrations is weak. Consequently, the spectra of lanthanide ions are rather weak and the absorption bands are similarly narrow. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm and can be used as a wavelength calibration standard for optical spectrophotometers, and are available commercially.
As f-f transitions are Laporte-forbidden, once an electron has been excited, decay to the ground state will be slow. This makes them suitable for use in lasers as it makes the population inversion easy to achieve. The Nd:YAG laser is one that is widely used. Europium-doped yttrium vanadate was the first red phosphor to enable the development of color television screens. Lanthanide ions have notable luminescent properties due to their unique 4f orbitals. Laporte forbidden f-f transitions can be activated by excitation of a bound "antenna" ligand. This leads to sharp emission bands throughout the visible, NIR, and IR and relatively long luminescence lifetimes.
Occurrence
Samarskite and similar minerals contain lanthanides in association with elements such as tantalum, niobium, hafnium, zirconium, vanadium, and titanium, from group 4 and group 5, often in similar oxidation states. Monazite is a phosphate of numerous group 3 + lanthanide + actinide metals and is mined especially for its thorium content and specific rare earths, especially lanthanum, yttrium and cerium. Cerium and lanthanum as well as other members of the rare-earth series are often produced as a metal called mischmetal containing a variable mixture of these elements with cerium and lanthanum predominating; it has direct uses such as lighter flints and other spark sources which do not require extensive purification of one of these metals.
There are also lanthanide-bearing minerals based on group-2 elements, such as yttrocalcite, yttrocerite and yttrofluorite, which vary in content of yttrium, cerium, lanthanum and others. Other lanthanide-bearing minerals include bastnäsite, florencite, chernovite, perovskite, xenotime, cerite, gadolinite, lanthanite, fergusonite, polycrase, blomstrandine, håleniusite, miserite, loparite, lepersonnite, euxenite, all of which have a range of relative element concentration and may be denoted by a predominating one, as in monazite-(Ce). Group 3 elements do not occur as native-element minerals in the fashion of gold, silver, tantalum and many others on Earth, but may occur in lunar soil. Very rare halides of cerium, lanthanum, and presumably other lanthanides, feldspars and garnets are also known to exist.
The lanthanide contraction is responsible for the great geochemical divide that splits the lanthanides into light and heavy-lanthanide enriched minerals, the latter being almost inevitably associated with and dominated by yttrium. This divide is reflected in the first two "rare earths" that were discovered: yttria (1794) and ceria (1803). The geochemical divide has put more of the light lanthanides in the Earth's crust, but more of the heavy members in the Earth's mantle. The result is that although large rich ore-bodies are found that are enriched in the light lanthanides, correspondingly large ore-bodies for the heavy members are few. The principal ores are monazite and bastnäsite. Monazite sands usually contain all the lanthanide elements, but the heavier elements are lacking in bastnäsite. The lanthanides obey the Oddo–Harkins rule – odd-numbered elements are less abundant than their even-numbered neighbors.
Three of the lanthanide elements have radioactive isotopes with long half-lives (138La, 147Sm and 176Lu) that can be used to date minerals and rocks from Earth, the Moon and meteorites. Promethium is effectively a man-made element, as all its isotopes are radioactive with half-lives shorter than 20 years.
Applications
Industrial
Lanthanide elements and their compounds have many uses, but the quantities consumed are relatively small in comparison to other elements. About 15,000 tonnes per year of lanthanides are consumed as catalysts and in the production of glasses, corresponding to about 85% of lanthanide production. From the perspective of value, however, applications in phosphors and magnets are more important.
Devices that use lanthanide elements include superconductors, samarium-cobalt and neodymium-iron-boron high-flux rare-earth magnets, magnesium alloys, electronic polishers, refining catalysts and hybrid-car components (primarily batteries and magnets). Lanthanide ions are used as the active ions in luminescent materials for optoelectronics applications, most notably the Nd:YAG laser. Erbium-doped fiber amplifiers are significant devices in optical-fiber communication systems. Phosphors with lanthanide dopants are also widely used in cathode-ray tube technology, such as television sets. The earliest color television CRTs had a poor-quality red; europium as a phosphor dopant made good red phosphors possible. Yttrium iron garnet (YIG) spheres can act as tunable microwave resonators.
Lanthanide oxides are mixed with tungsten to improve the electrodes' high-temperature properties for TIG welding, replacing thorium, which was mildly hazardous to work with. Many defense-related products, such as night-vision goggles and rangefinders, also use lanthanide elements. The SPY-1 radar used in some Aegis-equipped warships and the hybrid propulsion systems of some naval vessels all use rare-earth magnets in critical capacities.
The price of lanthanum oxide used in fluid catalytic cracking rose from $5 per kilogram in early 2010 to $140 per kilogram in June 2011.
Most lanthanides are widely used in lasers, and as (co-)dopants in doped-fiber optical amplifiers; for example, in Er-doped fiber amplifiers, which are used as repeaters in the terrestrial and submarine fiber-optic transmission links that carry internet traffic. These elements deflect ultraviolet and infrared radiation and are commonly used in the production of sunglass lenses. Other applications are summarized in the following table:
The complex Gd(DOTA) is used in magnetic resonance imaging.
Life science
Lanthanide complexes can be used for optical imaging. Applications are limited by the lability of the complexes.
Some applications depend on the unique luminescence properties of lanthanide chelates or cryptates. These are well-suited for this application due to their large Stokes shifts and extremely long emission lifetimes (from microseconds to milliseconds) compared to more traditional fluorophores (e.g., fluorescein, allophycocyanin, phycoerythrin, and rhodamine).
The biological fluids or serum commonly used in these research applications contain many compounds and proteins which are naturally fluorescent. Therefore, the use of conventional, steady-state fluorescence measurement presents serious limitations in assay sensitivity. Long-lived fluorophores, such as lanthanides, combined with time-resolved detection (a delay between excitation and emission detection) minimizes prompt fluorescence interference.
Time-resolved fluorometry (TRF) combined with Förster resonance energy transfer (FRET) offers a powerful tool for drug discovery researchers: Time-Resolved Förster Resonance Energy Transfer or TR-FRET. TR-FRET combines the low background aspect of TRF with the homogeneous assay format of FRET. The resulting assay provides an increase in flexibility, reliability and sensitivity in addition to higher throughput and fewer false positive/false negative results.
This method involves two fluorophores: a donor and an acceptor. Excitation of the donor fluorophore (in this case, the lanthanide ion complex) by an energy source (e.g. flash lamp or laser) produces an energy transfer to the acceptor fluorophore if they are within a given proximity to each other (known as the Förster's radius). The acceptor fluorophore in turn emits light at its characteristic wavelength.
The two most commonly used lanthanides in life science assays are shown below along with their corresponding acceptor dye as well as their excitation and emission wavelengths and resultant Stokes shift (separation of excitation and emission wavelengths).
Possible medical uses
Currently there is research showing that lanthanide elements can be used as anticancer agents. The main role of the lanthanides in these studies is to inhibit proliferation of the cancer cells. Specifically cerium and lanthanum have been studied for their role as anti-cancer agents.
One of the specific elements from the lanthanide group that has been tested and used is cerium (Ce). There have been studies that use a protein-cerium complex to observe the effect of cerium on the cancer cells. The hope was to inhibit cell proliferation and promote cytotoxicity. Transferrin receptors in cancer cells, such as those in breast cancer cells and epithelial cervical cells, promote the cell proliferation and malignancy of the cancer. Transferrin is a protein used to transport iron into the cells and is needed to aid the cancer cells in DNA replication. Transferrin acts as a growth factor for the cancerous cells and is dependent on iron. Cancer cells have much higher levels of transferrin receptors than normal cells and are very dependent on iron for their proliferation.
Complexes of lanthanides with coumarin and related compounds have demonstrated photobiological activity as well as anticancer, anti-leukemia, and anti-HIV activities.
Cerium has shown results as an anti-cancer agent due to its similarities in structure and biochemistry to iron. Cerium may bind in the place of iron on to the transferrin and then be brought into the cancer cells by transferrin-receptor mediated endocytosis. The cerium binding to the transferrin in place of the iron inhibits the transferrin activity in the cell. This creates a toxic environment for the cancer cells and causes a decrease in cell growth. This is the proposed mechanism for cerium's effect on cancer cells, though the real mechanism may be more complex in how cerium inhibits cancer cell proliferation. Specifically in HeLa cancer cells studied in vitro, cell viability was decreased after 48 to 72 hours of cerium treatments. Cells treated with just cerium had decreases in cell viability, but cells treated with both cerium and transferrin had more significant inhibition for cellular activity.
Another specific element that has been tested and used as an anti-cancer agent is lanthanum, more specifically lanthanum chloride (LaCl3). The lanthanum ion is used to affect the levels of let-7a and microRNAs miR-34a in a cell throughout the cell cycle. When the lanthanum ion was introduced to the cell in vivo or in vitro, it inhibited the rapid growth and induced apoptosis of the cancer cells (specifically cervical cancer cells). This effect was caused by the regulation of the let-7a and microRNAs by the lanthanum ions. The mechanism for this effect is still unclear but it is possible that the lanthanum is acting in a similar way as the cerium and binding to a ligand necessary for cancer cell proliferation.
In the field of magnetic resonance imaging (MRI), compounds containing gadolinium are utilized extensively.
Biological effects
Due to their sparse distribution in the Earth's crust and low aqueous solubility, the lanthanides have a low availability in the biosphere, and for a long time were not known to naturally form part of any biological molecules. In 2007 a novel methanol dehydrogenase that strictly uses lanthanides as enzymatic cofactors was discovered in a bacterium from the phylum Verrucomicrobiota, Methylacidiphilum fumariolicum; this bacterium was found to survive only when lanthanides are present in its environment. The same nutritional requirement has since been observed in Methylorubrum extorquens and Methylobacterium radiotolerans. Compared to most other nondietary elements, non-radioactive lanthanides are classified as having low toxicity.
| Physical sciences | Chemical element groups | null |
18320 | https://en.wikipedia.org/wiki/Lens | Lens | A lens is a transmissive optical device that focuses or disperses a light beam by means of refraction. A simple lens consists of a single piece of transparent material, while a compound lens consists of several simple lenses (elements), usually arranged along a common axis. Lenses are made from materials such as glass or plastic and are ground, polished, or molded to the required shape. A lens can focus light to form an image, unlike a prism, which refracts light without focusing. Devices that similarly focus or disperse waves and radiation other than visible light are also called "lenses", such as microwave lenses, electron lenses, acoustic lenses, or explosive lenses.
Lenses are used in various imaging devices such as telescopes, binoculars, and cameras. They are also used as visual aids in glasses to correct defects of vision such as myopia and hypermetropia.
History
The word lens comes from lēns, the Latin name of the lentil (a seed of a lentil plant), because a double-convex lens is lentil-shaped. The lentil also gives its name to a geometric figure.
Some scholars argue that the archeological evidence indicates that there was widespread use of lenses in antiquity, spanning several millennia. The so-called Nimrud lens is a rock crystal artifact dated to the 7th century BCE which may or may not have been used as a magnifying glass, or a burning glass. Others have suggested that certain Egyptian hieroglyphs depict "simple glass meniscal lenses".
The oldest certain reference to the use of lenses is from Aristophanes' play The Clouds (424 BCE) mentioning a burning-glass.
Pliny the Elder (1st century) confirms that burning-glasses were known in the Roman period.
Pliny also has the earliest known reference to the use of a corrective lens when he mentions that Nero was said to watch the gladiatorial games using an emerald (presumably concave to correct for nearsightedness, though the reference is vague). Both Pliny and Seneca the Younger (3 BC–65 AD) described the magnifying effect of a glass globe filled with water.
Ptolemy (2nd century) wrote a book on Optics, which however survives only in the Latin translation of an incomplete and very poor Arabic translation.
The book was, however, received by medieval scholars in the Islamic world, and commented upon by Ibn Sahl (10th century), whose work was in turn improved upon by Alhazen (Book of Optics, 11th century). The Arabic translation of Ptolemy's Optics became available in Latin translation in the 12th century (Eugenius of Palermo 1154). Between the 11th and 13th century "reading stones" were invented. These were primitive plano-convex lenses, initially made by cutting a glass sphere in half. The medieval (11th or 12th century) rock crystal Visby lenses may or may not have been intended for use as burning glasses.
Spectacles were invented as an improvement of the "reading stones" of the high medieval period in Northern Italy in the second half of the 13th century. This was the start of the optical industry of grinding and polishing lenses for spectacles, first in Venice and Florence in the late 13th century, and later in the spectacle-making centres in both the Netherlands and Germany.
Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses (probably without the knowledge of the rudimentary optical theory of the day). The practical development and experimentation with lenses led to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle-making centres in the Netherlands.
With the invention of the telescope and microscope there was a great deal of experimentation with lens shapes in the 17th and early 18th centuries by those trying to correct chromatic errors seen in lenses. Opticians tried to construct lenses of varying forms of curvature, wrongly assuming errors arose from defects in the spherical figure of their surfaces. Optical theory on refraction and experimentation was showing no single-element lens could bring all colours to a focus. This led to the invention of the compound achromatic lens by Chester Moore Hall in England in 1733, an invention also claimed by fellow Englishman John Dollond in a 1758 patent.
Developments in transatlantic commerce were the impetus for the construction of modern lighthouses in the 18th century, which use a combination of elevated sightlines, lighting sources, and lenses to aid navigation at sea. Because lighthouses need to be visible from the greatest possible distance, conventional convex lenses would have had to be very large, which would have negatively affected lighthouse development in terms of cost, design, and implementation. Fresnel lenses were developed to address these constraints, using far less material thanks to their concentric annular sectioning. They were first fully implemented in a lighthouse in 1823.
Construction of simple lenses
Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres. Each surface can be convex (bulging outwards from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis passes through the physical centre of the lens, because of the way they are manufactured. Lenses may be cut or ground after manufacturing to give them a different shape or size. The lens axis may then not pass through the physical centre of the lens.
Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This forms an astigmatic lens. An example is eyeglass lenses that are used to correct astigmatism in someone's eye.
Types of simple lenses
Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus. Convex-concave lenses are most commonly used in corrective lenses, since the shape minimizes some aberrations.
For a biconvex or plano-convex lens in a lower-index medium, a collimated beam of light passing through the lens converges to a spot (a focus) behind the lens. In this case, the lens is called a positive or converging lens. For a thin lens in air, the distance from the lens to the spot is the focal length of the lens, which is commonly represented by f in diagrams and equations. An extended hemispherical lens is a special type of plano-convex lens, in which the lens's curved surface is a full hemisphere and the lens is much thicker than the radius of curvature.
Another extreme case of a thick convex lens is a ball lens, whose shape is completely round. When used in novelty photography it is often called a "lensball". A ball-shaped lens has the advantage of being omnidirectional, but for most optical glass types, its focal point lies close to the ball's surface. Because of the ball's curvature extremes compared to the lens size, optical aberration is much worse than thin lenses, with the notable exception of chromatic aberration.
For a biconcave or plano-concave lens in a lower-index medium, a collimated beam of light passing through the lens is diverged (spread); the lens is thus called a negative or diverging lens. The beam, after passing through the lens, appears to emanate from a particular point on the axis in front of the lens. For a thin lens in air, the distance from this point to the lens is the focal length, though it is negative with respect to the focal length of a converging lens.
The behavior reverses when a lens is placed in a medium with higher refractive index than the material of the lens. In this case a biconvex or plano-convex lens diverges light, and a biconcave or plano-concave one converges it.
Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface (with a shorter radius than the convex surface) and is thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper convex surface (with a shorter radius than the concave surface) and is thicker at the centre than at the periphery.
An ideal thin lens with two surfaces of equal curvature (also equal in the sign) would have zero optical power (as its focal length becomes infinity as shown in the lensmaker's equation), meaning that it would neither converge nor diverge light. All real lenses have a nonzero thickness, however, which makes a real lens with identical curved surfaces slightly positive. To obtain exactly zero optical power, a meniscus lens must have slightly unequal curvatures to account for the effect of the lens' thickness.
For a spherical surface
For a single refraction at a spherical boundary, the relation between an object and its image in the paraxial approximation is given by
n1/u + n2/v = (n2 − n1)/R
where R is the radius of the spherical surface, n2 is the refractive index of the material of the surface, n1 is the refractive index of the medium (the medium other than the spherical surface material), u is the on-axis (on the optical axis) object distance measured from the line perpendicular to the axis through the refraction point on the surface (whose height is h), and v is the on-axis image distance from that line. Due to the paraxial approximation, where the line at height h is close to the vertex of the spherical surface meeting the optical axis, u and v can also be treated as distances with respect to the vertex.
Moving the image v toward the right infinity leads to the first or object focal length fo = n1 R/(n2 − n1) for the spherical surface. Similarly, moving the object u toward the left infinity leads to the second or image focal length fi = n2 R/(n2 − n1).
Applying this equation on the two spherical surfaces of a lens and approximating the lens thickness to zero (so a thin lens) leads to the lensmaker's formula.
Derivation
Applying Snell's law on the spherical surface,
Also in the diagram,, and using small angle approximation (paraxial approximation) and eliminating , , and ,
Lensmaker's equation
The (effective) focal length f of a spherical lens in air or vacuum for paraxial rays can be calculated from the lensmaker's equation:
1/f = (n − 1) [ 1/R1 − 1/R2 + (n − 1) d / (n R1 R2) ],
where
n is the refractive index of the lens material;
R1 is the (signed, see below) radius of curvature of the lens surface closer to the light source;
R2 is the radius of curvature of the lens surface farther from the light source; and
d is the thickness of the lens (the distance along the lens axis between the two surface vertices).
The focal length is with respect to the principal planes of the lens, and the locations of the principal planes and with respect to the respective lens vertices are given by the following formulas, where it is a positive value if it is right to the respective vertex.
The focal length f is positive for converging lenses, and negative for diverging lenses. The reciprocal of the focal length, 1/f, is the optical power of the lens. If the focal length is in metres, this gives the optical power in dioptres (reciprocal metres).
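As a concrete illustration of the lensmaker's equation and the focal length/power relationship just described, here is a minimal Python sketch; the lens parameters used (an index of 1.52, 50 mm radii and 5 mm thickness) are illustrative assumptions, not data for any particular lens.

```python
def lensmakers_focal_length(n, R1, R2, d):
    """Effective focal length of a lens in air from the (thick-lens) lensmaker's equation.

    n  : refractive index of the lens material
    R1 : signed radius of curvature of the surface nearer the light source
    R2 : signed radius of curvature of the surface farther from the source
    d  : centre thickness of the lens
    Sign convention: a radius is positive when its centre of curvature lies
    further along in the direction of ray travel.
    """
    power = (n - 1.0) * (1.0 / R1 - 1.0 / R2 + (n - 1.0) * d / (n * R1 * R2))
    return 1.0 / power  # focal length; the power itself is 1/f

# Illustrative biconvex lens: n = 1.52, +50 mm and -50 mm radii, 5 mm thick
f = lensmakers_focal_length(n=1.52, R1=0.050, R2=-0.050, d=0.005)
print(f"focal length ≈ {f*1000:.1f} mm, power ≈ {1/f:.1f} dioptres")  # ≈ 48.9 mm, ≈ 20.4 D
```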
Lenses have the same focal length when light travels from the back to the front as when light goes from the front to the back. Other properties of the lens, such as the aberrations are not the same in both directions.
Sign convention for radii of curvature and
The signs of the lens' radii of curvature indicate whether the corresponding surfaces are convex or concave. The sign convention used to represent this varies, but in this article a positive R indicates a surface's center of curvature is further along in the direction of the ray travel (right, in the accompanying diagrams), while a negative R means that rays reaching the surface have already passed the center of curvature. Consequently, for external lens surfaces as diagrammed above, R1 > 0 and R2 < 0 indicate convex surfaces (used to converge light in a positive lens), while R1 < 0 and R2 > 0 indicate concave surfaces. The reciprocal of the radius of curvature is called the curvature. A flat surface has zero curvature, and its radius of curvature is infinite.
Sign convention for other parameters
This convention is the one mainly used in this article, although other conventions, such as the Cartesian sign convention, exist and require different forms of the lens equations.
Thin lens approximation
If d is small compared to R1 and R2 then the thin lens approximation can be made. For a lens in air, f is then given by
1/f ≈ (n − 1) [ 1/R1 − 1/R2 ].
Derivation
The spherical thin lens equation in paraxial approximation is derived here with respect to the right figure. The 1st spherical lens surface (which meets the optical axis at as its vertex) images an on-axis object point O to the virtual image I, which can be described by the following equation, For the imaging by second lens surface, by taking the above sign convention, and Adding these two equations yields For the thin lens approximation where the 2nd term of the RHS (Right Hand Side) is gone, so
The focal length of the thin lens is found by limiting
So, the Gaussian thin lens equation is
For the thin lens in air or vacuum where can be assumed, becomes
where the subscript of 2 in is dropped.
Imaging properties
As mentioned above, a positive or converging lens in air focuses a collimated beam travelling along the lens axis to a spot (known as the focal point) at a distance f from the lens. Conversely, a point source of light placed at the focal point is converted into a collimated beam by the lens. These two cases are examples of image formation in lenses. In the former case, an object at an infinite distance (as represented by a collimated beam of waves) is focused to an image at the focal point of the lens. In the latter, an object at the focal length distance from the lens is imaged at infinity. The plane perpendicular to the lens axis situated at a distance f from the lens is called the focal plane.
Lens equation
For paraxial rays, if the distances from an object to a spherical thin lens (a lens of negligible thickness) and from the lens to the image are S1 and S2 respectively, the distances are related by the (Gaussian) thin lens formula:
1/S1 + 1/S2 = 1/f.
The right figure shows how the image of an object point can be found by using three rays; the first ray parallelly incident on the lens and refracted toward the second focal point of it, the second ray crossing the optical center of the lens (so its direction does not change), and the third ray toward the first focal point and refracted to the direction parallel to the optical axis. This is a simple ray tracing method easily used. Two rays among the three are sufficient to locate the image point. By moving the object along the optical axis, it is shown that the second ray determines the image size while other rays help to locate the image location.
The lens equation can also be put into the "Newtonian" form:
x1 x2 = f²,
where x1 = S1 − f and x2 = S2 − f; x1 is positive if the object lies to the left of the front focal point F1, and x2 is positive if the image lies to the right of the rear focal point F2. Because f² is positive, an object point and the corresponding image point made by a lens are always on opposite sides with respect to their respective focal points. (x1 and x2 are either both positive or both negative.)
This Newtonian form of the lens equation can be derived by using a similarity between triangles P1PO1F1 and L3L2F1 and another similarity between triangles L1L2F2 and P2P02F2 in the right figure. The similarities give the following equations and combining these results gives the Newtonian form of the lens equation.
The above equations also hold for thick lenses (including a compound lens made by multiple lenses, that can be treated as a thick lens) in air or vacuum (which refractive index can be treated as 1) if , , and are with respect to the principal planes of the lens ( is the effective focal length in this case). This is because of triangle similarities like the thin lens case above; similarity between triangles P1PO1F1 and L3H1F1 and another similarity between triangles L1'H2F2 and P2P02F2 in the right figure. If distances or pass through a medium other than air or vacuum, then a more complicated analysis is required.
If an object is placed at a distance S1 from a positive lens of focal length f, we will find an image at a distance S2 according to this formula. If a screen is placed at a distance S2 on the opposite side of the lens, an image is formed on it. This sort of image, which can be projected onto a screen or image sensor, is known as a real image. This is the principle of the camera, and also of the human eye, in which the retina serves as the image sensor.
The focusing adjustment of a camera adjusts S2, as using an image distance different from that required by this formula produces a defocused (fuzzy) image for an object at a distance S1 from the camera. Put another way, modifying S2 causes objects at a different S1 to come into perfect focus.
In some cases, S2 is negative, indicating that the image is formed on the opposite side of the lens from where those rays are being considered. Since the diverging light rays emanating from the lens never come into focus, and those rays are not physically present at the point where they appear to form an image, this is called a virtual image. Unlike real images, a virtual image cannot be projected on a screen, but appears to an observer looking through the lens as if it were a real object at the location of that virtual image. Likewise, it appears to a subsequent lens as if it were an object at that location, so that second lens could again focus that light into a real image, with S1 then measured from the virtual image location behind the first lens to the second lens. This is exactly what the eye does when looking through a magnifying glass. The magnifying glass creates a (magnified) virtual image behind the magnifying glass, but those rays are then re-imaged by the lens of the eye to create a real image on the retina.
Using a positive lens of focal length f, a virtual image results when S1 < f, the lens thus being used as a magnifying glass (rather than if S1 > f as for a camera). Using a negative lens (f < 0) with a real object (S1 > 0) can only produce a virtual image (S2 < 0), according to the above formula. It is also possible for the object distance S1 to be negative, in which case the lens sees a so-called virtual object. This happens when the lens is inserted into a converging beam (being focused by a previous lens) before the location of its real image. In that case even a negative lens can project a real image, as is done by a Barlow lens.
For a given lens with the focal length f, the minimum distance between an object and the real image is 4f (S1 = S2 = 2f). This is derived by letting L = S1 + S2, expressing S2 in terms of S1 by the lens equation (or expressing S1 in terms of S2), and equating the derivative of L with respect to S1 (or S2) to zero. (Note that L has no upper limit, so its extremum is only the minimum, at which the derivative of L is zero.)
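The 4f minimum separation can be checked numerically from the thin-lens formula. The following is a rough Python sketch; the focal length and the scan over object distances are arbitrary illustrative choices.

```python
def image_distance(f, s1):
    """Image distance S2 from the Gaussian thin-lens formula 1/S1 + 1/S2 = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / s1)

f = 0.10  # assumed focal length: 100 mm
# Scan object distances S1 > f (real images) and find the smallest total S1 + S2.
candidates = [(s1 + image_distance(f, s1), s1)
              for s1 in (f * (1 + 0.001 * k) for k in range(1, 10001))]
L_min, s1_min = min(candidates)
print(f"minimum object-image distance ≈ {L_min:.4f} m at S1 ≈ {s1_min:.4f} m "
      f"(expected 4f = {4*f:.4f} m at S1 = 2f = {2*f:.4f} m)")
```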
Magnification
The linear magnification of an imaging system using a single lens is given by
m = −S2/S1,
where m is the magnification factor defined as the ratio of the size of an image compared to the size of the object. The sign convention here dictates that if m is negative, as it is for real images, the image is upside-down with respect to the object. For virtual images m is positive, so the image is upright.
This magnification formula provides two easy ways to distinguish converging (f > 0) and diverging (f < 0) lenses: for an object very close to the lens (0 < S1 < |f|), a converging lens would form a magnified (bigger) virtual image, whereas a diverging lens would form a demagnified (smaller) image; for an object very far from the lens (S1 > |f| > 0), a converging lens would form an inverted image, whereas a diverging lens would form an upright image.
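A small Python sketch of the magnification behaviour described above, under the same thin-lens sign convention; the focal lengths and object distances are arbitrary illustrative values.

```python
def magnification(f, s1):
    """Linear magnification m = -S2/S1 for a thin lens (m < 0: inverted real image)."""
    s2 = 1.0 / (1.0 / f - 1.0 / s1)
    return -s2 / s1

# Converging lens (f = +0.10 m): object beyond f gives an inverted image (m < 0),
# object inside f gives an upright, magnified virtual image (m > 1).
print(magnification(0.10, 0.50))   # ≈ -0.25  (inverted, demagnified)
print(magnification(0.10, 0.05))   # = +2.0   (upright, magnified virtual image)

# Diverging lens (f = -0.10 m): upright, demagnified virtual image.
print(magnification(-0.10, 0.50))  # ≈ +0.167
```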
Linear magnification is not always the most useful measure of magnifying power. For instance, when characterizing a visual telescope or binoculars that produce only a virtual image, one would be more concerned with the angular magnification—which expresses how much larger a distant object appears through the telescope compared to the naked eye. In the case of a camera one would quote the plate scale, which compares the apparent (angular) size of a distant object to the size of the real image produced at the focus. The plate scale is the reciprocal of the focal length of the camera lens; lenses are categorized as long-focus lenses or wide-angle lenses according to their focal lengths.
Using an inappropriate measurement of magnification can be formally correct but yield a meaningless number. For instance, using a magnifying glass of focal length, held from the eye and from the object, produces a virtual image at infinity of infinite linear size: . But the is 5, meaning that the object appears 5 times larger to the eye than without the lens. When taking a picture of the moon using a camera with a lens, one is not concerned with the linear magnification Rather, the plate scale of the camera is about , from which one can conclude that the image on the film corresponds to an angular size of the moon seen from earth of about 0.5°.
In the extreme case where an object is an infinite distance away, S1 = ∞, S2 = f and m = 0, indicating that the object would be imaged to a single point in the focal plane. In fact, the diameter of the projected spot is not actually zero, since diffraction places a lower limit on the size of the point spread function. This is called the diffraction limit.
Table for thin lens imaging properties
Aberrations
Lenses do not form perfect images, and always introduce some degree of distortion or aberration that makes the image an imperfect replica of the object. Careful design of the lens system for a particular application minimizes the aberration. Several types of aberration affect image quality, including spherical aberration, coma, and chromatic aberration.
Spherical aberration
Spherical aberration occurs because spherical surfaces are not the ideal shape for a lens, but are by far the simplest shape to which glass can be ground and polished, and so are often used. Spherical aberration causes beams parallel to, but laterally distant from, the lens axis to be focused in a slightly different place than beams close to the axis. This manifests itself as a blurring of the image. Spherical aberration can be minimised with normal lens shapes by carefully choosing the surface curvatures for a particular application. For instance, a plano-convex lens, which is used to focus a collimated beam, produces a sharper focal spot when used with the convex side towards the beam source.
Coma
Coma, or comatic aberration, derives its name from the comet-like appearance of the aberrated image. Coma occurs when an object off the optical axis of the lens is imaged, where rays pass through the lens at an angle to the axis . Rays that pass through the centre of a lens of focal length are focused at a point with distance from the axis. Rays passing through the outer margins of the lens are focused at different points, either further from the axis (positive coma) or closer to the axis (negative coma). In general, a bundle of parallel rays passing through the lens at a fixed distance from the centre of the lens are focused to a ring-shaped image in the focal plane, known as a comatic circle (see each circle of the image in the below figure). The sum of all these circles results in a V-shaped or comet-like flare. As with spherical aberration, coma can be minimised (and in some cases eliminated) by choosing the curvature of the two lens surfaces to match the application. Lenses in which both spherical aberration and coma are minimised are called bestform lenses.
Chromatic aberration
Chromatic aberration is caused by the dispersion of the lens material—the variation of its refractive index, , with the wavelength of light. Since, from the formulae above, is dependent upon , it follows that light of different wavelengths is focused to different positions. Chromatic aberration of a lens is seen as fringes of colour around the image. It can be minimised by using an achromatic doublet (or achromat) in which two materials with differing dispersion are bonded together to form a single lens. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. The use of achromats was an important step in the development of the optical microscope. An apochromat is a lens or lens system with even better chromatic aberration correction, combined with improved spherical aberration correction. Apochromats are much more expensive than achromats.
Different lens materials may also be used to minimise chromatic aberration, such as specialised coatings or lenses made from the crystal fluorite. This naturally occurring substance has the highest known Abbe number, indicating that the material has low dispersion.
Other types of aberration
Other kinds of aberration include field curvature, barrel and pincushion distortion, and astigmatism.
Aperture diffraction
Even if a lens is designed to minimize or eliminate the aberrations described above, the image quality is still limited by the diffraction of light passing through the lens' finite aperture. A diffraction-limited lens is one in which aberrations have been reduced to the point where the image quality is primarily limited by diffraction under the design conditions.
Compound lenses
Simple lenses are subject to the optical aberrations discussed above. In many cases these aberrations can be compensated for to a great extent by using a combination of simple lenses with complementary aberrations. A compound lens is a collection of simple lenses of different shapes and made of materials of different refractive indices, arranged one after the other with a common axis.
In a multiple-lens system, if the purpose of the system is to image an object, then the system design can be such that each lens treats the image made by the previous lens as an object, and produces the new image of it, so the imaging is cascaded through the lenses. As shown above, the Gaussian lens equation for a spherical lens is derived such that the 2nd surface of the lens images the image made by the 1st lens surface. For multi-lens imaging, 3rd lens surface (the front surface of the 2nd lens) can image the image made by the 2nd surface, and 4th surface (the back surface of the 2nd lens) can also image the image made by the 3rd surface. This imaging cascade by each lens surface justifies the imaging cascade by each lens.
For a two-lens system the object distances for the first and second lens can be denoted u1 and u2, and the corresponding image distances v1 and v2. If the lenses are thin, each satisfies the thin lens formula
1/ui + 1/vi = 1/fi (i = 1, 2).
If the distance between the two lenses is d, then u2 = d − v1. (The 2nd lens images the image of the first lens.)
FFD (Front Focal Distance) is defined as the distance between the front (left) focal point of an optical system and its nearest optical surface vertex. If an object is located at the front focal point of the system, then its image made by the system is located infinitely far away to the right (i.e., light rays from the object are collimated after the system). To do this, the image of the 1st lens is located at the focal point of the 2nd lens, i.e., v1 = d − f2. So, the thin lens formula for the 1st lens becomes
1/u1 + 1/(d − f2) = 1/f1, giving FFD = u1 = f1 (d − f2) / (d − f1 − f2).
BFD (Back Focal Distance) is similarly defined as the distance between the back (right) focal point of an optical system and its nearest optical surface vertex. If an object is located infinitely far away from the system (to the left), then its image made by the system is located at the back focal point. In this case, the 1st lens images the object at its focal point, so u2 = d − f1. So, the thin lens formula for the 2nd lens becomes
1/(d − f1) + 1/v2 = 1/f2, giving BFD = v2 = f2 (d − f1) / (d − f1 − f2).
The simplest case is that of thin lenses placed in contact (d = 0). The combined focal length f of the lenses is then given by
1/f = 1/f1 + 1/f2.
Since 1/f is the power of a lens with focal length f, it can be seen that the powers of thin lenses in contact are additive. The general case of multiple thin lenses in contact is
1/f = 1/f1 + 1/f2 + ... + 1/fN,
where N is the number of lenses.
If two thin lenses are separated in air by some distance d, then the focal length for the combined system is given by
1/f = 1/f1 + 1/f2 − d/(f1 f2).
As d tends to zero, the focal length of the system tends to the value of f given for thin lenses in contact. It can be shown that the same formula works for thick lenses if d is taken as the distance between their principal planes.
If the separation distance between two lenses is equal to the sum of their focal lengths (d = f1 + f2), then the FFD and BFD are infinite. This corresponds to a pair of lenses that transforms a parallel (collimated) beam into another collimated beam. This type of system is called an afocal system, since it produces no net convergence or divergence of the beam. Two lenses at this separation form the simplest type of optical telescope. Although the system does not alter the divergence of a collimated beam, it does alter the (transverse) width of the beam. The magnification of such a telescope is given by
M = −f2/f1,
which is the ratio of the output beam width to the input beam width. Note the sign convention: a telescope with two convex lenses (f1 > 0, f2 > 0) produces a negative magnification, indicating an inverted image. A convex plus a concave lens (f1 > 0 > f2) produces a positive magnification and the image is upright. For further information on simple optical telescopes, see Refracting telescope § Refracting telescope designs.
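To make these combination formulas concrete, the following minimal Python sketch covers thin lenses in contact, separated thin lenses, and the afocal (telescope) condition; the numerical focal lengths and separations are assumptions chosen only for illustration.

```python
def combined_focal_length(f1, f2, d=0.0):
    """Combined focal length of two thin lenses separated by d (d = 0: in contact).
    Uses 1/f = 1/f1 + 1/f2 - d/(f1*f2); returns infinity for an afocal system."""
    power = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return float('inf') if abs(power) < 1e-9 else 1.0 / power

print(combined_focal_length(0.10, 0.20))          # in contact: power 10 + 5, f ≈ 0.0667 m
print(combined_focal_length(0.10, 0.20, d=0.05))  # separated: power 10 + 5 - 2.5, f = 0.08 m
print(combined_focal_length(0.10, 0.20, d=0.30))  # d = f1 + f2: afocal, f = inf

# Afocal (telescopic) arrangement: beam-width magnification M = -f2/f1
f1, f2 = 0.10, 0.20
print(-f2 / f1)  # M = -2.0: inverted image, beam width doubled
```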
Non-spherical types
Cylindrical lenses have curvature along only one axis. They are used to focus light into a line, or to convert the elliptical light from a laser diode into a round beam. They are also used in motion picture anamorphic lenses.
Aspheric lenses have at least one surface that is neither spherical nor cylindrical. The more complicated shapes allow such lenses to form images with less aberration than standard simple lenses, but they are more difficult and expensive to produce. These were formerly complex to make and often extremely expensive, but advances in technology have greatly reduced the manufacturing cost for such lenses.
A Fresnel lens has its optical surface broken up into narrow rings, allowing the lens to be much thinner and lighter than conventional lenses. Durable Fresnel lenses can be molded from plastic and are inexpensive.
Lenticular lenses are arrays of microlenses that are used in lenticular printing to make images that have an illusion of depth or that change when viewed from different angles.
A bifocal lens has two or more focal lengths, or a graduated focal length, ground into the lens.
A gradient index lens has flat optical surfaces, but has a radial or axial variation in index of refraction that causes light passing through the lens to be focused.
An axicon has a conical optical surface. It images a point source into a line along the optic axis, or transforms a laser beam into a ring.
Diffractive optical elements can function as lenses.
Superlenses are made from negative-index metamaterials and claim to produce images at spatial resolutions exceeding the diffraction limit. The first superlenses were made in 2004 using such a metamaterial for microwaves. Improved versions have been made by other researchers, but the superlens has not yet been demonstrated at visible or near-infrared wavelengths.
A prototype flat ultrathin lens with no curvature has been developed.
Uses
A single convex lens mounted in a frame with a handle or stand is a magnifying glass.
Lenses are used as prosthetics for the correction of refractive errors such as myopia, hypermetropia, presbyopia, and astigmatism. (See corrective lens, contact lens, eyeglasses, intraocular lens.) Most lenses used for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric. They are usually shaped to fit in a roughly oval, not circular, frame; the optical centres are placed over the eyeballs; their curvature may not be axially symmetric to correct for astigmatism. Sunglasses' lenses are designed to attenuate light; sunglass lenses that also correct visual impairments can be custom made.
Other uses are in imaging systems such as monoculars, binoculars, telescopes, microscopes, cameras and projectors. Some of these instruments produce a virtual image when applied to the human eye; others produce a real image that can be captured on photographic film or an optical sensor, or can be viewed on a screen. In these devices lenses are sometimes paired up with curved mirrors to make a catadioptric system where the lens's spherical aberration corrects the opposite aberration in the mirror (such as Schmidt and meniscus correctors).
Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much of the visible and infrared light incident on the lens is concentrated into the small image. A large lens creates enough intensity to burn a flammable object at the focal point. Since ignition can be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least 2400 years. A modern application is the use of relatively large lenses to concentrate solar energy on relatively small photovoltaic cells, harvesting more energy without the need to use larger and more expensive cells.
Radio astronomy and radar systems often use dielectric lenses, commonly called a lens antenna to refract electromagnetic radiation into a collector antenna.
Lenses can become scratched and abraded. Abrasion-resistant coatings are available to help control this.
| Technology | Optical | null |
18339 | https://en.wikipedia.org/wiki/Law%20of%20multiple%20proportions | Law of multiple proportions | In chemistry, the law of multiple proportions states that in compounds which contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. For instance, the ratio of the hydrogen content in methane (CH4) and ethane (C2H6) per measure of carbon is 4:3. This law is also known as Dalton's Law, named after John Dalton, the chemist who first expressed it. The discovery of this pattern led Dalton to develop the modern theory of atoms, as it suggested that the elements combine with each other in multiples of a basic quantity. Along with the law of definite proportions, the law of multiple proportions forms the basis of stoichiometry.
The law of multiple proportions often does not apply when comparing very large molecules. For example, if one tried to demonstrate it using the hydrocarbons decane (C10H22) and undecane (C11H24), one would find that 100 grams of carbon could react with 18.46 grams of hydrogen to produce decane or with 18.31 grams of hydrogen to produce undecane, for a ratio of hydrogen masses of 121:120, which is hardly a ratio of "small" whole numbers.
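These figures follow directly from standard atomic masses; the short Python sketch below reproduces both the 4:3 ratio for methane and ethane and the 121:120 ratio for decane and undecane.

```python
from fractions import Fraction

M_C, M_H = 12.011, 1.008  # standard atomic masses (g/mol)

def grams_H_per_100g_C(n_C, n_H):
    """Grams of hydrogen combined with 100 g of carbon in a hydrocarbon with n_C carbons and n_H hydrogens."""
    return 100.0 * (n_H * M_H) / (n_C * M_C)

def H_ratio(a, b):
    """Ratio of hydrogen-per-carbon between two hydrocarbons, reduced to whole numbers."""
    (c1, h1), (c2, h2) = a, b
    return Fraction(h1, c1) / Fraction(h2, c2)

print(grams_H_per_100g_C(1, 4), grams_H_per_100g_C(2, 6))      # ≈ 33.57 and ≈ 25.18 g H
print(H_ratio((1, 4), (2, 6)))                                  # 4/3   (methane : ethane)
print(grams_H_per_100g_C(10, 22), grams_H_per_100g_C(11, 24))   # ≈ 18.46 and ≈ 18.31 g H
print(H_ratio((10, 22), (11, 24)))                              # 121/120 (decane : undecane)
```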
History
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, and it wasn't his only mistake. But in other cases, he got their formulas right. The following examples come from Dalton's own books A New System of Chemical Philosophy (in two volumes, 1808 and 1817):
Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. Tin oxides are actually crystals, they don't exist in molecular form.
Example 2 — iron oxides: Dalton identified two oxides of iron. One is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other is a red powder, which Dalton referred to as "the intermediate or red oxide of iron", which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide (FeO) and iron(III) oxide (Fe2O3). Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". As with the tin oxides, iron oxides are crystals rather than discrete molecules.
Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there are 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2.
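The small-whole-number ratios in the tin and iron examples can be checked directly from the quoted mass percentages, using the same adjustment described above (grams of oxygen per 100 g of metal); the sketch below is a minimal illustration based on those quoted figures.

```python
def oxygen_per_100g_metal(pct_metal, pct_oxygen):
    """Grams of oxygen combined with 100 g of metal, from mass percentages."""
    return 100.0 * pct_oxygen / pct_metal

tin_grey   = oxygen_per_100g_metal(88.1, 11.9)  # "protoxide of tin"  -> ~13.5 g O
tin_white  = oxygen_per_100g_metal(78.7, 21.3)  # "deutoxide of tin"  -> ~27.1 g O
iron_black = oxygen_per_100g_metal(78.1, 21.9)  # "protoxide of iron" -> ~28.0 g O
iron_red   = oxygen_per_100g_metal(70.4, 29.6)  # "red oxide of iron" -> ~42.0 g O

print(f"tin oxides:  {tin_grey:.1f} : {tin_white:.1f}  (ratio ≈ {tin_white/tin_grey:.2f}, i.e. 1:2)")
print(f"iron oxides: {iron_black:.1f} : {iron_red:.1f}  (ratio ≈ {iron_red/iron_black:.2f}, i.e. 2:3)")
```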
The earliest definition of Dalton's observation appears in an 1807 chemistry encyclopedia.
The first known writer to refer to this principle as the "doctrine of multiple proportions" was Jöns Jacob Berzelius in 1813.
Dalton's atomic theory garnered widespread interest but not universal acceptance shortly after he published it because the law of multiple proportions by itself was not complete proof of the existence of atoms. Over the course of the 19th century, other discoveries in the fields of chemistry and physics would give atomic theory more credence, such that by the end of the 19th century it had found universal acceptance.
| Physical sciences | Reaction | Chemistry |
18365 | https://en.wikipedia.org/wiki/Luminance | Luminance | Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle.
The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO.
Brightness is the term for the subjective impression of the objective luminance measurement standard; the two should not be conflated.
The SI unit for luminance is candela per square metre (cd/m2). A non-SI term for the same unit is the nit. The unit in the Centimetre–gram–second system of units (CGS) (which predated the SI system) is the stilb, which is equal to one candela per square centimetre or 10 kcd/m2.
Description
Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. Luminance levels indicate how much luminous power could be detected by the human eye looking at a particular surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil.
Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits on the order of tens to a few hundred candelas per square metre; the Sun at noon is many orders of magnitude more luminous.
Luminance is invariant in geometric optics. This means that for an ideal optical system, the luminance at the output is the same as the input luminance.
For real, passive optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out to be the same assuming there is no loss at the lens. The image can never be "brighter" than the source.
Health effects
Retinal damage can occur when the eye is exposed to high luminance. Damage can occur because of local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.
The IEC 60825 series gives guidance on safety relating to exposure of the eye to lasers, which are high-luminance sources. The IEC 62471 series gives guidance for evaluating the photobiological safety of lamps and lamp systems, including luminaires. Specifically it specifies the exposure limits, reference measurement technique and classification scheme for the evaluation and control of photobiological hazards from all electrically powered incoherent broadband sources of optical radiation, including LEDs but excluding lasers, over a wavelength range spanning the ultraviolet through the infrared. This standard was prepared as Standard CIE S 009:2002 by the International Commission on Illumination.
Luminance meter
A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle. The simplest devices measure the luminance in a single direction while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images.
Formulation
The luminance of a specified point of a light source, in a specified direction, is defined by the mixed partial derivative
Lv = d²Φv / (dA dΩ cos θ)
where
Lv is the luminance (cd/m2);
dΦv is the luminous flux (lm) leaving the area dA in any direction contained inside the solid angle dΩ;
dA is an infinitesimal area (m2) of the source containing the specified point;
dΩ is an infinitesimal solid angle (sr) containing the specified direction; and
θ is the angle between the normal to the surface and the specified direction.
If light travels through a lossless medium, the luminance does not change along a given light ray. As the ray crosses an arbitrary surface S, the luminance is given by
Lv = d²Φv / (dS dΩS cos θS)
where
dS is the infinitesimal area of S seen from the source inside the solid angle dΩ;
dΩS is the infinitesimal solid angle subtended by dA as seen from dS; and
θS is the angle between the normal to dS and the direction of the light.
More generally, the luminance along a light ray can be defined as
Lv = n² dΦv/dG
where
dG is the etendue of an infinitesimally narrow beam containing the specified ray;
dΦv is the luminous flux carried by this beam; and
n is the index of refraction of the medium.
Relation to illuminance
The luminance of a reflecting surface is related to the illuminance it receives:
∫ΩΣ Lv cos θΣ dΩΣ = Mv = Ev R,
where the integral covers all the directions of emission ΩΣ,
Mv is the surface's luminous exitance;
Ev is the received illuminance; and
R is the reflectance.
In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply
Lv = Ev R / π.
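As a numerical illustration of the Lambertian case, the sketch below converts a received illuminance into the luminance of an ideal diffuse reflector; the illuminance and reflectance values are arbitrary assumptions.

```python
import math

def lambertian_luminance(illuminance_lux, reflectance):
    """Luminance (cd/m^2) of a perfectly diffuse (Lambertian) reflector: Lv = Ev * R / pi."""
    return illuminance_lux * reflectance / math.pi

# Illustrative values: 500 lx falling on a matte surface with 80% reflectance.
print(f"{lambertian_luminance(500.0, 0.80):.1f} cd/m^2")  # ≈ 127.3 cd/m^2
```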
Units
A variety of units have been used for luminance, besides the candela per square metre. Luminance is essentially the same as surface brightness, the term used in astronomy. This is measured with a logarithmic scale, magnitudes per square arcsecond (MPSAS).
| Physical sciences | Optics | Physics |
18369 | https://en.wikipedia.org/wiki/Lunar%20calendar | Lunar calendar | A lunar calendar is a calendar based on the monthly cycles of the Moon's phases (synodic months, lunations), in contrast to solar calendars, whose annual cycles are based on the solar year. The most widely observed purely lunar calendar is the Islamic calendar. A purely lunar calendar is distinguished from a lunisolar calendar, whose lunar months are brought into alignment with the solar year through some process of intercalationsuch as by insertion of a leap month. The details of when months begin vary from calendar to calendar, with some using new, full, or crescent moons and others employing detailed calculations.
Since each lunation is approximately 29.5 days long, it is common for the months of a lunar calendar to alternate between 29 and 30 days. Since the period of 12 such lunations, a lunar year, is 354 days, 8 hours, 48 minutes, 34 seconds (354.36707 days), purely lunar calendars are 11 to 12 days shorter than the solar year. In purely lunar calendars, which do not make use of intercalation, the lunar months cycle through all the seasons of a solar year over the course of a 33–34 lunar-year cycle (see, e.g., list of Islamic years).
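The figures above follow from the mean month and year lengths; the following minimal Python sketch derives the yearly shortfall and the length of the seasonal cycle, using standard mean values as assumptions.

```python
SYNODIC_MONTH = 29.530589   # mean length of a lunation, in days (assumed mean value)
TROPICAL_YEAR = 365.24219   # mean solar (tropical) year, in days (assumed mean value)

lunar_year = 12 * SYNODIC_MONTH          # ≈ 354.37 days
shortfall = TROPICAL_YEAR - lunar_year   # ≈ 10.9 days per year (roughly 11)
cycle = TROPICAL_YEAR / shortfall        # lunar years needed to drift through all seasons

print(f"lunar year ≈ {lunar_year:.2f} days, shortfall ≈ {shortfall:.2f} days/year")
print(f"months cycle through the seasons in about {cycle:.0f} lunar years")  # ≈ 34
```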
History
A lunisolar calendar was found at Warren Field in Scotland and has been dated to the Mesolithic period. Some scholars argue for lunar calendars still earlier—Rappenglück in the marks on a Palaeolithic cave painting at Lascaux and Marshack in the marks on a Palaeolithic bone baton—but their findings remain controversial. Scholars have argued that ancient hunters conducted regular astronomical observations of the Moon back in the Upper Palaeolithic. Samuel L. Macey dates the earliest uses of the Moon as a time-measuring device back to 28,000–30,000 years ago.
Start of the lunar month
Lunar and lunisolar calendars differ as to which day is the first day of the month. Some are based on the first sighting of the lunar crescent, such as the Hijri calendar observed by most of Islam. Alternatively, in some lunisolar calendars, such as the Hebrew calendar and Chinese calendar, the first day of a month is the day when an astronomical new moon occurs in a particular time zone. In others, such as some Hindu calendars, each month begins on the day after the full moon.
Length of the lunar month
The length of each lunar cycle varies slightly from the average value. In addition, observations are subject to uncertainty and weather conditions. Thus, to minimise uncertainty, there have been attempts to create fixed arithmetical rules to determine the start of each calendar month. The best known of these is the Tabular Islamic calendar: in brief, it has a 30-year cycle with 11 leap years of 355 days and 19 years of 354 days. In the long term, it is accurate to one day in about 2,500 solar years or 2,570 lunar years. It also deviates from observation by up to about one or two days in the short term. The algorithm was introduced by Muslim astronomers in the 8th century to predict the approximate date of the first crescent moon, which is used to determine the first day of each month in the Islamic lunar calendar.
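A minimal sketch of such a fixed arithmetical rule is shown below. It assumes one common convention for which years of the 30-year cycle are leap years (years 2, 5, 7, 10, 13, 16, 18, 21, 24, 26 and 29); several variants of the 11-in-30 pattern exist, so the leap-year set should be read as an assumption rather than the definitive rule.

```python
# One common leap-year pattern for the Tabular Islamic calendar (an assumption;
# several variants of the 11-in-30 pattern are in use).
LEAP_YEARS_IN_CYCLE = {2, 5, 7, 10, 13, 16, 18, 21, 24, 26, 29}

def is_tabular_leap_year(year_ah):
    """True if the given Hijri year is a 355-day leap year under this convention."""
    return ((year_ah - 1) % 30) + 1 in LEAP_YEARS_IN_CYCLE

def tabular_year_length(year_ah):
    return 355 if is_tabular_leap_year(year_ah) else 354

# Sanity check: a 30-year cycle should contain 11 leap years and 10,631 days,
# giving a mean month very close to the synodic month (10631 / 360 ≈ 29.5306 days).
print(sum(is_tabular_leap_year(y) for y in range(1, 31)),
      sum(tabular_year_length(y) for y in range(1, 31)))  # 11 10631
```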
List of lunar calendars
Lunar Hijri calendar
Javanese calendar
Lunisolar calendars
Most calendars referred to as "lunar" calendars are in fact lunisolar calendars. Their months are based on observations of the lunar cycle, with periodic intercalation being used to restore them into general agreement with the solar year. The solar "civic calendar" that was used in ancient Egypt showed traces of its origin in the earlier lunar calendar, which continued to be used alongside it for religious and agricultural purposes. Present-day lunisolar calendars include the Chinese, Korean, Vietnamese, Hindu, Hebrew and Thai calendars.
The most common form of intercalation is to add an additional month every second or third year. Some lunisolar calendars are also calibrated by annual natural events which are affected by lunar cycles as well as the solar cycle. An example of this is the lunisolar calendar of the Banks Islands, which includes three months in which the edible palolo worms mass on the beaches. These events occur at the last quarter of the lunar month, as the reproductive cycle of the palolos is synchronized with the moon.
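The "every second or third year" frequency follows from the mismatch between twelve lunations and the solar year; the following sketch makes the arithmetic explicit, again using standard mean values as assumptions.

```python
SYNODIC_MONTH = 29.530589   # mean lunation, days (assumed mean value)
TROPICAL_YEAR = 365.24219   # mean solar year, days (assumed mean value)

months_per_solar_year = TROPICAL_YEAR / SYNODIC_MONTH    # ≈ 12.368
extra_months_per_year = months_per_solar_year - 12       # ≈ 0.368
years_per_leap_month = 1 / extra_months_per_year         # ≈ 2.7

print(f"a solar year contains ≈ {months_per_solar_year:.3f} lunations")
print(f"so a 13th month is needed about every {years_per_leap_month:.1f} years "
      f"(roughly 7 times in 19 years: {19 * extra_months_per_year:.2f})")
```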
| Technology | Calendars | null |
18382 | https://en.wikipedia.org/wiki/Lunisolar%20calendar | Lunisolar calendar | A lunisolar calendar is a calendar used in many cultures that combines elements of lunar and solar calendars. The date of a lunisolar calendar therefore indicates both the Moon phase and the time of the solar year, that is the position of the Sun in the Earth's sky. If the sidereal year (such as in a sidereal solar calendar) is used instead of the solar year, then the calendar will predict the constellation near which the full moon may occur. As with all calendars which divide the year into months there is an additional requirement that the year have a whole number of months. In some cases ordinary years consist of twelve months but every second or third year is an embolismic year, which adds a thirteenth intercalary, embolismic, or leap month.
Their months are based on the regular cycle of the Moon's phases. Lunisolar calendars are therefore lunar calendars with additional intercalation rules that bring them into rough agreement with the solar year and thus with the seasons.
Examples
The Chinese, Buddhist, Burmese, Assyrian, Hebrew, Jain and Kurdish as well as the traditional Nepali, Hindu, Japanese, Korean, Mongolian, Tibetan, and Vietnamese calendars (in the East Asian Chinese cultural sphere), plus the ancient Hellenic, Coligny, and Babylonian calendars are all lunisolar. Also, some of the ancient pre-Islamic calendars in south Arabia followed a lunisolar system. The Chinese, Coligny and Hebrew lunisolar calendars track more or less the tropical year whereas the Buddhist and Hindu lunisolar calendars track the sidereal year. Therefore, the first three give an idea of the seasons whereas the last two give an idea of the position among the constellations of the full moon.
Chinese lunisolar calendar
The Chinese calendar or Chinese lunisolar calendar is also called the Agricultural Calendar [農曆; 农历; Nónglì; 'farming calendar'] or Yin Calendar [陰曆; 阴历; Yīnlì; 'yin calendar']. It is based on the concept of Yin and Yang and on astronomical phenomena: the movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (known as the seven luminaries) are the references for the Chinese lunisolar calendar calculations. The Chinese lunisolar calendar is the origin of some variant calendars adopted by other neighboring countries, such as Vietnam and Korea.
The traditional calendar used the sexagenary cycle-based ganzhi system's mathematically repeating cycles of Heavenly Stems and Earthly Branches. Together with astronomical, horological, and phenological observations, definitions, measurements, and predictions of years, months, and days were refined. Astronomical phenomena and calculations emphasized especially the efforts to mathematically correlate the solar and lunar cycles from the perspective of the earth, which however are known to require some degree of numeric approximation or compromise.
The earliest record of the Chinese lunisolar calendar was in the Zhou dynasty (1050 BC – 771 BC), around 3,000 years ago. Throughout history, the Chinese lunisolar calendar had many variations and evolved with different dynasties with increasing accuracy, including the "six ancient calendars" in the Warring States period, the Qin calendar in the Qin dynasty, the Han calendar or the Taichu calendar in the Han dynasty and Tang dynasty, the Shoushi calendar in the Yuan dynasty, and the Daming calendar in the Ming dynasty, etc. Since 1912, the solar calendar has been used together with the lunar calendar in China.
The most celebrated Chinese holidays, such as Spring Festival (Chunjie, 春節), also known as the Chinese New Year, Lantern Festival (元宵節), Mid-Autumn Festival (中秋節), Dragon Boat Festival (端午節), and Qingming Festival (清明節) are all based upon the Chinese lunisolar calendar. In addition, the popular Chinese zodiac is a classification scheme based on the Chinese calendar that assigns an animal and its reputed attributes to each year in a repeating twelve-year cycle.
Movable feasts in the Christian calendars, related to the lunar cycle
The Gregorian calendar (the world's most commonly used) is a solar one but the Western Christian churches use a lunar-based algorithm to determine the date of Easter and consequent movable feasts. Briefly, the date is determined with respect to the ecclesiastical full moon that follows the ecclesiastical equinox in March. (These events are almost, but not quite, the same as the actual astronomical observations.) The Eastern Christian churches have a similar algorithm that is based on the Julian calendar.
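The churches define the date through published tables; one widely reprinted arithmetical equivalent for the Western (Gregorian) date is the so-called anonymous Gregorian (Meeus–Jones–Butcher) algorithm, sketched below purely to illustrate how a lunar-based rule fixes the date. It is not presented here as the churches' own procedure.

```python
# Sketch of the "anonymous Gregorian" Easter computus (Meeus-Jones-Butcher form).
def gregorian_easter(year: int) -> tuple[int, int]:
    a = year % 19                     # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like quantity
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))   # (3, 31) -> 31 March 2024
print(gregorian_easter(2025))   # (4, 20) -> 20 April 2025
```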
Reconciling lunar and solar cycles
Determining leap months
A tropical year is approximately 365.2422 days long and a synodic month is approximately 29.5306 days long, so a tropical year is approximately 12.36826 months long. Because 0.36826 is between 1/3 and 1/2, a typical year of 12 months needs to be supplemented with one intercalary or leap month every 2 to 3 years. More precisely, 0.36826 is quite close to 7/19 (about 0.3684211): several lunisolar calendars have 7 leap months in every cycle of 19 years (called a 'Metonic cycle'). The Babylonians applied the 19-year cycle in the late sixth century BCE.
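The ratio and its approximation by 7/19 can be checked directly; the Python snippet below simply reproduces the arithmetic of this paragraph.

```python
from fractions import Fraction

tropical_year = 365.2422   # days
synodic_month = 29.5306    # days

months_per_year = tropical_year / synodic_month
print(round(months_per_year, 5))                   # 12.36826

# The fractional part is close to 7/19, which is why 7 leap months in a
# 19-year (Metonic) cycle keeps a lunisolar calendar aligned with the seasons.
print(float(Fraction(7, 19)))                      # 0.368421...
print(round(months_per_year - 12 - 7 / 19, 5))     # small residual (~ -0.00016)
```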
Intercalation of leap months is frequently controlled by the "epact", which is the difference between the lunar and solar years (approximately 11 days). The classic Metonic cycle can be reproduced by assigning an initial epact value of 1 to the last year of the cycle and incrementing by 11 each year. Between the last year of one cycle and the first year of the next, the increment is 12, which causes the epacts to repeat every 19 years. When the epact reaches 30 or higher, an intercalary month is added and 30 is subtracted. The Metonic cycle states that 7 of 19 years will contain an additional intercalary month and those years are numbered: 3, 6, 8, 11, 14, 17 and 19. Both the Hebrew calendar and the Julian calendar use this sequence.
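The epact rule just described can be turned into a few lines of Python that recover the leap-year pattern; this is a sketch of the arithmetic only, not of any particular calendar's official reckoning.

```python
def metonic_leap_years():
    """Reproduce the classic Metonic intercalation pattern from the epact rule
    above: epact 1 in the last year of the 19-year cycle, +11 per year,
    +12 when wrapping from year 19 to year 1, intercalate when >= 30."""
    leap_years = []
    epact = 1                               # epact of year 19 of the previous cycle
    for year in range(1, 20):
        epact += 12 if year == 1 else 11    # larger jump between cycles, else 11
        if epact >= 30:                     # epact overflow -> add a leap month
            epact -= 30
            leap_years.append(year)
    return leap_years

print(metonic_leap_years())   # [3, 6, 8, 11, 14, 17, 19]
```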
The Buddhist and Hebrew calendars restrict the leap month to a single month of the year; the number of common months between leap months is, therefore, usually 36, but occasionally only 24 months. Because the Chinese and Hindu lunisolar calendars allow the leap month to occur after or before (respectively) any month but use the true apparent motion of the Sun, their leap months do not usually occur within a couple of months of perihelion, when the apparent speed of the Sun along the ecliptic is fastest (now about 3 January). This increases the usual number of common months between leap months to roughly 34 months when a doublet of common years occurs, while reducing the number to about 29 months when only a common singleton occurs.
With uncounted time
An alternative way of dealing with the fact that a solar year does not contain an integer number of lunar months is by including uncounted time in a period of the year that is not assigned to a named month. Some Coast Salish peoples used a calendar of this kind. For instance, the Chehalis began their count of lunar months from the arrival of spawning chinook salmon (in Gregorian calendar October), and counted 10 months, leaving an uncounted period until the next chinook salmon run.
List of lunisolar calendars
The following is a list of lunisolar calendars sorted by family.
Babylonian calendar family – common use of the Metonic cycle
Ancient Macedonian calendar
Hebrew calendar
Umma calendar
Hindu calendar family – shared astronomical roots
Assamese calendar
Bikram Samvat
Burmese calendar (Pyu calendar)
Chula Sakarat
Nepal Sambat ("Nepali calendar")
Odia calendar
Pawukon calendar
Shaka Samvat
Suriyayatra calendar
Vira Nirvana Samvat (Jain calendar)
Chinese calendar family – years start on second new moon after winter solstice (save for leaps)
Japanese calendar
Korean calendar
Mongolian calendar
Tibetan calendar
Vietnamese calendar
Unclassified or independent
Attic calendar
A lunisolar calendar devised by Plethon, né Georgios Gemistos, in his Book of Laws
Egyptian calendar, Ptolemaic
Inca Empire
Celtic calendar, including Coligny calendar
Muisca calendar
Nisg̱a'a
Old Eastern Ojibwe calendar
Javanese calendar
| Technology | Calendars | null |
18393 | https://en.wikipedia.org/wiki/Life | Life | Life is a quality that distinguishes matter that has biological processes, such as signaling and self-sustaining processes, from matter that does not. It is defined descriptively by the capacity for homeostasis, organisation, metabolism, growth, adaptation, response to stimuli, and reproduction. All life over time eventually reaches a state of death, and none is immortal. Many philosophical definitions of living systems have been proposed, such as self-organizing systems. Viruses in particular make definition difficult as they replicate only in host cells. Life exists all over the Earth in air, water, and soil, with many ecosystems forming the biosphere. Some of these are harsh environments occupied only by extremophiles.
Life has been studied since ancient times, with theories such as Empedocles's materialism asserting that it was composed of four eternal elements, and Aristotle's hylomorphism asserting that living things have souls and embody both form and matter. Life originated at least 3.5 billion years ago, resulting in a universal common ancestor. This evolved into all the species that exist now, by way of many extinct species, some of which have left traces as fossils. Attempts to classify living things, too, began with Aristotle. Modern classification began with Carl Linnaeus's system of binomial nomenclature in the 1740s.
Living things are composed of biochemical molecules, formed mainly from a few core chemical elements. All living things contain two types of large molecule, proteins and nucleic acids, the latter usually both DNA and RNA: these carry the information needed by each species, including the instructions to make each type of protein. The proteins, in turn, serve as the machinery which carries out the many chemical processes of life. The cell is the structural and functional unit of life. Smaller organisms, including prokaryotes (bacteria and archaea), consist of small single cells. Larger organisms, mainly eukaryotes, can consist of single cells or may be multicellular with more complex structure. Life is only known to exist on Earth but extraterrestrial life is thought probable. Artificial life is being simulated and explored by scientists and engineers.
Definitions
Challenge
The definition of life has long been a challenge for scientists and philosophers. This is partially because life is a process, not a substance. This is complicated by a lack of knowledge of the characteristics of living entities, if any, that may have developed outside Earth. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. Legal definitions of life have been debated, though these generally focus on the decision to declare a human dead, and the legal ramifications of this decision. At least 123 definitions of life have been compiled.
Descriptive
Since there is no consensus for a definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This implies all or most of the following traits:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
Organisation: being structurally composed of one or more cells – the basic units of life.
Metabolism: transformation of energy, used to convert chemicals into cellular components (anabolism) and to decompose organic matter (catabolism). Living things require energy for homeostasis and other activities.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size and structure.
Adaptation: the evolutionary process whereby an organism becomes better able to live in its habitat.
Response to stimuli: such as the contraction of a unicellular organism away from external chemicals, the complex reactions involving all the senses of multicellular organisms, or the motion of the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Physics
From a physics perspective, an organism is a thermodynamic system with an organised molecular structure that can reproduce itself and evolve as survival dictates. Thermodynamically, life has been described as an open system which makes use of gradients in its surroundings to create imperfect copies of itself. Another way of putting this is to define life as "a self-sustained chemical system capable of undergoing Darwinian evolution", a definition adopted by a NASA committee attempting to define life for the purposes of exobiology, based on a suggestion by Carl Sagan. This definition, however, has been widely criticised because according to it, a single sexually reproducing individual is not alive as it is incapable of evolving on its own.
Living systems
Others take a living systems theory viewpoint that does not necessarily depend on molecular chemistry. One systemic definition of life is that living things are self-organizing and autopoietic (self-producing). Variations of this include Stuart Kauffman's definition as an autonomous agent or a multi-agent system capable of reproducing itself, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
Death
Death is the termination of all vital functions or life processes in an organism or cell.
One of the challenges in defining death is in distinguishing it from life. Death would seem to refer to either the moment life ends, or when the state that follows life begins. However, determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing conceptual lines between life and death. This is problematic because there is little consensus over how to define life. The nature of death has for millennia been a central concern of the world's religious traditions and of philosophical inquiry. Many religions maintain faith in either a kind of afterlife or reincarnation for the soul, or resurrection of the body at a later date.
Viruses
Whether or not viruses should be considered as alive is controversial. They are most often considered as just gene coding replicators rather than forms of life. They have been described as "organisms at the edge of life" because they possess genes, evolve by natural selection, and replicate by making multiple copies of themselves through self-assembly. However, viruses do not metabolise and they require a host cell to make new products. Virus self-assembly within host cells has implications for the study of the origin of life, as it may support the hypothesis that life could have started as self-assembling organic molecules.
History of study
Materialism
Some of the earliest theories of life were materialist, holding that all that exists is matter, and that life is merely a complex form or arrangement of matter. Empedocles (430 BC) argued that everything in the universe is made up of a combination of four eternal "elements" or "roots of all": earth, water, air, and fire. All change is explained by the arrangement and rearrangement of these four elements. The various forms of life are caused by an appropriate mixture of elements.
Democritus (460 BC) was an atomist; he thought that the essential characteristic of life was having a soul (psyche), and that the soul, like everything else, was composed of fiery atoms. He elaborated on fire because of the apparent connection between life and heat, and because fire moves.
Plato, in contrast, held that the world was organised by permanent forms, reflected imperfectly in matter; forms provided direction or intelligence, explaining the regularities observed in the world. The mechanistic materialism that originated in ancient Greece was revived and revised by the French philosopher René Descartes (1596–1650), who held that animals and humans were assemblages of parts that together functioned as a machine. This idea was developed further by Julien Offray de La Mettrie (1709–1750) in his book L'Homme Machine. In the 19th century the advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection. At the beginning of the 20th century Stéphane Leduc (1853–1939) promoted the idea that biological processes could be understood in terms of physics and chemistry, and that their growth resembled that of inorganic crystals immersed in solutions of sodium silicate. His ideas, set out in his book La biologie synthétique, were widely dismissed during his lifetime, but have seen a resurgence of interest in the work of Russell, Barge and colleagues.
Hylomorphism
Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima). There are three kinds of souls: the vegetative soul of plants, which causes them to grow and decay and nourish themselves, but does not cause motion and sensation; the animal soul, which causes animals to move and feel; and the rational soul, which is the source of consciousness and reasoning, which (Aristotle believed) is found only in man. Each higher soul has all of the attributes of the lower ones. Aristotle believed that while matter can exist without form, form cannot exist without matter, and that therefore the soul cannot exist without the body.
This account is consistent with teleological explanations of life, which account for phenomena in terms of purpose or goal-directedness. Thus, the whiteness of the polar bear's coat is explained by its purpose of camouflage. The direction of causality (from the future to the past) is in contradiction with the scientific evidence for natural selection, which explains the consequence in terms of a prior cause. Biological features are explained not by looking at future optimal results, but by looking at the past evolutionary history of a species, which led to the natural selection of the features in question.
Spontaneous generation
Spontaneous generation was the belief that living organisms can form without descent from similar organisms. Typically, the idea was that certain forms such as fleas could arise from inanimate matter such as dust or the supposed seasonal generation of mice and insects from mud or garbage.
The theory of spontaneous generation was proposed by Aristotle, who compiled and expanded the work of prior natural philosophers and the various ancient explanations of the appearance of organisms; it was considered the best explanation for two millennia. It was decisively dispelled by the experiments of Louis Pasteur in 1859, who expanded upon the investigations of predecessors such as Francesco Redi. Disproof of the traditional ideas of spontaneous generation is no longer controversial among biologists.
Vitalism
Vitalism is the belief that there is a non-material life-principle. This originated with Georg Ernst Stahl (17th century), and remained popular until the middle of the 19th century. It appealed to philosophers such as Henri Bergson, Friedrich Nietzsche, and Wilhelm Dilthey, anatomists like Xavier Bichat, and chemists like Justus von Liebig. Vitalism included the idea that there was a fundamental difference between organic and inorganic material, and the belief that organic material can only be derived from living things. This was disproved in 1828, when Friedrich Wöhler prepared urea from inorganic materials. This Wöhler synthesis is considered the starting point of modern organic chemistry. It is of historical significance because for the first time an organic compound was produced in inorganic reactions.
During the 1850s Hermann von Helmholtz, anticipated by Julius Robert von Mayer, demonstrated that no energy is lost in muscle movement, suggesting that there were no "vital forces" necessary to move a muscle. These results led to the abandonment of scientific interest in vitalistic theories, especially after Eduard Buchner's demonstration that alcoholic fermentation could occur in cell-free extracts of yeast. Nonetheless, belief still exists in pseudoscientific theories such as homoeopathy, which interprets diseases and sickness as caused by disturbances in a hypothetical vital force or life force.
Development
Origin of life
The age of Earth is about 4.54 billion years. Life on Earth has existed for at least 3.5 billion years, with the oldest physical traces of life dating back 3.7 billion years. Estimates from molecular clocks, as summarised in the TimeTree public database, place the origin of life around 4.0 billion years ago. Hypotheses on the origin of life attempt to explain the formation of a universal common ancestor from simple organic molecules via pre-cellular life to protocells and metabolism. In 2016, a set of 355 genes from the last universal common ancestor was tentatively identified.
The biosphere is postulated to have developed, from the origin of life onwards, at least some 3.5 billion years ago. The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilised microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Evolution
Evolution is the change in heritable characteristics of biological populations over successive generations. It results in the appearance of new species and often the disappearance of old ones. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on genetic variation, resulting in certain characteristics increasing or decreasing in frequency within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
Fossils
Fossils are the preserved remains or traces of organisms from the remote past. The totality of fossils, both discovered and undiscovered, and their placement in layers (strata) of sedimentary rock is known as the fossil record. A preserved specimen is called a fossil if it is older than the arbitrary date of 10,000 years ago. Hence, fossils range in age from the youngest at the start of the Holocene Epoch to the oldest from the Archaean Eon, up to 3.4 billion years old.
Extinction
Extinction is the process by which a species dies out. The moment of extinction is the death of the last individual of that species. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively after a period of apparent absence. Species become extinct when they are no longer able to survive in changing habitat or against superior competition. Over 99% of all the species that have ever lived are now extinct. Mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Environmental conditions
The diversity of life on Earth is a result of the dynamic interplay between genetic opportunity, metabolic capability, environmental challenges, and symbiosis. For most of its existence, Earth's habitable environment has been dominated by microorganisms and subjected to their metabolism and evolution. As a consequence of these microbial activities, the physical-chemical environment on Earth has been changing on a geologic time scale, thereby affecting the path of evolution of subsequent life. For example, the release of molecular oxygen by cyanobacteria as a by-product of photosynthesis induced global changes in the Earth's environment. Because oxygen was toxic to most life on Earth at the time, this posed novel evolutionary challenges, and ultimately resulted in the formation of Earth's major animal and plant species. This interplay between organisms and their environment is an inherent feature of living systems.
Biosphere
The biosphere is the global sum of all ecosystems. It can also be termed the zone of life on Earth, a closed system (apart from solar and cosmic radiation and heat from the interior of the Earth), and largely self-regulating. Organisms exist in every part of the biosphere, including soil, hot springs, inside rocks deep underground, the deepest parts of the ocean, and high in the atmosphere. For example, spores of Aspergillus niger have been detected in the mesosphere at an altitude of 48 to 77 km. Under test conditions, life forms have been observed to survive in the vacuum of space. Life forms thrive in the deep Mariana Trench, inside rocks below the sea floor off the coast of the northwestern United States, and beneath the seabed off Japan. In 2014, life forms were found living beneath the ice of Antarctica. Expeditions of the International Ocean Discovery Program found unicellular life in 120 °C sediment 1.2 km below seafloor in the Nankai Trough subduction zone. According to one researcher, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are."
Range of tolerance
The inert components of an ecosystem are the physical and chemical factors necessary for life—energy (sunlight or chemical energy), water, heat, atmosphere, gravity, nutrients, and ultraviolet solar radiation protection. In most ecosystems, the conditions vary during the day and from one season to the next. To live in most ecosystems, then, organisms must be able to survive a range of conditions, called the "range of tolerance". Outside that are the "zones of physiological stress", where the survival and reproduction are possible but not optimal. Beyond these zones are the "zones of intolerance", where survival and reproduction of that organism is unlikely or impossible. Organisms that have a wide range of tolerance are more widely distributed than organisms with a narrow range of tolerance.
Extremophiles
To survive, some microorganisms have evolved to withstand freezing, complete desiccation, starvation, high levels of radiation exposure, and other physical or chemical challenges. These extremophile microorganisms may survive exposure to such conditions for long periods. They excel at exploiting uncommon sources of energy. Characterization of the structure and metabolic diversity of microbial communities in such extreme environments is ongoing.
Classification
Antiquity
The first classification of organisms was made by the Greek philosopher Aristotle (384–322 BC), who grouped living things as either plants or animals, based mainly on their ability to move. He distinguished animals with blood from animals without blood, which can be compared with the concepts of vertebrates and invertebrates respectively, and divided the blooded animals into five groups: viviparous quadrupeds (mammals), oviparous quadrupeds (reptiles and amphibians), birds, fishes and whales. The bloodless animals were divided into five groups: cephalopods, crustaceans, insects (which included the spiders, scorpions, and centipedes), shelled animals (such as most molluscs and echinoderms), and "zoophytes" (animals that resemble plants). This theory remained dominant for more than a thousand years.
Linnaean
In the late 1740s, Carl Linnaeus introduced his system of binomial nomenclature for the classification of species. Linnaeus attempted to improve the composition and reduce the length of the previously used many-worded names by abolishing unnecessary rhetoric, introducing new descriptive terms and precisely defining their meaning.
The fungi were originally treated as plants. For a short period Linnaeus had classified them in the taxon Vermes in Animalia, but later placed them back in Plantae. Herbert Copeland classified the Fungi in his Protoctista, including them with single-celled organisms and thus partially avoiding the problem but acknowledging their special status. The problem was eventually solved by Whittaker, when he gave them their own kingdom in his five-kingdom system. Evolutionary history shows that the fungi are more closely related to animals than to plants.
As advances in microscopy enabled detailed study of cells and microorganisms, new groups of life were revealed, and the fields of cell biology and microbiology were created. These new organisms were originally described separately in protozoa as animals and protophyta/thallophyta as plants, but were united by Ernst Haeckel in the kingdom Protista; later, the prokaryotes were split off in the kingdom Monera, which would eventually be divided into two separate groups, the Bacteria and the Archaea. This led to the six-kingdom system and eventually to the current three-domain system, which is based on evolutionary relationships. However, the classification of eukaryotes, especially of protists, is still controversial.
As microbiology developed, viruses, which are non-cellular, were discovered. Whether these are considered alive has been a matter of debate; viruses lack characteristics of life such as cell membranes, metabolism and the ability to grow or respond to their environments. Viruses have been classed into "species" based on their genetics, but many aspects of such a classification remain controversial.
The original Linnaean system has been modified many times.
The attempt to organise the Eukaryotes into a small number of kingdoms has been challenged. The Protozoa do not form a clade or natural grouping, and nor do the Chromista (Chromalveolata).
Metagenomic
The ability to sequence large numbers of complete genomes has allowed biologists to take a metagenomic view of the phylogeny of the whole tree of life. This has led to the realisation that the majority of living things are bacteria, and that all have a common origin.
Composition
Chemical elements
All life forms require certain core chemical elements for their biochemical functioning. These include carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur—the elemental macronutrients for all organisms. Together these make up nucleic acids, proteins and lipids, the bulk of living matter. Five of these six elements comprise the chemical components of DNA, the exception being sulfur. The latter is a component of the amino acids cysteine and methionine. The most abundant of these elements in organisms is carbon, which has the desirable attribute of forming multiple, stable covalent bonds. This allows carbon-based (organic) molecules to form the immense variety of chemical arrangements described in organic chemistry.
Alternative hypothetical types of biochemistry have been proposed that eliminate one or more of these elements, swap out an element for one not on the list, or change required chiralities or other chemical properties.
DNA
Deoxyribonucleic acid or DNA is a molecule that carries most of the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of macromolecule that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides since they are composed of simpler units called nucleotides. Each nucleotide is composed of a nitrogen-containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar called deoxyribose and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. According to base pairing rules (A with T, and C with G), hydrogen bonds bind the nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA. This has the key property that each strand contains all the information needed to recreate the other strand, enabling the information to be preserved during reproduction and cell division. Within cells, DNA is organised into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotes store most of their DNA inside the cell nucleus.
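As a small illustration of the base-pairing rules just described, the following Python sketch derives one strand from the other; the example sequence is arbitrary and chosen only to show the complementarity.

```python
# Base-pairing rules: A pairs with T, C pairs with G. Each strand therefore
# determines the other (the complement is read in the opposite direction).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(sequence: str) -> str:
    """Return the reverse-complement of a DNA sequence."""
    return "".join(PAIR[base] for base in reversed(sequence.upper()))

print(complementary_strand("ATGCGT"))   # ACGCAT
```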
Cells
Cells are the basic unit of structure in every living thing, and all cells arise from pre-existing cells by division. Cell theory was formulated by Henri Dutrochet, Theodor Schwann, Rudolf Virchow and others during the early nineteenth century, and subsequently became widely accepted. The activity of an organism depends on the total activity of its cells, with energy flow occurring within and between them. Cells contain hereditary information that is carried forward as a genetic code during cell division.
There are two primary types of cells, reflecting their evolutionary origins. Prokaryote cells lack a nucleus and other membrane-bound organelles, although they have circular DNA and ribosomes. Bacteria and Archaea are two domains of prokaryotes. The other primary type is the eukaryote cell, which has a distinct nucleus bound by a nuclear membrane and membrane-bound organelles, including mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic reticulum, and vacuoles. In addition, their DNA is organised into chromosomes. All species of large complex organisms are eukaryotes, including animals, plants and fungi, though with a wide diversity of protist microorganisms. The conventional model is that eukaryotes evolved from prokaryotes, with the main organelles of the eukaryotes forming through endosymbiosis between bacteria and the progenitor eukaryotic cell.
The molecular mechanisms of cell biology are based on proteins. Most of these are synthesised by the ribosomes through an enzyme-catalyzed process called protein biosynthesis. A sequence of amino acids is assembled and joined based upon gene expression of the cell's nucleic acid. In eukaryotic cells, these proteins may then be transported and processed through the Golgi apparatus in preparation for dispatch to their destination.
Cells reproduce through a process of cell division in which the parent cell divides into two or more daughter cells. For prokaryotes, cell division occurs through a process of fission in which the DNA is replicated, then the two copies are attached to parts of the cell membrane. In eukaryotes, a more complex process of mitosis is followed. However, the result is the same; the resulting cell copies are identical to each other and to the original cell (except for mutations), and both are capable of further division following an interphase period.
Multicellular structure
Multicellular organisms may have first evolved through the formation of colonies of identical cells. These cells can form group organisms through cell adhesion. The individual members of a colony are capable of surviving on their own, whereas the members of a true multi-cellular organism have developed specialisations, making them dependent on the remainder of the organism for survival. Such organisms are formed clonally or from a single germ cell that is capable of forming the various specialised cells that form the adult organism. This specialisation allows multicellular organisms to exploit resources more efficiently than single cells. About 800 million years ago, a minor genetic change in a single molecule, the enzyme GK-PID, may have allowed organisms to go from a single cell organism to one of many cells.
Cells have evolved methods to perceive and respond to their microenvironment, thereby enhancing their adaptability. Cell signalling coordinates cellular activities, and hence governs the basic functions of multicellular organisms. Signaling between cells can occur through direct cell contact using juxtacrine signalling, or indirectly through the exchange of agents as in the endocrine system. In more complex organisms, coordination of activities can occur through a dedicated nervous system.
In the universe
Though life is confirmed only on Earth, many think that extraterrestrial life is not only plausible, but probable or inevitable, possibly resulting in a biophysical cosmology instead of a mere physical cosmology. Other planets and moons in the Solar System and other planetary systems are being examined for evidence of having once supported simple life, and projects such as SETI are trying to detect radio transmissions from possible alien civilisations. Other locations within the Solar System that may host microbial life include the subsurface of Mars, the upper atmosphere of Venus, and subsurface oceans on some of the moons of the giant planets.
Investigation of the tenacity and versatility of life on Earth, as well as an understanding of the molecular systems that some organisms utilise to survive such extremes, is important for the search for extraterrestrial life. For example, lichen could survive for a month in a simulated Martian environment.
Beyond the Solar System, the region around another main-sequence star that could support Earth-like life on an Earth-like planet is known as the habitable zone. The inner and outer radii of this zone vary with the luminosity of the star, as does the time interval during which the zone survives. Stars more massive than the Sun have a larger habitable zone, but remain on the Sun-like "main sequence" of stellar evolution for a shorter time interval. Small red dwarfs have the opposite problem, with a smaller habitable zone that is subject to higher levels of magnetic activity and the effects of tidal locking from close orbits. Hence, stars in the intermediate mass range such as the Sun may have a greater likelihood for Earth-like life to develop. The location of the star within a galaxy may also affect the likelihood of life forming. Stars in regions with a greater abundance of heavier elements that can form planets, in combination with a low rate of potentially habitat-damaging supernova events, are predicted to have a higher probability of hosting planets with complex life. The variables of the Drake equation are used to discuss the conditions in planetary systems where civilisation is most likely to exist, within wide bounds of uncertainty. A "Confidence of Life Detection" scale (CoLD) for reporting evidence of life beyond Earth has been proposed.
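The Drake equation mentioned above is a simple product of factors; the Python sketch below shows only the arithmetic, and every parameter value in it is an illustrative assumption rather than an estimate.

```python
# Drake equation: N = R* . fp . ne . fl . fi . fc . L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Product of the Drake-equation factors; all inputs are assumptions."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative numbers, chosen only to show how the factors combine.
print(drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1, L=1000.0))  # 5.0
```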
Artificial
Artificial life is the simulation of any aspect of life, as through computers, robotics, or biochemistry. Synthetic biology is a new area of biotechnology that combines science and biological engineering. The common goal is the design and construction of new biological functions and systems not found in nature. Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health and the environment.
| Biology and health sciences | null | null |
18401 | https://en.wikipedia.org/wiki/Laminar%20flow | Laminar flow | Laminar flow is the property of fluid particles in fluid dynamics to follow smooth paths in layers, with each layer moving smoothly past the adjacent layers with little or no mixing. At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another smoothly. There are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids. In laminar flow, the motion of the particles of the fluid is very orderly with particles close to a solid surface moving in straight lines parallel to that surface.
Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection.
When a fluid is flowing through a closed channel such as a pipe or between two flat plates, either of two types of flow may occur depending on the velocity and viscosity of the fluid: laminar flow or turbulent flow. Laminar flow occurs at lower velocities, below a threshold at which the flow becomes turbulent. The threshold velocity is determined by a dimensionless parameter characterizing the flow called the Reynolds number, which also depends on the viscosity and density of the fluid and dimensions of the channel. Turbulent flow is a less orderly flow regime that is characterized by eddies or small packets of fluid particles, which result in lateral mixing. In non-scientific terms, laminar flow is smooth, while turbulent flow is rough.
Relationship with the Reynolds number
The type of flow occurring in a fluid in a channel is important in fluid-dynamics problems and subsequently affects heat and mass transfer in fluid systems. The dimensionless Reynolds number is an important parameter in the equations that describe whether fully developed flow conditions lead to laminar or turbulent flow. The Reynolds number is the ratio of the inertial force to the shearing force of the fluid: how fast the fluid is moving relative to how viscous it is, irrespective of the scale of the fluid system. Laminar flow generally occurs when the fluid is moving slowly or the fluid is very viscous. As the Reynolds number increases, such as by increasing the flow rate of the fluid, the flow will transition from laminar to turbulent flow at a specific range of Reynolds numbers, the laminar–turbulent transition range depending on small disturbance levels in the fluid or imperfections in the flow system. If the Reynolds number is very small, much less than 1, then the fluid will exhibit Stokes, or creeping, flow, where the viscous forces of the fluid dominate the inertial forces.
The specific calculation of the Reynolds number, and the values where laminar flow occurs, will depend on the geometry of the flow system and flow pattern. The common example is flow through a pipe, where the Reynolds number is defined as

\mathrm{Re} = \frac{\rho u D_\mathrm{H}}{\mu} = \frac{u D_\mathrm{H}}{\nu} = \frac{Q D_\mathrm{H}}{\nu A}

where:
D_H is the hydraulic diameter of the pipe (m);
Q is the volumetric flow rate (m3/s);
A is the pipe's cross-sectional area (m2);
u is the mean speed of the fluid (SI units: m/s);
μ is the dynamic viscosity of the fluid (Pa·s = N·s/m2 = kg/(m·s));
ν = μ/ρ is the kinematic viscosity of the fluid (m2/s);
ρ is the density of the fluid (kg/m3).
For such systems, laminar flow occurs when the Reynolds number is below a critical value of approximately 2,040, though the transition range is typically between 1,800 and 2,100.
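As a rough illustration (not an engineering tool), the following Python sketch computes the pipe Reynolds number from the quantities defined above and classifies the regime using the transition range quoted here; the fluid properties in the example are assumed, approximate values for water.

```python
# Compute Re = rho * u * D_H / mu for pipe flow and classify the regime.
def reynolds_number(density, mean_speed, hydraulic_diameter, dynamic_viscosity):
    return density * mean_speed * hydraulic_diameter / dynamic_viscosity

def flow_regime(re, laminar_limit=1800, turbulent_limit=2100):
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Example: water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 Pa*s) at 0.05 m/s in a 25 mm pipe.
re = reynolds_number(1000.0, 0.05, 0.025, 1.0e-3)
print(re, flow_regime(re))   # 1250.0 laminar
```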
For fluid systems occurring on external surfaces, such as flow past objects suspended in the fluid, other definitions for Reynolds numbers can be used to predict the type of flow around the object. The particle Reynolds number Rep would be used for particles suspended in flowing fluids, for example. As with flow in pipes, laminar flow typically occurs with lower Reynolds numbers, while turbulent flow and related phenomena, such as vortex shedding, occur with higher Reynolds numbers.
Examples
A common application of laminar flow is in the smooth flow of a viscous liquid through a tube or pipe. In that case, the velocity of flow varies from zero at the walls to a maximum along the cross-sectional centre of the vessel. The flow profile of laminar flow in a tube can be calculated by dividing the flow into thin cylindrical elements and applying the viscous force to them.
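The thin-cylindrical-shell argument leads to the familiar parabolic (Hagen–Poiseuille) profile, zero at the wall and maximal on the axis. The Python sketch below evaluates that profile for assumed, illustrative values of the pressure drop, viscosity, tube radius and length.

```python
import numpy as np

# Parabolic Hagen-Poiseuille profile for laminar flow in a circular tube:
# u(r) = (dP / (4 * mu * L)) * (R**2 - r**2)
def poiseuille_profile(r, radius, pressure_drop, viscosity, length):
    return pressure_drop / (4.0 * viscosity * length) * (radius**2 - r**2)

R = 0.01                                  # tube radius in metres (illustrative)
r = np.linspace(0.0, R, 5)                # sample points from centreline to wall
u = poiseuille_profile(r, R, pressure_drop=100.0, viscosity=1.0e-3, length=1.0)
print(u)                                  # decreases from the centreline to 0 at the wall
```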
Another example is the flow of air over an aircraft wing. The boundary layer is a very thin sheet of air lying over the surface of the wing (and all other surfaces of the aircraft). Because air has viscosity, this layer of air tends to adhere to the wing. As the wing moves forward through the air, the boundary layer at first flows smoothly over the streamlined shape of the airfoil. Here, the flow is laminar and the boundary layer is a laminar layer. Prandtl applied the concept of the laminar boundary layer to airfoils in 1904.
An everyday example is the slow, smooth and optically transparent flow of shallow water over a smooth barrier.
When water leaves a tap without an aerator with little force, it first exhibits laminar flow, but as acceleration by the force of gravity immediately sets in, the Reynolds number of the flow increases with speed, and the laminar flow of the water downstream from the tap can transition to turbulent flow. Optical transparency is then reduced or lost entirely.
Laminar flow barriers
Laminar airflow is used to separate volumes of air, or prevent airborne contaminants from entering an area. Laminar flow hoods are used to exclude contaminants from sensitive processes in science, electronics and medicine. Air curtains are frequently used in commercial settings to keep heated or refrigerated air from passing through doorways. A laminar flow reactor (LFR) is a reactor that uses laminar flow to study chemical reactions and process mechanisms. A laminar flow design for animal husbandry of rats for disease management was developed by Beall et al. 1971 and became a standard around the world including in the then-Eastern Bloc.
| Physical sciences | Fluid mechanics | Physics |
18404 | https://en.wikipedia.org/wiki/Lorentz%20transformation | Lorentz transformation | In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz.
The most common form of the transformation, parametrized by the real constant v representing a velocity confined to the x-direction, is expressed as

t' = \gamma \left( t - \frac{v x}{c^2} \right)
x' = \gamma \left( x - v t \right)
y' = y
z' = z

where (t, x, y, z) and (t', x', y', z') are the coordinates of an event in two frames with the spatial origins coinciding at t = t' = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, \gamma grows without bound. The value of v must be smaller than c for the transformation to make sense.

Expressing the speed as \beta = v/c, an equivalent form of the transformation is

ct' = \gamma \left( ct - \beta x \right)
x' = \gamma \left( x - \beta ct \right)
y' = y
z' = z
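A small numerical illustration of these boost formulas in Python; the event coordinates and the boost speed are arbitrary choices used only for the example.

```python
import math

def lorentz_boost(t, x, y, z, v, c=299_792_458.0):
    """Transform event coordinates from the unprimed frame to a frame moving
    with speed v along the +x axis (standard configuration)."""
    if abs(v) >= c:
        raise ValueError("|v| must be smaller than c")
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime, y, z   # y and z are unchanged by an x-boost

# Event at t = 1 s, x = 1e8 m, viewed from a frame moving at 0.5c:
print(lorentz_boost(1.0, 1.0e8, 0.0, 0.0, v=0.5 * 299_792_458.0))
```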
Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity.
In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame.
They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity.
Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity.
The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
History
Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905.
Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and he named it after Lorentz.
Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary.
Derivation of the group of Lorentz transformations
An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates to specify position in space in that frame. Subscripts label individual events.
From Einstein's second postulate of relativity (invariance of c) it follows that:

c^2 (t_2 - t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2 = 0

in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events a_1 = (ct_1, x_1, y_1, z_1) and a_2 = (ct_2, x_2, y_2, z_2). The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that:

c^2 (t_2 - t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2 = c^2 (t'_2 - t'_1)^2 - (x'_2 - x'_1)^2 - (y'_2 - y'_1)^2 - (z'_2 - z'_1)^2

where (ct, x, y, z) are the spacetime coordinates used to define events in one frame, and (ct', x', y', z') are the coordinates in another frame. First one observes that this condition is satisfied if an arbitrary 4-tuple of numbers is added to events a_1 and a_2. Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too:

c^2 t^2 - x^2 - y^2 - z^2 = c^2 t'^2 - x'^2 - y'^2 - z'^2, \qquad c^2 t_1 t_2 - x_1 x_2 - y_1 y_2 - z_1 z_2 = c^2 t'_1 t'_2 - x'_1 x'_2 - y'_1 y'_2 - z'_1 z'_2

(a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature. The first of these formulas can be written more compactly as:

(a, a) = (a', a') \quad \text{or} \quad a \cdot a = a' \cdot a'

where (·, ·) refers to the bilinear form of signature (1, 3) on R^4 exposed by the second formula above. The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime mathematically viewed as R^4 endowed with this bilinear form is known as Minkowski space M. The Lorentz transformation is thus an element of the group O(1, 3), the Lorentz group or, for those that prefer the other metric signature, O(3, 1) (also called the Lorentz group). One has:

(\Lambda a_1, \Lambda a_2) = (a_1, a_2)

which is precisely preservation of the bilinear form, which implies (by linearity of \Lambda and bilinearity of the form) that the invariance condition above is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group.
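A quick numerical check that a boost along x preserves the interval, in Python; the speed and event coordinates are arbitrary, and units with c = 1 are assumed for brevity.

```python
import math

# Check that c^2 t^2 - x^2 - y^2 - z^2 is unchanged by an x-boost (c = 1 units).
v = 0.8
gamma = 1.0 / math.sqrt(1.0 - v**2)

t, x, y, z = 2.0, 1.5, 0.3, -0.7
t_p, x_p = gamma * (t - v * x), gamma * (x - v * t)   # y and z are unchanged

interval = t**2 - x**2 - y**2 - z**2
interval_p = t_p**2 - x_p**2 - y**2 - z**2
print(math.isclose(interval, interval_p))   # True
```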
Generalities
The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations, each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations.
Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is rotation in the spatial coordinates only; these, like boosts, are inertial transformations since there is no relative motion: the frames are simply tilted (and not continuously rotating), and in this case the quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and boost is a homogeneous transformation, which transforms the origin back to the origin.
The full Lorentz group also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed.
Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group.
Physical formulation of Lorentz boosts
Coordinate transformation
A "stationary" observer in frame defines events with coordinates . Another frame moves with velocity relative to , and an observer in this "moving" frame defines events using the coordinates .
The coordinate axes in each frame are parallel (the and axes are parallel, the and axes are parallel, and the and axes are parallel), remain mutually perpendicular, and relative motion is along the coincident axes. At , the origins of both coordinate systems are the same, . In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized.
If an observer in records an event , then an observer in records the same event with coordinates
where is the relative velocity between frames in the -direction, is the speed of light, and
(lowercase gamma) is the Lorentz factor.
Here, is the parameter of the transformation, for a given boost it is a constant number, but can take a continuous range of values. In the setup used here, positive relative velocity is motion along the positive directions of the axes, zero relative velocity is no relative motion, while negative relative velocity is relative motion along the negative directions of the axes. The magnitude of relative velocity cannot equal or exceed , so only subluminal speeds are allowed. The corresponding range of is .
The transformations are not defined if is outside these limits. At the speed of light () is infinite, and faster than light () is a complex number, each of which makes the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers.
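As an illustration of the boost described above, the following is a minimal numerical sketch (the function name and example numbers are illustrative, not taken from the text); it implements the standard-configuration transformation t' = gamma*(t - v*x/c^2), x' = gamma*(x - v*t), with y and z unchanged:

```python
import math

def boost_x(t, x, y, z, v, c=299_792_458.0):
    """Standard-configuration Lorentz boost along the x axis.
    Returns the event coordinates (t', x', y', z') seen in the frame
    moving with velocity v (|v| < c) along +x."""
    beta = v / c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor
    t_p = gamma * (t - v * x / c**2)
    x_p = gamma * (x - v * t)
    return t_p, x_p, y, z                    # y and z are unchanged

# Example: an event one light-second away along x after 2 s, seen from a frame at 0.6c.
c = 299_792_458.0
print(boost_x(2.0, c * 1.0, 0.0, 0.0, 0.6 * c))
```

Applying the same function with the velocity reversed to the primed coordinates recovers the original event, which is the inverse transformation discussed in the following paragraphs.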
As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the axes, because of the in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the axes, while the event does not change and is simply represented in another coordinate system, a passive transformation.
The inverse relations ( in terms of ) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here is the "stationary" frame while is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from to must take exactly the same form as the transformations from to . The only difference is moves with velocity relative to (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in notes an event , then an observer in notes the same event with coordinates
and the value of remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction.
Sometimes it is more convenient to use (lowercase beta) instead of , so that
which shows much more clearly the symmetry in the transformation. From the allowed ranges of and the definition of , it follows . The use of and is standard throughout the literature.
When the boost velocity is in an arbitrary vector direction with the boost vector , then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by,
where the Lorentz factor is . The determinant of the transformation matrix is +1 and its trace is . The inverse of the transformation is given by reversing the sign of . The quantity is invariant under the transformation.
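A hedged sketch of the boost matrix for an arbitrary velocity direction, written for coordinates ordered as (ct, x, y, z); the construction below is the standard textbook block form, and the function name and test values are purely illustrative:

```python
import numpy as np

def boost_matrix(beta_vec):
    """4x4 Lorentz boost for velocity v = c*beta_vec, acting on (ct, x, y, z)."""
    beta_vec = np.asarray(beta_vec, dtype=float)
    b2 = beta_vec @ beta_vec
    if not 0.0 < b2 < 1.0:
        raise ValueError("need 0 < |beta| < 1")
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta_vec
    L[1:, 1:] += (gamma - 1.0) * np.outer(beta_vec, beta_vec) / b2
    return L

L = boost_matrix([0.3, 0.4, 0.0])
print(np.linalg.det(L))     # -> 1.0 (proper transformation)
print(np.trace(L))          # -> 2*(1 + gamma)
# The inverse boost is the boost with the velocity reversed:
print(np.allclose(np.linalg.inv(L), boost_matrix([-0.3, -0.4, 0.0])))
```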
The Lorentz transformations can also be derived in a way that resembles circular rotations in 3d space using the hyperbolic functions. For the boost in the direction, the results are
where (lowercase zeta) is a parameter called rapidity (many other symbols are used, including ). Given the strong resemblance to rotations of spatial coordinates in 3d space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4d Minkowski space. The parameter is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram.
The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking or in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying , which parametrizes the curves according to the identity
Conversely the and axes can be constructed for varying coordinates but constant . The definition
provides the link between a constant value of rapidity and the slope of the axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor
Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between , , and are
Taking the inverse hyperbolic tangent gives the rapidity
Since , it follows . From the relation between and , positive rapidity is motion along the positive directions of the axes, zero rapidity is no relative motion, while negative rapidity is relative motion along the negative directions of the axes.
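A small numerical check of the rapidity parametrization, assuming the standard identifications tanh(zeta) = beta, cosh(zeta) = gamma and sinh(zeta) = gamma*beta (which is what the hyperbolic form of the boost amounts to):

```python
import math

beta = 0.6                        # v/c (illustrative value)
zeta = math.atanh(beta)           # rapidity: tanh(zeta) = beta
gamma = 1.0 / math.sqrt(1 - beta**2)

print(math.cosh(zeta), gamma)             # cosh(zeta) equals the Lorentz factor
print(math.sinh(zeta), gamma * beta)      # sinh(zeta) equals gamma*beta
print(math.cosh(zeta)**2 - math.sinh(zeta)**2)  # -> 1, the hyperbolic identity

# Boost of (ct, x) written as a hyperbolic rotation:
ct, x = 2.0, 1.0
ct_p = math.cosh(zeta) * ct - math.sinh(zeta) * x
x_p  = -math.sinh(zeta) * ct + math.cosh(zeta) * x
# Same result as the gamma/beta form: ct' = gamma*(ct - beta*x), x' = gamma*(x - beta*ct)
print(ct_p, gamma * (ct - beta * x))
print(x_p,  gamma * (x - beta * ct))
```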
The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity since this is equivalent to negating the relative velocity. Therefore,
The inverse transformations can be similarly visualized by considering the cases when and .
So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences;
with inverse relations
where (uppercase delta) indicates a difference of quantities; e.g., for two values of coordinates, and so on.
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons:
in calculations and experiments, it is lengths between two points or time intervals that are measured or of interest (e.g., the length of a moving vehicle, or time duration it takes to travel from one place to another),
the transformations of velocity can be readily derived by making the difference infinitesimally small and dividing the equations, and the process repeated for the transformation of acceleration,
if the coordinate systems are never coincident (i.e., not in standard configuration), and if both observers can agree on an event in and in , then they can use that event as the origin, and the spacetime coordinate differences are the differences between their coordinates and this origin, e.g., , , etc.
Physical implications
A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in the equation for a pulse of light along the direction is , then in the Lorentz transformations give , and vice versa, for any .
For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation:
in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance".
Three counterintuitive, but correct, predictions of the transformations are:
Relativity of simultaneity
Suppose two events occur along the x axis simultaneously () in , but separated by a nonzero displacement . Then in , we find that , so the events are no longer simultaneous according to a moving observer.
Time dilation
Suppose there is a clock at rest in . If a time interval is measured at the same point in that frame, so that , then the transformations give this interval in by . Conversely, suppose there is a clock at rest in . If an interval is measured at the same point in that frame, so that , then the transformations give this interval in F by . Either way, each observer measures the time interval between ticks of a moving clock to be longer by a factor than the time interval between ticks of his own clock.
Length contraction
Suppose there is a rod at rest in aligned along the x axis, with length . In , the rod moves with velocity , so its length must be measured by taking two simultaneous () measurements at opposite ends. Under these conditions, the inverse Lorentz transform shows that . In the two measurements are no longer simultaneous, but this does not matter because the rod is at rest in . So each observer measures the distance between the end points of a moving rod to be shorter by a factor than the end points of an identical rod at rest in his own frame. Length contraction affects any geometric quantity related to lengths, so from the perspective of a moving observer, areas and volumes will also appear to shrink along the direction of motion.
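The three effects above can be checked numerically from the boost formulas; a minimal sketch, working in units where c = 1 and using an illustrative relative speed of 0.6c (so gamma = 1.25):

```python
import math

c = 1.0                 # work in units where c = 1
v = 0.6                 # relative speed between the frames (illustrative)
gamma = 1.0 / math.sqrt(1 - v**2)   # = 1.25

# Relativity of simultaneity: two events simultaneous in F (dt = 0), separated by dx = 1
dt, dx = 0.0, 1.0
dt_prime = gamma * (dt - v * dx / c**2)
print(dt_prime)          # nonzero: the events are not simultaneous in F'

# Time dilation: a clock at rest in one frame ticks a proper interval of 1
print(gamma * 1.0)       # the interval measured from the other frame is longer by gamma

# Length contraction: a rod of proper length 1 at rest in one frame
print(1.0 / gamma)       # the length measured from the other frame is shorter by 1/gamma
```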
Vector transformations
The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector with a magnitude that cannot equal or exceed , so that .
Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector as measured in , and as measured in , each into components perpendicular (⊥) and parallel ( ‖ ) to ,
then the transformations are
where is the dot product. The Lorentz factor retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition with magnitude is also used by some authors.
Introducing a unit vector in the direction of relative motion, the relative velocity is with magnitude and direction , and vector projection and rejection give respectively
Accumulating the results gives the full transformations,
The projection and rejection also applies to . For the inverse transformations, exchange and to switch observed coordinates, and negate the relative velocity (or simply the unit vector since the magnitude is always positive) to obtain
The unit vector has the advantage of simplifying equations for a single boost, allows either or to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing and . It is not convenient for multiple boosts.
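A sketch of the vector form of the boost using the parallel/perpendicular split described above; the function name is illustrative and the formulas assume the standard results t' = gamma*(t - v.r/c^2), r'_parallel = gamma*(r_parallel - v*t), r'_perpendicular = r_perpendicular:

```python
import numpy as np

def boost_vector_form(t, r, v, c=1.0):
    """Boost of an event (t, r) by velocity vector v, using the split of r
    into components parallel and perpendicular to v."""
    v = np.asarray(v, float); r = np.asarray(r, float)
    n = v / np.linalg.norm(v)              # unit vector along the relative motion
    gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
    r_par  = (r @ n) * n                   # projection onto n
    r_perp = r - r_par                     # rejection: unchanged by the boost
    t_p = gamma * (t - (v @ r) / c**2)
    r_p = r_perp + gamma * (r_par - v * t)
    return t_p, r_p

print(boost_vector_form(2.0, [1.0, 1.0, 0.0], [0.6, 0.0, 0.0]))
# With v along x this reduces to the standard-configuration boost given earlier.
```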
The vectorial relation between relative velocity and rapidity is
and the "rapidity vector" can be defined as
each of which serves as a useful abbreviation in some contexts. The magnitude of is the absolute value of the rapidity scalar confined to , which agrees with the range .
Transformation of velocities
Defining the coordinate velocities and Lorentz factor by
taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to
The velocities and are the velocity of some massive object. They can also be for a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity relative to F, or equivalently with velocity relative to F′, in turn F′ moves with velocity relative to F. The inverse transformations can be obtained in a similar way, or as with position coordinates exchange and , and change to .
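A hedged sketch of the velocity transformation obtained this way, using the standard split into components parallel and perpendicular to the relative velocity; the function name and test values are illustrative:

```python
import numpy as np

def velocity_transform(u, v, c=1.0):
    """Velocity u measured in F, as seen from frame F' moving with velocity v w.r.t. F."""
    u = np.asarray(u, float); v = np.asarray(v, float)
    gamma_v = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
    n = v / np.linalg.norm(v)
    u_par = (u @ n) * n
    u_perp = u - u_par
    denom = 1.0 - (u @ v) / c**2
    return (u_par - v) / denom + u_perp / (gamma_v * denom)

# Collinear case: 0.6c followed by 0.6c combine to about 0.882c, not 1.2c.
print(velocity_transform([0.6, 0, 0], [-0.6, 0, 0]))
```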
The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect.
The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential.
Transformation of other quantities
In general, given four quantities and and their Lorentz-boosted counterparts and , a relation of the form
implies the quantities transform under Lorentz transformations similar to the transformation of spacetime coordinates;
The decomposition of (and ) into components perpendicular and parallel to is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange and to switch observed quantities, and reverse the direction of relative motion by the substitution ).
The quantities collectively make up a four-vector, where is the "timelike component", and the "spacelike component". Examples of and are the following:
For a given object (e.g., particle, fluid, field, material), if or correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity , however a boosted observer will perceive a nonzero timelike component and an altered spin.
Not all quantities are invariant in the form as shown above, for example orbital angular momentum does not have a timelike quantity, and neither does the electric field nor the magnetic field . The definition of angular momentum is , and in a boosted frame the altered angular momentum is . Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out transforms with another vector quantity related to boosts, see relativistic angular momentum for details. For the case of the and fields, the transformations cannot be obtained as directly using vector algebra. The Lorentz force is the definition of these fields, and in it is while in it is . A method of deriving the EM field transformations in an efficient way which also illustrates the unity of the electromagnetic field uses tensor algebra, given below.
Mathematical formulation
Throughout, italic non-bold capital letters are 4×4 matrices, while non-italic bold letters are 3×3 matrices.
Homogeneous Lorentz group
Writing the coordinates in column vectors and the Minkowski metric as a square matrix
the spacetime interval takes the form (superscript denotes transpose)
and is invariant under a Lorentz transformation
where is a square matrix which can depend on parameters.
The set of all Lorentz transformations in this article is denoted . This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication.
From the invariance of the spacetime interval it follows
and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule gives immediately
Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form,
carrying out the block matrix multiplications obtains general conditions on to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results
is useful; always so it follows that
The negative inequality may be unexpected, because multiplies the time coordinate and this has an effect on time symmetry. If the positive inequality holds, then is the Lorentz factor.
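The defining matrix condition and determinant constraint discussed above can be verified numerically; a minimal sketch, assuming one common choice of metric matrix (the check is insensitive to which of the two signature conventions is used):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # one common form of the Minkowski metric matrix

def is_lorentz(L, tol=1e-12):
    """Check the defining condition L^T eta L = eta."""
    return np.allclose(L.T @ eta @ L, eta, atol=tol)

# A boost along x with beta = 0.6 (gamma = 1.25), acting on (ct, x, y, z):
g, b = 1.25, 0.6
B = np.array([[ g,   -g*b, 0, 0],
              [-g*b,  g,   0, 0],
              [ 0,    0,   1, 0],
              [ 0,    0,   0, 1]])
print(is_lorentz(B), np.linalg.det(B))   # True, determinant +1
```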
The determinant and inequality provide four ways to classify Lorentz Transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets.
where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets
A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations and from a particular subgroup, the composite Lorentz transformations and must be in the same subgroup as and . This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets , , , and all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. , , ) do not form subgroups.
Proper transformations
If a Lorentz covariant 4-vector is measured in one inertial frame with result , and the same measurement made in another inertial frame (with the same orientation and origin) gives result , the two results will be related by
where the boost matrix represents the rotation-free Lorentz transformation between the unprimed and primed frames and is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by
where is the magnitude of the velocity and is the Lorentz factor. This formula represents a passive transformation, as it describes how the coordinates of the measured quantity changes from the unprimed frame to the primed frame. The active transformation is given by .
If a frame is boosted with velocity relative to frame , and another frame is boosted with velocity relative to , the separate boosts are
and the composition of the two boosts connects the coordinates in and ,
Successive transformations act on the left. If and are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: . This composite transformation happens to be another boost, , where is collinear with and .
If and are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: and are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of or . The and are composite velocities, while and are rotation parameters (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply
where is a 3d rotation matrix, which rotates any 3d vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect and (or and ) to the original boost parameters and . In a composition of boosts, the matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for .
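The non-commutativity of boosts along different directions, and the fact that their composition is no longer a pure (symmetric) boost even though it still preserves the interval, can be illustrated numerically; a sketch with illustrative speeds of 0.5c along x and y:

```python
import numpy as np

def boost(beta_vec):
    """4x4 boost matrix for velocity v = c*beta_vec, acting on (ct, x, y, z)."""
    beta_vec = np.asarray(beta_vec, float)
    b2 = beta_vec @ beta_vec
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta_vec
    L[1:, 1:] += (gamma - 1.0) * np.outer(beta_vec, beta_vec) / b2
    return L

Bx = boost([0.5, 0.0, 0.0])
By = boost([0.0, 0.5, 0.0])

print(np.allclose(Bx @ By, By @ Bx))   # False: boosts along different axes do not commute
C = Bx @ By
print(np.allclose(C, C.T))             # False: the composite is not symmetric, so not a pure boost
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
print(np.allclose(C.T @ eta @ C, eta)) # True: it is still a Lorentz transformation
```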
In this article the axis-angle representation is used for . The rotation is about an axis in the direction of a unit vector , through angle (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector"
will serve as a useful abbreviation.
Spatial rotations alone are also Lorentz transformations since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include:
inverses: (relative motion in the opposite direction), and (rotation in the opposite sense about the same axis)
identity transformation for no relative motion/rotation:
unit determinant: . This property makes them proper transformations.
matrix symmetry: is symmetric (equals transpose), while is nonsymmetric but orthogonal (transpose equals inverse, ).
The most general proper Lorentz transformation includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, and . An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes and .
The Lie group SO+(3,1)
The set of transformations
with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension).
For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ,
where the higher order terms not shown are negligible because is small, and is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at ,
For now, is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained
where the limit definition of the exponential has been used (see also characterizations of the exponential function). More generally
The axis-angle vector and rapidity vector are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are and , each vectors of matrices with the explicit forms
These are all defined in an analogous way to above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: are the rotation generators which correspond to angular momentum, and are the boost generators which correspond to the motion of the system in spacetime. The derivative of any smooth curve with in the group depending on some group parameter with respect to that group parameter, evaluated at , serves as a definition of a corresponding group generator , and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential as the exponential will always map smoothly back into the group via for all ; this curve will yield again when differentiated at .
Expanding the exponentials in their Taylor series obtains
which compactly reproduce the boost and rotation matrices as given in the previous section.
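A minimal numerical sketch of recovering finite transformations from the generators by the matrix exponential; the generator matrices below use one common sign convention (the article's conventional minus signs may place the sign elsewhere), and the numbers are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Generator of x-boosts and of z-rotations acting on (ct, x, y, z),
# in one common sign convention.
Kx = np.array([[0., -1., 0., 0.],
               [-1., 0., 0., 0.],
               [0.,  0., 0., 0.],
               [0.,  0., 0., 0.]])
Jz = np.array([[0., 0.,  0., 0.],
               [0., 0., -1., 0.],
               [0., 1.,  0., 0.],
               [0., 0.,  0., 0.]])

zeta = np.arctanh(0.6)                 # rapidity for beta = 0.6
B = expm(zeta * Kx)                    # finite boost built from the generator
gamma = 1.0 / np.sqrt(1 - 0.6**2)
print(B[0, 0], gamma)                  # cosh(zeta) equals gamma
print(B[0, 1], -gamma * 0.6)           # -sinh(zeta) equals -gamma*beta

R = expm((np.pi / 2) * Jz)             # 90 degree rotation about z
print(np.round(R, 12))
```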
It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product
is commutative because only linear terms are required (products like and count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential
The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular,
because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators and ), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies.
The Lie algebra so(3,1)
Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators
together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers. The generators form a basis set of V, and the components of the axis-angle and rapidity vectors, , are the coordinates of a Lorentz generator with respect to this basis.
Three of the commutation relations of the Lorentz generators are
where the bracket is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat).
These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra . In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternatization, and the Jacobi identity. Here the operation [ , ] is the commutator which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers.
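The commutation relations can be checked directly on the 4×4 generator matrices; a sketch using the same sign convention as the matrix-exponential example above, in which rotation generators close among themselves while the commutator of two boost generators is (minus) a rotation generator:

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

# Rotation and boost generators on (ct, x, y, z), same convention as above.
Z = np.zeros((4, 4))
Jx, Jy, Jz, Kx, Ky, Kz = (Z.copy() for _ in range(6))
Jx[2, 3], Jx[3, 2] = -1, 1
Jy[3, 1], Jy[1, 3] = -1, 1
Jz[1, 2], Jz[2, 1] = -1, 1
Kx[0, 1] = Kx[1, 0] = -1
Ky[0, 2] = Ky[2, 0] = -1
Kz[0, 3] = Kz[3, 0] = -1

print(np.allclose(comm(Jx, Jy), Jz))    # rotations close among themselves
print(np.allclose(comm(Jz, Kx), Ky))    # rotation with boost gives a boost
print(np.allclose(comm(Kx, Ky), -Jz))   # two boosts commute into a rotation (Wigner/Thomas rotation)
```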
Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense.
The exponential map from the Lie algebra to the Lie group,
provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra.
Improper transformations
Lorentz transformations also include parity inversion
which negates all the spatial coordinates only, and time reversal
which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here is the 3d identity matrix. These are both symmetric, they are their own inverses (see involution (mathematics)), and each have determinant −1. This latter property makes them improper transformations.
If is a proper orthochronous Lorentz transformation, then is improper antichronous, is improper orthochronous, and is proper antichronous.
Inhomogeneous Lorentz group
Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form
where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt further in this article.
Tensor formulation
Contravariant vectors
Writing the general matrix transformation of coordinates as the matrix equation
allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g., tensors or spinors of any order in 4d spacetime, to be defined. In the corresponding tensor index notation, the above matrix expression is
where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index. The second index corresponds to the column index.
The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If is any four-vector, then in tensor index notation
Alternatively, one writes in which the primed indices denote the indices of A in the primed frame. For a general -component object one may write where is the appropriate representation of the Lorentz group, an matrix for every . In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from to . E.g., if is a bispinor, then the indices are called Dirac indices.
Covariant vectors
There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g.,
where is the metric tensor. (The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by
where, when viewed as matrices, is the inverse of . As it happens, . This is referred to as raising an index. To transform a covariant vector , first raise its index, then transform it according to the same rule as for contravariant -vectors, then finally lower the index;
But
That is, it is the -component of the inverse Lorentz transformation. One defines (as a matter of notation),
and may in this notation write
Now for a subtlety. The implied summation on the right hand side of
is running over a row index of the matrix representing . Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of acting on the column vector . That is, in pure matrix notation,
This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace with .
Tensors
If and are linear operators on vector spaces and , then a linear operator may be defined on the tensor product of and , denoted according to
From this it is immediately clear that if and are four-vectors in , then transforms as
The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor on component form, or rather, it just renames the tensor .
These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity . It is given by
where is defined above. This form can generally be reduced to the form for general -component objects given above with a single matrix () operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor.
Transformation of the electromagnetic field
Lorentz transformations can also be used to illustrate that the magnetic field and electric field are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment.
An observer measures a charge at rest in frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer does not observe any magnetic field.
The other observer in frame F′ moves at velocity relative to F and the charge. This observer sees a different electric field because the charge moves at velocity in their rest frame. The motion of the charge corresponds to an electric current, and thus the observer in frame F′ also sees a magnetic field.
The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector.
The electromagnetic field strength tensor is given by
in SI units. In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field and the magnetic induction have the same units making the appearance of the electromagnetic field tensor more natural. Consider a Lorentz boost in the -direction. It is given by
where the field tensor is displayed side by side for easiest possible reference in the manipulations below.
The general transformation law becomes
For the magnetic field one obtains
For the electric field results
Here, is used. These results can be summarized by
and are independent of the metric signature. For SI units, substitute . Some authors refer to this last form as the view as opposed to the geometric view represented by the tensor expression
and make a strong point of the ease with which results that are difficult to achieve using the view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime as opposed to two interdependent, but separate, 3-vector fields in space and time. The fields (alone) and (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations and that immediately yield . One should note that the primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is
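For concreteness, the standard SI-unit transformation rules for the fields under a boost (parallel components unchanged, perpendicular components mixed and scaled by gamma) can be sketched as follows; these are the usual textbook formulas rather than a transcription of the expressions above, and the function name and field values are illustrative:

```python
import numpy as np

def em_boost(E, B, v, c=299_792_458.0):
    """Standard SI-unit transformation of the E and B fields under a boost with
    velocity vector v: components parallel to v are unchanged, perpendicular
    components mix and are scaled by gamma."""
    E = np.asarray(E, float); B = np.asarray(B, float); v = np.asarray(v, float)
    n = v / np.linalg.norm(v)
    gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
    E_par, B_par = (E @ n) * n, (B @ n) * n
    E_p = E_par + gamma * (E - E_par + np.cross(v, B))
    B_p = B_par + gamma * (B - B_par - np.cross(v, E) / c**2)
    return E_p, B_p

# A purely electric field in one frame acquires a magnetic part in the boosted frame:
c = 299_792_458.0
E_p, B_p = em_boost(E=[0.0, 1.0, 0.0], B=[0.0, 0.0, 0.0], v=[0.6 * c, 0.0, 0.0])
print(E_p)   # perpendicular E scaled by gamma = 1.25
print(B_p)   # small nonzero z-component, -gamma*v*E_y/c**2
```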
Length contraction has an effect on charge density and current density , and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors,
or, in the simpler geometric view,
Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector.
The Maxwell equations are invariant under Lorentz transformations.
Spinors
The equations above hold unmodified for any representation of the Lorentz group, including the bispinor representation. In that case one simply replaces all occurrences of by the bispinor representation ,
The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons.
Transformation of general fields
A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule
where is Wigner's little group and is the representation of .
| Physical sciences | Theory of relativity | null |
18406 | https://en.wikipedia.org/wiki/Luminiferous%20aether | Luminiferous aether | Luminiferous aether or ether (luminiferous meaning 'light-bearing') was the postulated medium for the propagation of light. It was invoked to explain the ability of the apparently wave-based light to propagate through empty space (a vacuum), something that waves should not be able to do. The assumption of a spatial plenum (space completely filled with matter) of luminiferous aether, rather than a spatial vacuum, provided the theoretical medium that was required by wave theories of light.
The aether hypothesis was the topic of considerable debate throughout its history, as it required the existence of an invisible and infinite material with no interaction with physical objects. As the nature of light was explored, especially in the 19th century, the physical qualities required of an aether became increasingly contradictory. By the late 19th century, the existence of the aether was being questioned, although there was no physical theory to replace it.
The negative outcome of the Michelson–Morley experiment (1887) suggested that the aether did not exist, a finding that was confirmed in subsequent experiments through the 1920s. This led to considerable theoretical work to explain the propagation of light without an aether. A major breakthrough was the special theory of relativity, which could explain why the experiment failed to see aether, but was more broadly interpreted to suggest that it was not needed. The Michelson–Morley experiment, along with the blackbody radiator and photoelectric effect, was a key experiment in the development of modern physics, which includes both relativity and quantum theory, the latter of which explains the particle-like nature of light.
The history of light and aether
Particles vs. waves
In the 17th century, Robert Boyle was a proponent of an aether hypothesis. According to Boyle, the aether consists of subtle particles, one sort of which explains the absence of vacuum and the mechanical interactions between bodies, and the other sort of which explains phenomena such as magnetism (and possibly gravity) that are, otherwise, inexplicable on the basis of purely mechanical interactions of macroscopic bodies, "though in the ether of the ancients there was nothing taken notice of but a diffused and very subtle substance; yet we are at present content to allow that there is always in the air a swarm of streams moving in a determinate course between the north pole and the south".
Christiaan Huygens's Treatise on Light (1690) hypothesized that light is a wave propagating through an aether. He and Isaac Newton could only envision light waves as being longitudinal, propagating like sound and other mechanical waves in fluids. However, longitudinal waves necessarily have only one form for a given propagation direction, rather than two polarizations like a transverse wave. Thus, longitudinal waves can not explain birefringence, in which two polarizations of light are refracted differently by a crystal. In addition, Newton rejected light as waves in a medium because such a medium would have to extend everywhere in space, and would thereby "disturb and retard the Motions of those great Bodies" (the planets and comets) and thus "as it is of no use, and hinders the Operation of Nature, and makes her languish, so there is no evidence for its Existence, and therefore it ought to be rejected".
Isaac Newton contended that light is made up of numerous small particles. This can explain such features as light's ability to travel in straight lines and reflect off surfaces. Newton imagined light particles as non-spherical "corpuscles", with different "sides" that give rise to birefringence. But the particle theory of light can not satisfactorily explain refraction and diffraction. To explain refraction, Newton's Third Book of Opticks (1st ed. 1704, 4th ed. 1730) postulated an "aethereal medium" transmitting vibrations faster than light, by which light, when overtaken, is put into "Fits of easy Reflexion and easy Transmission", which caused refraction and diffraction. Newton believed that these vibrations were related to heat radiation:
Is not the Heat of the warm Room convey'd through the vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum? And is not this Medium the same with that Medium by which Light is refracted and reflected, and by whose Vibrations Light communicates Heat to Bodies, and is put into Fits of easy Reflexion and easy Transmission?
In contrast to the modern understanding that heat radiation and light are both electromagnetic radiation, Newton viewed heat and light as two different phenomena. He believed heat vibrations to be excited "when a Ray of Light falls upon the Surface of any pellucid Body". He wrote, "I do not know what this Aether is", but that if it consists of particles then they must be exceedingly smaller than those of Air, or even than those of Light: The exceeding smallness of its Particles may contribute to the greatness of the force by which those Particles may recede from one another, and thereby make that Medium exceedingly more rare and elastic than Air, and by consequence exceedingly less able to resist the motions of Projectiles, and exceedingly more able to press upon gross Bodies, by endeavoring to expand itself.
Bradley suggests particles
In 1720, James Bradley carried out a series of experiments attempting to measure stellar parallax by taking measurements of stars at different times of the year. As the Earth moves around the Sun, the apparent angle to a given distant spot changes. By measuring those angles the distance to the star can be calculated based on the known orbital circumference of the Earth around the Sun. He failed to detect any parallax, thereby placing a lower limit on the distance to stars.
During these experiments, Bradley also discovered a related effect; the apparent positions of the stars did change over the year, but not as expected. Instead of the apparent angle being maximized when the Earth was at either end of its orbit with respect to the star, the angle was maximized when the Earth was at its fastest sideways velocity with respect to the star. This effect is now known as stellar aberration.
Bradley explained this effect in the context of Newton's corpuscular theory of light, by showing that the aberration angle was given by simple vector addition of the Earth's orbital velocity and the velocity of the corpuscles of light, just as vertically falling raindrops strike a moving object at an angle. Knowing the Earth's velocity and the aberration angle enabled him to estimate the speed of light.
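A quick consistency check of this picture: with the Earth's orbital speed and the modern value of the speed of light (both values approximate, and not taken from the text), the simple velocity-addition estimate reproduces the observed aberration constant of roughly 20 arcseconds:

```python
import math

# Corpuscular estimate of stellar aberration: tan(theta) ~ v_earth / c
# for a star near the pole of the Earth's orbital motion.
v_earth = 29_800.0          # m/s, approximate orbital speed of the Earth
c = 299_792_458.0
theta = math.atan(v_earth / c)
print(math.degrees(theta) * 3600)   # about 20.5 arcseconds
```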
Explaining stellar aberration in the context of an aether-based theory of light was regarded as more problematic. As the aberration relied on relative velocities, and the measured velocity was dependent on the motion of the Earth, the aether had to remain stationary with respect to the star as the Earth moved through it. This meant that the Earth could travel through the aether, a physical medium, with no apparent effect – precisely the problem that led Newton to reject a wave model in the first place.
Wave-theory triumphs
A century later, Thomas Young and Augustin-Jean Fresnel revived the wave theory of light when they pointed out that light could be a transverse wave rather than a longitudinal wave; the polarization of a transverse wave (like Newton's "sides" of light) could explain birefringence, and in the wake of a series of experiments on diffraction the particle model of Newton was finally abandoned. Physicists assumed, moreover, that, like mechanical waves, light waves required a medium for propagation, and thus required Huygens's idea of an aether "gas" permeating all space.
However, a transverse wave apparently required the propagating medium to behave as a solid, as opposed to a fluid. The idea of a solid that did not interact with other matter seemed a bit odd, and Augustin-Louis Cauchy suggested that perhaps there was some sort of "dragging", or "entrainment", but this made the aberration measurements difficult to understand. He also suggested that the absence of longitudinal waves suggested that the aether had negative compressibility. George Green pointed out that such a fluid would be unstable. George Gabriel Stokes became a champion of the entrainment interpretation, developing a model in which the aether might, like pine pitch, be dilatant (fluid at slow speeds and rigid at fast speeds). Thus the Earth could move through it fairly freely, but it would be rigid enough to support light.
Electromagnetism
In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the numerical value of the ratio of the electrostatic unit of charge to the electromagnetic unit of charge. They found that the ratio between the electrostatic unit of charge and the electromagnetic unit of charge is the speed of light c. The following year, Gustav Kirchhoff wrote a paper in which he showed that the speed of a signal along an electric wire was equal to the speed of light. These are the first recorded historical links between the speed of light and electromagnetic phenomena.
James Clerk Maxwell began working on Michael Faraday's lines of force. In his 1861 paper On Physical Lines of Force he modelled these magnetic lines of force using a sea of molecular vortices that he considered to be partly made of aether and partly made of ordinary matter. He derived expressions for the dielectric constant and the magnetic permeability in terms of the transverse elasticity and the density of this elastic medium. He then equated the ratio of the dielectric constant to the magnetic permeability with a suitably adapted version of Weber and Kohlrausch's result of 1856, and he substituted this result into Newton's equation for the speed of sound. On obtaining a value that was close to the speed of light as measured by Hippolyte Fizeau, Maxwell concluded that light consists in undulations of the same medium that is the cause of electric and magnetic phenomena.
Maxwell had, however, expressed some uncertainties surrounding the precise nature of his molecular vortices and so he began to embark on a purely dynamical approach to the problem. He wrote another paper in 1864, entitled "A Dynamical Theory of the Electromagnetic Field", in which the details of the luminiferous medium were less explicit. Although Maxwell did not explicitly mention the sea of molecular vortices, his derivation of Ampère's circuital law was carried over from the 1861 paper and he used a dynamical approach involving rotational motion within the electromagnetic field which he likened to the action of flywheels. Using this approach to justify the electromotive force equation (the precursor of the Lorentz force equation), he derived a wave equation from a set of eight equations which appeared in the paper and which included the electromotive force equation and Ampère's circuital law. Maxwell once again used the experimental results of Weber and Kohlrausch to show that this wave equation represented an electromagnetic wave that propagates at the speed of light, hence supporting the view that light is a form of electromagnetic radiation.
In 1887–1889, Heinrich Hertz experimentally demonstrated that electromagnetic waves are identical to light waves. This unification of electromagnetic waves and optics indicated that there was a single luminiferous aether instead of many different kinds of aether media.
The apparent need for a propagation medium for such Hertzian waves (later called radio waves) can be seen by the fact that they consist of orthogonal electric (E) and magnetic (B or H) waves. The E waves consist of undulating dipolar electric fields, and all such dipoles appeared to require separated and opposite electric charges. Electric charge is an inextricable property of matter, so it appeared that some form of matter was required to provide the alternating current that would seem to have to exist at any point along the propagation path of the wave. Propagation of waves in a true vacuum would imply the existence of electric fields without associated electric charge, or of electric charge without associated matter. Albeit compatible with Maxwell's equations, electromagnetic induction of electric fields could not be demonstrated in vacuum, because all methods of detecting electric fields required electrically charged matter.
In addition, Maxwell's equations required that all electromagnetic waves in vacuum propagate at a fixed speed, c. As this can only occur in one reference frame in Newtonian physics (see Galilean relativity), the aether was hypothesized as the absolute and unique frame of reference in which Maxwell's equations hold. That is, the aether must be "still" universally, otherwise c would vary along with any variations that might occur in its supportive medium. Maxwell himself proposed several mechanical models of aether based on wheels and gears, and George Francis FitzGerald even constructed a working model of one of them. These models had to agree with the fact that the electromagnetic waves are transverse but never longitudinal.
Problems
By this point the mechanical qualities of the aether had become more and more magical: it had to be a fluid in order to fill space, but one that was millions of times more rigid than steel in order to support the high frequencies of light waves. It also had to be massless and without viscosity, otherwise it would visibly affect the orbits of planets. Additionally it appeared it had to be completely transparent, non-dispersive, incompressible, and continuous at a very small scale. Maxwell wrote in Encyclopædia Britannica:
Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, until all space had been filled three or four times over with aethers. ... The only aether which has survived is that which was invented by Huygens to explain the propagation of light.
By the early 20th century, aether theory was in trouble. A series of increasingly complex experiments had been carried out in the late 19th century to try to detect the motion of the Earth through the aether, and had failed to do so. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Lorentz and FitzGerald offered within the framework of Lorentz ether theory a more elegant solution to how the motion of an absolute aether could be undetectable (length contraction), but if their equations were correct, the new special theory of relativity (1905) could generate the same mathematics without referring to an aether at all. Aether fell to Occam's Razor.
Relative motion between the Earth and aether
Aether drag
The two most important models, aimed at describing the relative motion of the Earth and aether, were Augustin-Jean Fresnel's (1818) model of the (nearly) stationary aether, including a partial aether drag determined by Fresnel's dragging coefficient, and George Gabriel Stokes' (1844) model of complete aether drag. The latter theory was not considered correct, since it was not compatible with the aberration of light, and the auxiliary hypotheses developed to explain this problem were not convincing. Also, subsequent experiments such as the Sagnac effect (1913) showed that this model is untenable. However, the most important experiment supporting Fresnel's theory was Fizeau's 1851 experimental confirmation of Fresnel's 1818 prediction that a medium with refractive index n moving with a velocity v would increase the speed of light travelling through the medium in the same direction as v from c/n to:
c/n + v(1 - 1/n²)
That is, movement adds only a fraction of the medium's velocity to the light (predicted by Fresnel in order to make Snell's law work in all frames of reference, consistent with stellar aberration). This was initially interpreted to mean that the medium drags the aether along, with a portion of the medium's velocity, but that understanding became very problematic after Wilhelm Veltmann demonstrated that the index n in Fresnel's formula depended upon the wavelength of light, so that the aether could not be moving at a wavelength-independent speed. This implied that there must be a separate aether for each of the infinitely many frequencies.
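A small numerical sketch of Fresnel's partial-drag prediction; the refractive index and flow speed below are illustrative values, not figures quoted in the text:

```python
# Fizeau's result: light in moving water is dragged by only the fraction
# (1 - 1/n^2) of the water's speed, Fresnel's dragging coefficient.
c = 299_792_458.0
n = 1.333                    # refractive index of water (approximate)
v = 7.0                      # m/s, an illustrative flow speed

full_drag = c / n + v                      # naive "complete drag" expectation
fresnel   = c / n + v * (1 - 1 / n**2)     # Fresnel/Fizeau prediction
print(full_drag - c / n, fresnel - c / n)  # added speed: v versus about 0.44*v
```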
Negative aether-drift experiments
The key difficulty with Fresnel's aether hypothesis arose from the juxtaposition of the two well-established theories of Newtonian dynamics and Maxwell's electromagnetism. Under a Galilean transformation the equations of Newtonian dynamics are invariant, whereas those of electromagnetism are not. Basically this means that while physics should remain the same in non-accelerated experiments, light would not follow the same rules because it is travelling in the universal "aether frame". Some effect caused by this difference should be detectable.
A simple example concerns the model on which aether was originally built: sound. The speed of propagation for mechanical waves, the speed of sound, is defined by the mechanical properties of the medium. Sound travels 4.3 times faster in water than in air. This explains why a person hearing an explosion underwater and quickly surfacing can hear it again as the slower travelling sound arrives through the air. Similarly, a traveller on an airliner can still carry on a conversation with another traveller because the sound of words is travelling along with the air inside the aircraft. This effect is basic to all Newtonian dynamics, which says that everything from sound to the trajectory of a thrown baseball should all remain the same in the aircraft flying (at least at a constant speed) as if still sitting on the ground. This is the basis of the Galilean transformation, and the concept of frame of reference.
But the same was not supposed to be true for light, since Maxwell's mathematics demanded a single universal speed for the propagation of light, based, not on local conditions, but on two measured properties, the permittivity and permeability of free space, that were assumed to be the same throughout the universe. If these numbers did change, there should be noticeable effects in the sky; stars in different directions would have different colours, for instance.
Thus at any point there should be one special coordinate system, "at rest relative to the aether". Maxwell noted in the late 1870s that detecting motion relative to this aether should be easy enough—light travelling along with the motion of the Earth would have a different speed than light travelling backward, as they would both be moving against the unmoving aether. Even if the aether had an overall universal flow, changes in position during the day/night cycle, or over the span of seasons, should allow the drift to be detected.
First-order experiments
Although the aether is almost stationary according to Fresnel, his theory predicts a positive outcome of aether drift experiments only to second order in because Fresnel's dragging coefficient would cause a negative outcome of all optical experiments capable of measuring effects to first order in . This was confirmed by the following first-order experiments, all of which gave negative results. The following list is based on the description of Wilhelm Wien (1898), with changes and additional experiments according to the descriptions of Edmund Taylor Whittaker (1910) and Jakob Laub (1910):
The experiment of François Arago (1810), to confirm whether refraction, and thus the aberration of light, is influenced by Earth's motion. Similar experiments were conducted by George Biddell Airy (1871) by means of a telescope filled with water, and Éleuthère Mascart (1872).
The experiment of Fizeau (1860), to find whether the rotation of the polarization plane through glass columns is changed by Earth's motion. He obtained a positive result, but Lorentz showed that the results were contradictory. DeWitt Bristol Brace (1905) and Strasser (1907) repeated the experiment with improved accuracy, and obtained negative results.
The experiment of Martin Hoek (1868). This experiment is a more precise variation of the Fizeau experiment (1851). Two light rays were sent in opposite directions – one of them traverses a path filled with resting water, the other one follows a path through air. In agreement with Fresnel's dragging coefficient, he obtained a negative result.
The experiment of Wilhelm Klinkerfues (1870) investigated whether an influence of Earth's motion on the absorption line of sodium exists. He obtained a positive result, but this was shown to be an experimental error, because a repetition of the experiment by Haga (1901) gave a negative result.
The experiment of Ketteler (1872), in which two rays of an interferometer were sent in opposite directions through two mutually inclined tubes filled with water. No change of the interference fringes occurred. Later, Mascart (1872) showed that the interference fringes of polarized light in calcite remained uninfluenced as well.
The experiment of Éleuthère Mascart (1872) to find a change of rotation of the polarization plane in quartz. No change of rotation was found when the light rays had the direction of Earth's motion and then the opposite direction. Lord Rayleigh conducted similar experiments with improved accuracy, and obtained a negative result as well.
Besides those optical experiments, also electrodynamic first-order experiments were conducted, which should have led to positive results according to Fresnel. However, Hendrik Antoon Lorentz (1895) modified Fresnel's theory and showed that those experiments can be explained by a stationary aether as well:
The experiment of Wilhelm Röntgen (1888), to find whether a charged capacitor produces magnetic forces due to Earth's motion.
The experiment of Theodor des Coudres (1889), to find whether the inductive effect of two wire rolls upon a third one is influenced by the direction of Earth's motion. Lorentz showed that this effect is cancelled to first order by the electrostatic charge (produced by Earth's motion) upon the conductors.
The experiment of Königsberger (1905). The plates of a capacitor are located in the field of a strong electromagnet. Due to Earth's motion, the plates should have become charged. No such effect was observed.
The experiment of Frederick Thomas Trouton (1902). A capacitor was brought parallel to Earth's motion, and it was assumed that momentum is produced when the capacitor is charged. The negative result can be explained by Lorentz's theory, according to which the electromagnetic momentum compensates the momentum due to Earth's motion. Lorentz was also able to show that the sensitivity of the apparatus was much too low to observe such an effect.
Second-order experiments
While the first-order experiments could be explained by a modified stationary aether, more precise second-order experiments were expected to give positive results. However, no such results could be found.
The famous Michelson–Morley experiment compared the source light with itself after it had been sent in different directions, looking for changes in phase in a manner that could be measured with extremely high accuracy. Their goal was to determine the velocity of the Earth through the aether. The publication of their result in 1887, the null result, was the first clear demonstration that something was seriously wrong with the aether hypothesis (Michelson's first experiment in 1881 was not entirely conclusive). In this case the MM experiment yielded a shift of the fringe pattern of about 0.01 of a fringe, corresponding to a small velocity. However, this was incompatible with the expected aether wind effect due to the Earth's (seasonally varying) velocity, which would have required a shift of 0.4 of a fringe, and the error was small enough that the value may indeed have been zero. Therefore, the null hypothesis, the hypothesis that there was no aether wind, could not be rejected. More modern experiments have since reduced the possible value to a number very close to zero, about 10^−17.
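As a rough illustration of the size of the expected effect, the textbook estimate of the aether-wind fringe shift in a Michelson interferometer is about 2Lv²/(λc²), where L is the arm length, v the speed through the aether and λ the wavelength. The short Python sketch below is only illustrative; the values L ≈ 11 m, λ ≈ 500 nm and v ≈ 30 km/s (Earth's orbital speed) are assumed for the example and are not taken from the text. It reproduces the roughly 0.4-fringe shift mentioned above.

    # Estimate of the classical aether-wind fringe shift in a Michelson interferometer.
    # Assumed illustrative values (not from the article): L = 11 m, lambda = 500 nm, v = 30 km/s.
    L = 11.0             # effective optical path length of each arm, in metres
    wavelength = 500e-9  # wavelength of the light, in metres
    v = 30e3             # Earth's orbital speed, in metres per second
    c = 299_792_458.0    # speed of light, in metres per second

    # Second-order estimate: shift ~ 2 * L * (v/c)**2 / lambda, measured in fringes.
    expected_shift = 2 * L * (v / c) ** 2 / wavelength
    print(f"Expected fringe shift: {expected_shift:.2f} fringes")  # roughly 0.44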
A series of experiments using similar but increasingly sophisticated apparatuses all returned the null result as well. Conceptually different experiments that also attempted to detect the motion of the aether were the Trouton–Noble experiment (1903), whose objective was to detect torsion effects caused by electrostatic fields, and the experiments of Rayleigh and Brace (1902, 1904), to detect double refraction in various media. However, all of them obtained a null result, like Michelson–Morley (MM) previously did.
These "aether-wind" experiments led to a flurry of efforts to "save" aether by assigning to it ever more complex properties, and only a few scientists, like Emil Cohn or Alfred Bucherer, considered the possibility of the abandonment of the aether hypothesis. Of particular interest was the possibility of "aether entrainment" or "aether drag", which would lower the magnitude of the measurement, perhaps enough to explain the results of the Michelson–Morley experiment. However, as noted earlier, aether dragging already had problems of its own, notably aberration. In addition, the interference experiments of Lodge (1893, 1897) and Ludwig Zehnder (1895), aimed to show whether the aether is dragged by various, rotating masses, showed no aether drag. A more precise measurement was made in the Hammar experiment (1935), which ran a complete MM experiment with one of the "legs" placed between two massive lead blocks. If the aether was dragged by mass then this experiment would have been able to detect the drag caused by the lead, but again the null result was achieved. The theory was again modified, this time to suggest that the entrainment only worked for very large masses or those masses with large magnetic fields. This too was shown to be incorrect by the Michelson–Gale–Pearson experiment, which detected the Sagnac effect due to Earth's rotation (see Aether drag hypothesis).
Another completely different attempt to save "absolute" aether was made in the Lorentz–FitzGerald contraction hypothesis, which posited that everything was affected by travel through the aether. In this theory, the reason that the Michelson–Morley experiment "failed" was that the apparatus contracted in length in the direction of travel. That is, the light was being affected in the "natural" manner by its travel through the aether as predicted, but so was the apparatus itself, cancelling out any difference when measured. FitzGerald had inferred this hypothesis from a paper by Oliver Heaviside. Without reference to an aether, this physical interpretation of relativistic effects was shared by Kennedy and Thorndike in 1932, as they concluded that the interferometer's arm contracts and also the frequency of its light source "very nearly" varies in the way required by relativity.
Similarly, the Sagnac effect, observed by G. Sagnac in 1913, was immediately seen to be fully consistent with special relativity. In fact, the Michelson–Gale–Pearson experiment in 1925 was proposed specifically as a test to confirm the relativity theory, although it was also recognized that such tests, which merely measure absolute rotation, are also consistent with non-relativistic theories.
During the 1920s, the experiments pioneered by Michelson were repeated by Dayton Miller, who publicly proclaimed positive results on several occasions, although they were not large enough to be consistent with any known aether theory. However, other researchers were unable to duplicate Miller's claimed results. Over the years the experimental accuracy of such measurements has been raised by many orders of magnitude, and no trace of any violations of Lorentz invariance has been seen. (A later re-analysis of Miller's results concluded that he had underestimated the variations due to temperature.)
Since the Miller experiment and its unclear results there have been many more experimental attempts to detect the aether. Many experimenters have claimed positive results. These results have not gained much attention from mainstream science, since they contradict a large quantity of high-precision measurements, all the results of which were consistent with special relativity.
Lorentz aether theory
Between 1892 and 1904, Hendrik Lorentz developed an electron–aether theory, in which he avoided making assumptions about the aether. In his model the aether is completely motionless, and by that he meant that it could not be set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field cannot propagate faster than the speed of light. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that an observer moving relative to the aether makes the same observations as a resting observer, after a suitable change of variables. Lorentz noticed that it was necessary to change the space-time variables when changing frames and introduced concepts like physical length contraction (1892) to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. This resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900) and Lorentz (1899, 1904), whereby (it was noted by Larmor) the complete formulation of local time is accompanied by some sort of time dilation of electrons moving in the aether. As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice. Therefore, Lorentz's theorem is seen by modern authors as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion.
The work of Lorentz was mathematically perfected by Henri Poincaré, who formulated on many occasions the Principle of Relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 and 1904 he physically interpreted Lorentz's local time as the result of clock synchronization by light signals. In June and July 1905 he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. However, he used the notion of an aether as a perfectly undetectable medium and distinguished between apparent and real time, so most historians of science argue that he failed to invent special relativity.
End of aether
Special relativity
Aether theory was dealt another blow when the Galilean transformation and Newtonian dynamics were both modified by Albert Einstein's special theory of relativity, giving the mathematics of Lorentzian electrodynamics a new, "non-aether" context. Unlike most major shifts in scientific thought, special relativity was adopted by the scientific community remarkably quickly, consistent with Einstein's later comment that the laws of physics described by the Special Theory were "ripe for discovery" in 1905. Max Planck's early advocacy of the special theory, along with the elegant formulation given to it by Hermann Minkowski, contributed much to the rapid acceptance of special relativity among working scientists.
Einstein based his theory on Lorentz's earlier work. Instead of suggesting that the mechanical properties of objects changed with their constant-velocity motion through an undetectable aether, Einstein proposed to deduce the characteristics that any successful theory must possess in order to be consistent with the most basic and firmly established principles, independent of the existence of a hypothetical aether. He found that the Lorentz transformation must transcend its connection with Maxwell's equations, and must represent the fundamental relations between the space and time coordinates of inertial frames of reference. In this way he demonstrated that the laws of physics remained invariant as they had with the Galilean transformation, but that light was now invariant as well.
With the development of the special theory of relativity, the need to account for a single universal frame of reference had disappeared – and acceptance of the 19th-century theory of a luminiferous aether disappeared with it. For Einstein, the Lorentz transformation implied a conceptual change: that the concept of position in space or time was not absolute, but could differ depending on the observer's location and velocity.
Moreover, in another paper published the same month in 1905, Einstein made several observations on a then-thorny problem, the photoelectric effect. In this work he demonstrated that light can be considered as particles that have a "wave-like nature". Particles obviously do not need a medium to travel, and thus, neither did light. This was the first step that would lead to the full development of quantum mechanics, in which the wave-like nature and the particle-like nature of light are both considered as valid descriptions of light. A summary of Einstein's thinking about the aether hypothesis, relativity and light quanta may be found in his 1909 (originally German) lecture "The Development of Our Views on the Composition and Essence of Radiation".
Lorentz on his side continued to use the aether hypothesis. In his lectures of around 1911, he pointed out that what "the theory of relativity has to say ... can be carried out independently of what one thinks of the aether and the time". He commented that "whether there is an aether or not, electromagnetic fields certainly exist, and so also does the energy of the electrical oscillations" so that, "if we do not like the name of 'aether', we must use another word as a peg to hang all these things upon". He concluded that "one cannot deny the bearer of these concepts a certain substantiality".
Nevertheless, in 1920, Einstein gave an address at Leiden University in which he commented "More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity". He concluded his address by saying that "according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable."
Other models
In later years there have been a few individuals who advocated a neo-Lorentzian approach to physics, which is Lorentzian in the sense of positing an absolute true state of rest that is undetectable and which plays no role in the predictions of the theory. (No violations of Lorentz covariance have ever been detected, despite strenuous efforts.) Hence these theories resemble the 19th century aether theories in name only. For example, the founder of quantum field theory, Paul Dirac, stated in 1951 in an article in Nature, titled "Is there an Aether?" that "we are rather forced to have an aether". However, Dirac never formulated a complete theory, and so his speculations found no acceptance by the scientific community.
Einstein's views on the aether
When Einstein was still a student at the Zurich Polytechnic in 1900, he was very interested in the idea of aether. His initial proposal for a research thesis was an experiment to measure how fast the Earth was moving through the aether. He wrote: "The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."
In 1916, after Einstein completed his foundational work on general relativity, Lorentz wrote a letter to him in which he speculated that within general relativity the aether was re-introduced. In his response Einstein wrote that one can actually speak about a "new aether", but one may not speak of motion in relation to that aether. This was further elaborated by Einstein in some semi-popular articles (1918, 1920, 1924, 1930).
In 1918, Einstein publicly alluded to that new definition for the first time. Then, in the early 1920s, in a lecture which he was invited to give at Lorentz's university in Leiden, Einstein sought to reconcile the theory of relativity with Lorentzian aether. In this lecture Einstein stressed that special relativity took away the last mechanical property of the aether: immobility. However, he continued that special relativity does not necessarily rule out the aether, because the latter can be used to give physical reality to acceleration and rotation. This concept was fully elaborated within general relativity, in which physical properties (which are partially determined by matter) are attributed to space, but no substance or state of motion can be attributed to that "aether" (by which he meant curved space-time).
In another paper of 1924, named "Concerning the Aether", Einstein argued that Newton's absolute space, in which acceleration is absolute, is the "Aether of Mechanics". And within the electromagnetic theory of Maxwell and Lorentz one can speak of the "Aether of Electrodynamics", in which the aether possesses an absolute state of motion. As regards special relativity, also in this theory acceleration is absolute as in Newton's mechanics. However, the difference from the electromagnetic aether of Maxwell and Lorentz lies in the fact that "because it was no longer possible to speak, in any absolute sense, of simultaneous states at different locations in the aether, the aether became, as it were, four-dimensional since there was no objective way of ordering its states by time alone". Now the "aether of special relativity" is still "absolute", because matter is affected by the properties of the aether, but the aether is not affected by the presence of matter. This asymmetry was solved within general relativity. Einstein explained that the "aether of general relativity" is not absolute, because matter is influenced by the aether, just as matter influences the structure of the aether.
The only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics.
Aether concepts
Aether theories
Aether (classical element)
Aether drag hypothesis
Astral light
| Physical sciences | Theory of relativity | Physics |
18420 | https://en.wikipedia.org/wiki/Basis%20%28linear%20algebra%29 | Basis (linear algebra) | In mathematics, a set B of vectors in a vector space V is called a basis (plural: bases) if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors.
Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set.
A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space.
This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces.
Basis vectors find applications in the study of crystal structures and frames of reference.
Definition
A basis B of a vector space V over a field F (such as the real numbers R or the complex numbers C) is a linearly independent subset of V that spans V. This means that a subset B of V is a basis if it satisfies the two following conditions:
linear independence
for every finite subset {v_1, ..., v_m} of B, if c_1 v_1 + ... + c_m v_m = 0 for some c_1, ..., c_m in F, then c_1 = ... = c_m = 0;
spanning property
for every vector v in V, one can choose scalars a_1, ..., a_n in F and vectors v_1, ..., v_n in B such that v = a_1 v_1 + ... + a_n v_n.
The scalars a_i are called the coordinates of the vector v with respect to the basis B, and by the first property they are uniquely determined.
A vector space that has a finite basis is called finite-dimensional. In this case, the finite subset can be taken as itself to check for linear independence in the above definition.
It is often convenient or even necessary to have an ordering on the basis vectors, for example, when discussing orientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of an ordered basis, which is therefore not simply an unstructured set, but a sequence, an indexed family, or similar; see below.
Examples
The set R^2 of the ordered pairs of real numbers is a vector space under the operations of component-wise addition
(a, b) + (c, d) = (a + c, b + d)
and scalar multiplication
λ(a, b) = (λa, λb),
where λ is any real number. A simple basis of this vector space consists of the two vectors e_1 = (1, 0) and e_2 = (0, 1). These vectors form a basis (called the standard basis) because any vector v = (a, b) of R^2 may be uniquely written as v = a e_1 + b e_2. Any other pair of linearly independent vectors of R^2, such as (1, 1) and (−1, 2), also forms a basis of R^2.
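As a quick numerical illustration of the example above (purely a sketch, using NumPy and the assumed test vector (4, 5)), the following code checks that (1, 1) and (−1, 2) are linearly independent and computes the coordinates of a vector with respect to this basis by solving a small linear system.

    import numpy as np

    # Candidate basis of R^2: the columns of B are the basis vectors (1, 1) and (-1, 2).
    B = np.array([[1.0, -1.0],
                  [1.0,  2.0]])

    # A nonzero determinant means the columns are linearly independent, hence a basis of R^2.
    print(np.linalg.det(B))  # 3.0, nonzero

    # Coordinates of v = (4, 5) with respect to this basis: solve B @ c = v.
    v = np.array([4.0, 5.0])
    c = np.linalg.solve(B, v)
    print(c)  # [4.333..., 0.333...]: v = c[0]*(1, 1) + c[1]*(-1, 2)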
More generally, if F is a field, the set F^n of n-tuples of elements of F is a vector space for similarly defined addition and scalar multiplication. Let e_i be the n-tuple with all components equal to 0, except the i-th, which is 1. Then e_1, ..., e_n is a basis of F^n, which is called the standard basis of F^n.
A different flavor of example is given by polynomial rings. If F is a field, the collection F[X] of all polynomials in one indeterminate X with coefficients in F is an F-vector space. One basis for this space is the monomial basis, consisting of all monomials 1, X, X^2, X^3, ... Any set of polynomials such that there is exactly one polynomial of each degree (such as the Bernstein basis polynomials or Chebyshev polynomials) is also a basis. (Such a set of polynomials is called a polynomial sequence.) But there are also many bases for F[X] that are not of this form.
Properties
Many properties of finite bases result from the Steinitz exchange lemma, which states that, for any vector space V, given a finite spanning set S and a linearly independent set L of n elements of V, one may replace n well-chosen elements of S by the elements of L to get a spanning set containing L, having its other elements in S, and having the same number of elements as S.
Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require the axiom of choice or a weaker form of it, such as the ultrafilter lemma.
If V is a vector space over a field F, then:
If L is a linearly independent subset of a spanning set S of V, then there is a basis B such that L ⊆ B ⊆ S.
V has a basis (this is the preceding property with L being the empty set, and S = V).
All bases of V have the same cardinality, which is called the dimension of V. This is the dimension theorem.
A generating set S is a basis of V if and only if it is minimal, that is, no proper subset of S is also a generating set of V.
A linearly independent set is a basis if and only if it is maximal, that is, it is not a proper subset of any linearly independent set.
If V is a vector space of dimension n, then:
A subset of V with n elements is a basis if and only if it is linearly independent.
A subset of V with n elements is a basis if and only if it is a spanning set of V.
Coordinates
Let V be a vector space of finite dimension n over a field F, and
B = {b_1, ..., b_n}
be a basis of V. By definition of a basis, every v in V may be written, in a unique way, as
v = λ_1 b_1 + ... + λ_n b_n,
where the coefficients λ_1, ..., λ_n are scalars (that is, elements of F), which are called the coordinates of v over B. However, if one talks of the set of the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the same set of coefficients. For example, 3 b_1 + 2 b_2 and 2 b_1 + 3 b_2 have the same set of coefficients {2, 3}, and are different. It is therefore often convenient to work with an ordered basis; this is typically done by indexing the basis elements by the first natural numbers. Then, the coordinates of a vector form a sequence similarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with an origin, is also called a coordinate frame or simply a frame (for example, a Cartesian frame or an affine frame).
Let, as usual, F^n be the set of the n-tuples of elements of F. This set is an F-vector space, with addition and scalar multiplication defined component-wise. The map
φ : (λ_1, ..., λ_n) ↦ λ_1 b_1 + ... + λ_n b_n
is a linear isomorphism from the vector space F^n onto V. In other words, F^n is the coordinate space of V, and the n-tuple φ^(−1)(v) = (λ_1, ..., λ_n) is the coordinate vector of v.
The inverse image by φ of b_i is the n-tuple e_i all of whose components are 0, except the i-th, which is 1. The e_i form an ordered basis of F^n, which is called its standard basis or canonical basis. The ordered basis B is the image by φ of the canonical basis of F^n.
It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis of F^n, and that every linear isomorphism from F^n onto V may be defined as the isomorphism that maps the canonical basis of F^n onto a given ordered basis of V. In other words, it is equivalent to define an ordered basis of V, or a linear isomorphism from F^n onto V.
Change of basis
Let V be a vector space of dimension n over a field F. Given two (ordered) bases B_old = (v_1, ..., v_n) and B_new = (w_1, ..., w_n) of V, it is often useful to express the coordinates of a vector x with respect to B_old in terms of the coordinates with respect to B_new. This can be done by the change-of-basis formula, which is described below. The subscripts "old" and "new" have been chosen because it is customary to refer to B_old and B_new as the old basis and the new basis, respectively. It is useful to describe the old coordinates in terms of the new ones, because, in general, one has expressions involving the old coordinates, and if one wants to obtain equivalent expressions in terms of the new coordinates, this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates.
Typically, the new basis vectors are given by their coordinates over the old basis, that is,
w_j = a_{1,j} v_1 + ... + a_{n,j} v_n.
If (x_1, ..., x_n) and (y_1, ..., y_n) are the coordinates of a vector x over the old and the new basis respectively, the change-of-basis formula is
x_i = a_{i,1} y_1 + ... + a_{i,n} y_n,
for i = 1, ..., n.
This formula may be concisely written in matrix notation. Let A be the matrix of the a_{i,j}, and let X and Y
be the column vectors of the coordinates of x in the old and the new basis respectively; then the formula for changing coordinates is
X = A Y.
The formula can be proven by considering the decomposition of the vector x on the two bases: one has
x = x_1 v_1 + ... + x_n v_n
and
x = y_1 w_1 + ... + y_n w_n = y_1 (a_{1,1} v_1 + ... + a_{n,1} v_n) + ... + y_n (a_{1,n} v_1 + ... + a_{n,n} v_n).
The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, here B_old; that is
x_i = a_{i,1} y_1 + ... + a_{i,n} y_n,
for i = 1, ..., n.
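The change-of-basis formula can be checked numerically; the sketch below is only an illustration, with an assumed new basis w_1 = (1, 1), w_2 = (−1, 2) over the standard basis of R^2 and assumed new coordinates Y = (2, 1).

    import numpy as np

    # Old basis: the standard basis of R^2.
    # New basis vectors, expressed by their coordinates over the old basis:
    # w1 = (1, 1), w2 = (-1, 2). These coordinates form the columns of A.
    A = np.array([[1.0, -1.0],
                  [1.0,  2.0]])

    # Coordinates of a vector over the new basis.
    Y = np.array([2.0, 1.0])

    # Change-of-basis formula: old coordinates X = A @ Y.
    X = A @ Y
    print(X)                      # [1. 4.], since 2*w1 + 1*w2 = (1, 4)

    # Going the other way requires inverting A: Y = A^{-1} @ X.
    print(np.linalg.solve(A, X))  # [2. 1.], recovering the new coordinates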
Related notions
Free module
If one replaces the field occurring in the definition of a vector space by a ring, one gets the definition of a module. For modules, linear independence and spanning sets are defined exactly as for vector spaces, although the term "generating set" is more commonly used than "spanning set".
Like for vector spaces, a basis of a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called a free module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules through free resolutions.
A module over the integers is exactly the same thing as an abelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, if H is a subgroup of a finitely generated free abelian group G (that is, an abelian group that has a finite basis), then there is a basis e_1, ..., e_n of G and nonzero integers a_1, ..., a_k (with k ≤ n) such that a_1 e_1, ..., a_k e_k is a basis of H. For details, see the article on free abelian groups.
Analysis
In the context of infinite-dimensional vector spaces over the real or complex numbers, the term Hamel basis (named after Georg Hamel) or algebraic basis can be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives are orthogonal bases on Hilbert spaces, Schauder bases, and Markushevich bases on normed linear spaces. In the case of the real numbers R viewed as a vector space over the field Q of rational numbers, Hamel bases are uncountable, and have specifically the cardinality of the continuum, which is the cardinal number 2^ℵ_0, where ℵ_0 (aleph-nought) is the smallest infinite cardinal, the cardinal of the integers.
The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a large class of vector spaces including e.g. Hilbert spaces, Banach spaces, or Fréchet spaces.
The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: If X is an infinite-dimensional normed vector space that is complete (i.e. X is a Banach space), then any Hamel basis of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases, and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Consider the space of the sequences x = (x_n) of real numbers that have only finitely many non-zero elements, with the norm ‖x‖ = sup_n |x_n|. Its standard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis.
Example
In the study of Fourier series, one learns that the functions {1} ∪ {sin(nx), cos(nx) : n = 1, 2, 3, ...} are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functions f satisfying ∫ |f(x)|² dx < ∞ over [0, 2π].
These functions are linearly independent, and every function f that is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense that the mean-square error between f and the partial sums a_0/2 + Σ_{k=1}^{n} (a_k cos(kx) + b_k sin(kx)) tends to 0 as n → ∞,
for suitable (real or complex) coefficients ak, bk. But many square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not comprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereas orthonormal bases of these spaces are essential in Fourier analysis.
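To make the "infinite linear combination" concrete, the sketch below numerically approximates the first Fourier coefficients using the usual formulas a_k = (1/π) ∫ f(x) cos(kx) dx and b_k = (1/π) ∫ f(x) sin(kx) dx over [0, 2π]. The choice f(x) = x is an assumption made only for illustration; for it, theory gives a_0 = 2π, a_k = 0 and b_k = −2/k for k ≥ 1.

    import numpy as np

    # Numerically approximate Fourier coefficients of the assumed example f(x) = x on [0, 2*pi].
    x = np.linspace(0.0, 2.0 * np.pi, 20_001)
    f = x

    def a(k):
        # a_k = (1/pi) * integral of f(x) * cos(k x) over [0, 2*pi]
        return np.trapz(f * np.cos(k * x), x) / np.pi

    def b(k):
        # b_k = (1/pi) * integral of f(x) * sin(k x) over [0, 2*pi]
        return np.trapz(f * np.sin(k * x), x) / np.pi

    print(a(0))                                  # ~6.2832, i.e. 2*pi
    print([round(b(k), 4) for k in (1, 2, 3)])   # ~[-2.0, -1.0, -0.6667]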
Geometry
The geometric notions of an affine space, projective space, convex set, and cone have related notions of basis. An affine basis for an n-dimensional affine space is a set of n + 1 points in general linear position. A projective basis is a set of n + 2 points in general position, in a projective space of dimension n. A convex basis of a polytope is the set of the vertices of its convex hull. A cone basis consists of one point per edge of a polygonal cone. | Mathematics | Linear algebra | null |
18422 | https://en.wikipedia.org/wiki/Linear%20algebra | Linear algebra | Linear algebra is the branch of mathematics concerning linear equations such as:
a_1 x_1 + ... + a_n x_n = b,
linear maps such as:
(x_1, ..., x_n) ↦ a_1 x_1 + ... + a_n x_n,
and their representations in vector spaces and through matrices.
Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces.
Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point.
History
The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations.
Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations.
The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy.
In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb.
Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in C have a difference w − z, and the line segments from w to z and from 0 to w − z are of the same length and direction. The segments are equipollent. The four-dimensional system H of quaternions was discovered by W.R. Hamilton in 1843. The term vector was introduced as v = x i + y j + z k representing a point in space. The quaternion difference p − q also produces a segment equipollent to the one from q to p. Other hypercomplex number systems also used the idea of a linear space with a basis.
Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".
Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later.
The telegraph required an explanatory system, and the 1873 publication by James Clerk Maxwell of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations.
The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modelling and simulations.
Vector spaces
Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract.
A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.)
Associativity of addition: u + (v + w) = (u + v) + w
Commutativity of addition: u + v = v + u
Identity element of addition: there exists an element 0 in V, called the zero vector (or simply zero), such that v + 0 = v for all v in V.
Inverse elements of addition: for every v in V, there exists an element −v in V, called the additive inverse of v, such that v + (−v) = 0.
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F.
The first four axioms mean that is an abelian group under addition.
An element of a specific vector space may have various nature; for example, it could be a sequence, a function, a polynomial or a matrix. Linear algebra is concerned with those properties of such objects that are common to all vector spaces.
Linear maps
Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map
T : V → W
that is compatible with addition and scalar multiplication, that is
T(u + v) = T(u) + T(v) and T(av) = aT(v)
for any vectors u, v in V and scalar a in F.
This implies that for any vectors u, v in V and scalars a, b in F, one has
T(au + bv) = aT(u) + bT(v).
When V = W, a linear map T : V → V is also known as a linear operator on V.
A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
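For instance, the kernel and the image of a linear map given by a matrix can both be obtained by Gaussian elimination. The SymPy sketch below is purely illustrative; the matrix M is an arbitrary assumed example, not one from the text.

    from sympy import Matrix

    # The linear map x -> M x from R^3 to R^2, for an arbitrarily chosen matrix M.
    M = Matrix([[1, 2, 3],
                [2, 4, 6]])

    # A basis of the kernel (null space): all x with M x = 0.
    print(M.nullspace())    # two basis vectors, since the rank of M is 1

    # A basis of the image (column space).
    print(M.columnspace())  # one basis vector, e.g. Matrix([1, 2])

    # Both computations reduce to Gaussian elimination (row reduction).
    print(M.rref())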
Subspaces, span, and basis
The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, as it is for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice for implying that W is a vector space.)
For example, given a linear map T : V → W, the image T(V) of V, and the inverse image T^(−1)(0) of the zero vector (called the kernel or null space), are linear subspaces of W and V, respectively.
Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums
a_1 v_1 + a_2 v_2 + ... + a_k v_k,
where v_1, ..., v_k are in S, and a_1, ..., a_k are in F, forms a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S.
A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient.
A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one were to remove w from S. One may continue to remove elements of S until getting a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if L is a linearly independent set, and S is a spanning set such that L ⊆ S, then there is a basis B such that L ⊆ B ⊆ S.
Any two bases of a vector space have the same cardinality, which is called the dimension of ; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field are isomorphic if and only if they have the same dimension.
If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V.
If U_1 and U_2 are subspaces of V, then
dim(U_1 + U_2) = dim U_1 + dim U_2 − dim(U_1 ∩ U_2),
where U_1 + U_2 denotes the span of U_1 ∪ U_2.
Matrices
Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra.
Let V be a finite-dimensional vector space over a field F, and (v_1, v_2, ..., v_m) be a basis of V (thus m is the dimension of V). By definition of a basis, the map
(a_1, ..., a_m) ↦ a_1 v_1 + ... + a_m v_m
is a bijection from F^m, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if F^m is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component.
This isomorphism allows representing a vector by its inverse image under this isomorphism, that is, by the coordinate vector (a_1, ..., a_m) or by the column matrix with entries a_1, ..., a_m.
If W is another finite-dimensional vector space (possibly the same), with a basis (w_1, ..., w_n), a linear map f from W to V is well defined by its values on the basis elements, that is, (f(w_1), ..., f(w_n)). Thus, f is well represented by the list of the corresponding column matrices. That is, if
f(w_j) = a_{1,j} v_1 + ... + a_{m,j} v_m,
for j = 1, ..., n, then f is represented by the matrix (a_{i,j}),
with m rows and n columns.
Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing exactly the same concepts.
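A tiny numerical check of this correspondence (with two arbitrarily chosen matrices, assumed only for illustration): applying the composed map to a vector gives the same result as multiplying by the product of the two matrices.

    import numpy as np

    # f: R^2 -> R^3 and g: R^3 -> R^2, represented by arbitrarily chosen matrices.
    F = np.array([[1, 0], [2, 1], [0, 3]])   # 3 x 2, matrix of f
    G = np.array([[1, 1, 0], [0, 2, 1]])     # 2 x 3, matrix of g

    v = np.array([1, 2])

    # Applying the maps one after the other ...
    composed = G @ (F @ v)
    # ... gives the same result as multiplying by the product matrix G @ F.
    product = (G @ F) @ v

    print(composed, product)   # both print [ 5 14]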
Two matrices that encode the same linear map in different pairs of bases are called equivalent. It can be proved that two matrices are equivalent if and only if one can transform one into the other by elementary row and column operations. For a matrix representing a linear map from W to V, the row operations correspond to change of bases in V and the column operations correspond to change of bases in W. Every matrix is equivalent to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations, and proving these results.
Linear systems
A finite set of linear equations in a finite set of variables (for example, x, y, z, or x_1, ..., x_n) is called a system of linear equations or a linear system.
Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory has been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems.
For example, let
be a linear system.
To such a system, one may associate its matrix
and its right member vector
Let be the linear transformation associated to the matrix . A solution of the system () is a vector
such that
that is an element of the preimage of by .
Let () be the associated homogeneous system, where the right-hand sides of the equations are put to zero:
The solutions of () are exactly the elements of the kernel of or, equivalently, .
Gaussian elimination consists of performing elementary row operations on the augmented matrix
for putting it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is
showing that the system () has the unique solution
It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses.
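The reduction to reduced row echelon form can be reproduced directly; the sketch below uses an assumed example system (not the one from the text), row-reduces its augmented matrix with SymPy, and reads off the unique solution.

    from sympy import Matrix

    # Augmented matrix [A | b] of an assumed example system:
    #    x + 2y +  z = 4
    #   2x -  y +  z = 1
    #    x +  y + 2z = 3
    aug = Matrix([[1,  2, 1, 4],
                  [2, -1, 1, 1],
                  [1,  1, 2, 3]])

    rref_matrix, pivot_columns = aug.rref()
    print(rref_matrix)     # last column holds the unique solution (1, 4/3, 1/3)
    print(pivot_columns)   # (0, 1, 2): every variable is a pivot variable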
Endomorphisms and square matrices
A linear endomorphism is a linear map that maps a vector space to itself.
If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n.
Compared with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, used in many parts of mathematics, including geometric transformations, coordinate changes, and quadratic forms.
Determinant
The determinant of a square matrix A is defined to be
det(A) = Σ_{σ in S_n} (−1)^σ a_{1,σ(1)} ... a_{n,σ(n)},
where S_n is the group of all permutations of n elements, σ is a permutation, and (−1)^σ the parity of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field).
Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or n = 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm.
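As a direct illustration of Cramer's rule on an assumed 2 × 2 system (not an example from the text), each unknown is a ratio of two determinants.

    import numpy as np

    # Assumed system:  3x + 2y = 5,  x - y = 0  ->  solution x = y = 1.
    A = np.array([[3.0,  2.0],
                  [1.0, -1.0]])
    b = np.array([5.0, 0.0])

    det_A = np.linalg.det(A)

    # Cramer's rule: replace the i-th column of A by b and divide the determinants.
    def cramer(i):
        Ai = A.copy()
        Ai[:, i] = b
        return np.linalg.det(Ai) / det_A

    print(cramer(0), cramer(1))   # 1.0 1.0
    print(np.linalg.solve(A, b))  # same answer via Gaussian elimination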
The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense, since this determinant is independent of the choice of the basis.
Eigenvalues and eigenvectors
If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f.
If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes
M z = a z.
Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten
(M − a I) z = 0.
As z is supposed to be nonzero, this means that M − a I is a singular matrix, and thus that its determinant det(M − a I) equals zero. The eigenvalues are thus the roots of the polynomial
det(x I − M).
If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues.
If a basis exists that consists only of eigenvectors, the matrix of on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said diagonalizable, if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable.
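A short numerical sketch of these notions, using an arbitrarily chosen symmetric matrix as the assumed example: the eigenvalues are the roots of the characteristic polynomial, and the matrix of eigenvectors diagonalizes the matrix.

    import numpy as np

    # An arbitrarily chosen symmetric (hence diagonalizable) matrix.
    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eig(M)
    print(eigenvalues)            # [3. 1.] (order may vary)

    # The columns of P are eigenvectors; P^{-1} M P is diagonal with the eigenvalues.
    P = eigenvectors
    D = np.linalg.inv(P) @ M @ P
    print(np.round(D, 10))        # diag(3, 1), up to rounding

    # The eigenvalues are also the roots of the characteristic polynomial det(x I - M).
    print(np.roots(np.poly(M)))   # [3. 1.]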
A symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being the 2 × 2 matrix with a single 1 just above the diagonal and zeros elsewhere
(it cannot be diagonalizable since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero).
When an endomorphism is not diagonalizable, there are bases on which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not require extending the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so as to contain all eigenvalues, and differs from the diagonal form only by some entries that are just above the main diagonal and are equal to 1.
Duality
A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V* or V′.
If v_1, ..., v_n is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map v_i* such that v_i*(v_i) = 1 and v_i*(v_j) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v_1, ..., v_n. (If V is not finite-dimensional, the v_i* may be defined similarly; they are linearly independent, but do not form a basis.)
For v in V, the map
f ↦ f(v)
is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.)
There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation
⟨f, v⟩
for denoting f(v).
Dual map
Let
f : V → W
be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map
f* : W* → V*
between the dual spaces, which is called the dual or the transpose of f.
If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose of M, obtained by exchanging rows and columns.
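A quick numerical check of this statement, with an arbitrarily chosen matrix, linear form and vector (all assumed for illustration): evaluating a linear form on the image of a vector gives the same number as evaluating the transposed map applied to the form on that vector.

    import numpy as np

    # A linear map R^3 -> R^2 with matrix M; a linear form h on R^2 is a row of coefficients.
    M = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])
    h = np.array([4.0, -1.0])    # the linear form h(y) = 4*y1 - y2
    v = np.array([1.0, 1.0, 2.0])

    # h applied to the image of v ...
    print(h @ (M @ v))           # 5.0
    # ... equals the dual (transposed) map applied to h, then evaluated at v.
    print((M.T @ h) @ v)         # 5.0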
If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by
⟨h, M v⟩ = ⟨M^T h, v⟩.
For highlighting this symmetry, the two members of this equality are sometimes written
⟨h | M | v⟩.
Inner-product spaces
Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map
⟨·, ·⟩ : V × V → F
that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:
Conjugate symmetry: ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩.
In R, it is symmetric.
Linearity in the first argument: ⟨au, v⟩ = a⟨u, v⟩ and ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩.
Positive-definiteness: ⟨v, v⟩ ≥ 0,
with equality only for v = 0.
We can define the length of a vector v in V by
‖v‖² = ⟨v, v⟩,
and we can prove the Cauchy–Schwarz inequality:
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
In particular, the quantity
|⟨u, v⟩| / (‖u‖ ‖v‖) is at most 1,
and so we can call this quantity the cosine of the angle between the two vectors.
Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a_1 v_1 + ... + a_n v_n, then a_i = ⟨v, v_i⟩.
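The Gram–Schmidt procedure mentioned above can be sketched in a few lines; this is an illustrative implementation for the standard dot product on R^n, not a library routine, and the input vectors are assumed to be linearly independent.

    import numpy as np

    def gram_schmidt(vectors):
        # Turn a list of linearly independent vectors into an orthonormal basis
        # of their span, using the standard dot product (illustrative sketch).
        basis = []
        for v in vectors:
            w = v.astype(float)
            for q in basis:
                w = w - (q @ v) * q       # remove the component along q
            basis.append(w / np.linalg.norm(w))
        return basis

    q1, q2 = gram_schmidt([np.array([1.0, 1.0]), np.array([-1.0, 2.0])])
    print(q1 @ q2)             # ~0: the two vectors are orthogonal
    print(q1 @ q1, q2 @ q2)    # ~1, ~1: each has length 1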
The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying
⟨T u, v⟩ = ⟨u, T* v⟩.
If T satisfies T T* = T* T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.
Relationship with geometry
There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, which are lines and planes are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra.
Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections, transform lines into lines. It follows that they can be defined, specified and studied in terms of linear maps. This is also the case of homographies and Möbius transformations, when considered as transformations of a projective space.
Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent. In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing considering geometry over arbitrary fields, including finite fields.
Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at elementary level, as a subfield of linear algebra.
Usage and applications
Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. These applications may be divided into several wide categories.
Functional analysis
Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions) and Fourier analysis (orthogonal basis).
Scientific computation
Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, for adapting them to the specificities of the computer (cache size, number of available cores, ...).
Since the 1960s there have been processors with specialized instructions for optimizing the operations of linear algebra, optional array processors under the control of a conventional processor, supercomputers designed for array processing and conventional processors augmented with vector registers.
Some contemporary processors, typically graphics processing units (GPU), are designed with a matrix structure, for optimizing the operations of linear algebra.
Geometry of ambient space
The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy for describing Earth shape; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains.
In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra.
Study of complex systems
Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are searched into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions. This is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable. In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth atmosphere is divided into cells of, say, 100 km of width and 100 km of height.
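As a minimal sketch of this cell-based linearization (a toy example assuming the 1D steady-state heat equation −u''(x) = 1 on [0, 1] with u(0) = u(1) = 0; the specifics are not from the text), discretizing with finite differences turns the problem into a single linear system in which each cell interacts only with its neighbours.

    import numpy as np

    # Discretize -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0 using n interior cells.
    n = 5
    h = 1.0 / (n + 1)

    # Standard tridiagonal finite-difference matrix: one row per cell,
    # coupling each cell only to its two neighbours (the "interaction" is linear).
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.ones(n)

    u = np.linalg.solve(A, f)
    print(u)   # approximates the exact solution u(x) = x(1 - x)/2 at the grid points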
Fluid Mechanics, Fluid Dynamics, and Thermal Energy Systems
Linear algebra, a branch of mathematics dealing with vector spaces and linear mappings between these spaces, plays a critical role in various engineering disciplines, including fluid mechanics, fluid dynamics, and thermal energy systems. Its application in these fields is multifaceted and indispensable for solving complex problems.
In fluid mechanics, linear algebra is integral to understanding and solving problems related to the behavior of fluids. It assists in the modeling and simulation of fluid flow, providing essential tools for the analysis of fluid dynamics problems. For instance, linear algebraic techniques are used to solve systems of differential equations that describe fluid motion. These equations, often complex and non-linear, can be linearized using linear algebra methods, allowing for simpler solutions and analyses.
In the field of fluid dynamics, linear algebra finds its application in computational fluid dynamics (CFD), a branch that uses numerical analysis and data structures to solve and analyze problems involving fluid flows. CFD relies heavily on linear algebra for the computation of fluid flow and heat transfer in various applications. For example, the Navier-Stokes equations, fundamental in fluid dynamics, are often solved using techniques derived from linear algebra. This includes the use of matrices and vectors to represent and manipulate fluid flow fields.
Furthermore, linear algebra plays a crucial role in thermal energy systems, particularly in power systems analysis. It is used to model and optimize the generation, transmission, and distribution of electric power. Linear algebraic concepts such as matrix operations and eigenvalue problems are employed to enhance the efficiency, reliability, and economic performance of power systems. The application of linear algebra in this context is vital for the design and operation of modern power systems, including renewable energy sources and smart grids.
Overall, the application of linear algebra in fluid mechanics, fluid dynamics, and thermal energy systems is an example of the profound interconnection between mathematics and engineering. It provides engineers with the necessary tools to model, analyze, and solve complex problems in these domains, leading to advancements in technology and industry.
Extensions and generalizations
This section presents several related topics that do not generally appear in elementary textbooks on linear algebra, but are commonly considered, in advanced mathematics, to be parts of linear algebra.
Module theory
The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives the structure called a module over R, or R-module.
The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring.
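As a small illustration of the determinant criterion just stated (a sketch under the assumption that the ring is the integers, whose only units are 1 and −1), the following code checks whether a 2×2 integer matrix is invertible over that ring; the matrices are arbitrary examples.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def invertible_over_integers(m):
    """A square matrix over a commutative ring is invertible iff its determinant
    is a unit of the ring; in the integers the only units are +1 and -1."""
    return det2(m) in (1, -1)

print(invertible_over_integers([[2, 1], [1, 1]]))  # det = 1 -> True; the inverse has integer entries
print(invertible_over_integers([[2, 0], [0, 1]]))  # det = 2 -> False over the integers (though invertible over the rationals)
```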
Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules.
Modules over the integers can be identified with abelian groups, since multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over such a ring.
There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than that of the analogous algorithms over a field. For more details, see Linear equation over a ring.
Multilinear algebra and tensors
In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F, where F is the field of scalars. Multilinear maps T : V^n → F can be described via tensor products of elements of V*.
If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).
Topological vector spaces
Vector spaces that are not finite dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness – a normed vector space that is complete is known as a Banach space. A complete metric space along with the additional structure of an inner product (a conjugate symmetric sesquilinear form) is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are the L^p spaces, which are Banach spaces, and especially the L^2 space of square-integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods.
Lung cancer
Lung cancer, also known as lung carcinoma, is a malignant tumor that begins in the lung. Lung cancer is caused by genetic damage to the DNA of cells in the airways, often caused by cigarette smoking or inhaling damaging chemicals. Damaged airway cells gain the ability to multiply unchecked, causing the growth of a tumor. Without treatment, tumors spread throughout the lung, damaging lung function. Eventually lung tumors metastasize, spreading to other parts of the body.
Early lung cancer often has no symptoms and can only be detected by medical imaging. As the cancer progresses, most people experience nonspecific respiratory problems: coughing, shortness of breath, or chest pain. Other symptoms depend on the location and size of the tumor. Those suspected of having lung cancer typically undergo a series of imaging tests to determine the location and extent of any tumors. Definitive diagnosis of lung cancer requires a biopsy of the suspected tumor be examined by a pathologist under a microscope. In addition to recognizing cancerous cells, a pathologist can classify the tumor according to the type of cells it originates from. Around 15% of cases are small-cell lung cancer (SCLC), and the remaining 85% (the non-small-cell lung cancers or NSCLC) are adenocarcinomas, squamous-cell carcinomas, and large-cell carcinomas. After diagnosis, further imaging and biopsies are done to determine the cancer's stage based on how far it has spread.
Treatment for early stage lung cancer includes surgery to remove the tumor, sometimes followed by radiation therapy and chemotherapy to kill any remaining cancer cells. Later stage cancer is treated with radiation therapy and chemotherapy alongside drug treatments that target specific cancer subtypes. Even with treatment, only around 20% of people survive five years on from their diagnosis. Survival rates are higher in those diagnosed at an earlier stage, diagnosed at a younger age, and in women compared to men.
Most lung cancer cases are caused by tobacco smoking. The remainder are caused by exposure to hazardous substances like asbestos and radon gas, or by genetic mutations that arise by chance. Consequently, lung cancer prevention efforts encourage people to avoid hazardous chemicals and quit smoking. Quitting smoking both reduces one's chance of developing lung cancer and improves treatment outcomes in those already diagnosed with lung cancer.
Lung cancer is the most diagnosed and deadliest cancer worldwide, with 2.2 million cases in 2020 resulting in 1.8 million deaths. Lung cancer is rare in those younger than 40; the average age at diagnosis is 70 years, and the average age at death 72. Incidence and outcomes vary widely across the world, depending on patterns of tobacco use. Prior to the advent of cigarette smoking in the 20th century, lung cancer was a rare disease. In the 1950s and 1960s, increasing evidence linked lung cancer and tobacco use, culminating in declarations by most large national health bodies discouraging tobacco use.
Signs and symptoms
Early lung cancer often has no symptoms. When symptoms do arise they are often nonspecific respiratory problems – coughing, shortness of breath, or chest pain – that can differ from person to person. Those who experience coughing tend to report either a new cough, or an increase in the frequency or strength of a pre-existing cough. Around one in four cough up blood, ranging from small streaks in the sputum to large amounts. Around half of those diagnosed with lung cancer experience shortness of breath, while 25–50% experience a dull, persistent chest pain that remains in the same location over time. In addition to respiratory symptoms, some experience systemic symptoms including loss of appetite, weight loss, general weakness, fever, and night sweats.
Some less common symptoms suggest tumors in particular locations. Tumors in the thorax can cause breathing problems by obstructing the trachea or disrupting the nerve to the diaphragm; difficulty swallowing by compressing the esophagus; hoarseness by disrupting the nerves of the larynx; and Horner's syndrome by disrupting the sympathetic nervous system. Horner's syndrome is also common in tumors at the top of the lung, known as Pancoast tumors, which also cause shoulder pain that radiates down the little-finger side of the arm as well as destruction of the topmost ribs. Swollen lymph nodes above the collarbone can indicate a tumor that has spread within the chest. Tumors obstructing bloodflow to the heart can cause superior vena cava syndrome (swelling of the upper body and shortness of breath), while tumors infiltrating the area around the heart can cause fluid buildup around the heart, arrhythmia (irregular heartbeat), and heart failure.
About one in three people diagnosed with lung cancer have symptoms caused by metastases in sites other than the lungs. Lung cancer can metastasize anywhere in the body, with different symptoms depending on the location. Brain metastases can cause headache, nausea, vomiting, seizures, and neurological deficits. Bone metastases can cause pain, bone fractures, and compression of the spinal cord. Metastasis into the bone marrow can deplete blood cells and cause leukoerythroblastosis (immature cells in the blood). Liver metastases can cause liver enlargement, pain in the right upper quadrant of the abdomen, fever, and weight loss.
Lung tumors often cause the release of body-altering hormones, which cause unusual symptoms, called paraneoplastic syndromes. Inappropriate hormone release can cause dramatic shifts in concentrations of blood minerals. Most common is hypercalcemia (high blood calcium) caused by over-production of parathyroid hormone-related protein or parathyroid hormone. Hypercalcemia can manifest as nausea, vomiting, abdominal pain, constipation, increased thirst, frequent urination, and altered mental status. Those with lung cancer also commonly experience hypokalemia (low potassium) due to inappropriate secretion of adrenocorticotropic hormone, as well as hyponatremia (low sodium) due to overproduction of antidiuretic hormone or atrial natriuretic peptide. About one in three people with lung cancer develop nail clubbing, while up to one in ten experience hypertrophic pulmonary osteoarthropathy (nail clubbing, joint soreness, and skin thickening). A variety of autoimmune disorders can arise as paraneoplastic syndromes in those with lung cancer, including Lambert–Eaton myasthenic syndrome (which causes muscle weakness), sensory neuropathies, muscle inflammation, brain swelling, and autoimmune deterioration of cerebellum, limbic system, or brainstem. Up to one in twelve people with lung cancer have paraneoplastic blood clotting, including migratory venous thrombophlebitis, clots in the heart, and disseminated intravascular coagulation (clots throughout the body). Paraneoplastic syndromes involving the skin and kidneys are rare, each occurring in up to 1% of those with lung cancer.
Diagnosis
A person suspected of having lung cancer will have imaging tests done to evaluate the presence, extent, and location of tumors. First, many primary care providers perform a chest X-ray to look for a mass inside the lung. The X-ray may reveal an obvious mass, the widening of the mediastinum (suggestive of spread to lymph nodes there), atelectasis (lung collapse), consolidation (pneumonia), or pleural effusion; however, some lung tumors are not visible by X-ray. Next, many undergo computed tomography (CT) scanning, which can reveal the sizes and locations of tumors.
A definitive diagnosis of lung cancer requires a biopsy of the suspected tissue be histologically examined for cancer cells. Given the location of lung cancer tumors, biopsies can often be obtained by minimally invasive techniques: a fiberoptic bronchoscope that can retrieve tissue (sometimes guided by endobronchial ultrasound), fine needle aspiration, or other imaging-guided biopsy through the skin. Those who cannot undergo a typical biopsy procedure may instead have a liquid biopsy taken (that is, a sample of some body fluid) which may contain circulating tumor DNA that can be detected.
Imaging is also used to assess the extent of cancer spread. Positron emission tomography (PET) scanning or combined PET-CT scanning is often used to locate metastases in the body. Since PET scanning is less sensitive in the brain, the National Comprehensive Cancer Network recommends magnetic resonance imaging (MRI) – or CT where MRI is unavailable – to scan the brain for metastases in those with NSCLC and large tumors, or tumors that have spread to the nearby lymph nodes. When imaging suggests the tumor has spread, the suspected metastasis is often biopsied to confirm that it is cancerous. Lung cancer most commonly metastasizes to the brain, bones, liver, and adrenal glands.
Lung cancer can often appear as a solitary pulmonary nodule on a chest radiograph or CT scan. In lung cancer screening studies as many as 30% of those screened have a lung nodule, the majority of which turn out to be benign. Besides lung cancer many other diseases can also give this appearance, including hamartomas, and infectious granulomas caused by tuberculosis, histoplasmosis, or coccidioidomycosis.
Classification
At diagnosis, lung cancer is classified based on the type of cells the tumor is derived from; tumors derived from different cells progress and respond to treatment differently. There are two main types of lung cancer, categorized by the size and appearance of the malignant cells seen by a histopathologist under a microscope: small cell lung cancer (SCLC; 15% of cases) and non-small-cell lung cancer (NSCLC; 85% of cases). SCLC tumors are often found near the center of the lungs, in the major airways. Their cells appear small with ill-defined boundaries, not much cytoplasm, many mitochondria, and have distinctive nuclei with granular-looking chromatin and no visible nucleoli. NSCLCs comprise a group of three cancer types: adenocarcinoma, squamous-cell carcinoma, and large-cell carcinoma. Nearly 40% of lung cancers are adenocarcinomas. Their cells grow in three-dimensional clumps, resemble glandular cells, and may produce mucin. About 30% of lung cancers are squamous-cell carcinomas. They typically occur close to large airways. The tumors consist of sheets of cells, with layers of keratin. A hollow cavity and associated cell death are commonly found at the center of the tumor. Less than 10% of lung cancers are large-cell carcinomas, so named because the cells are large, with excess cytoplasm, large nuclei, and conspicuous nucleoli. Around 10% of lung cancers are rarer types. These include mixes of the above subtypes like adenosquamous carcinoma, and rare subtypes such as carcinoid tumors, and sarcomatoid carcinomas.
Several lung cancer types are subclassified based on the growth characteristics of the cancer cells. Adenocarcinomas are classified as lepidic (growing along the surface of intact alveolar walls), acinar and papillary, or micropapillary and solid pattern. Lepidic adenocarcinomas tend to be least aggressive, while micropapillary and solid pattern adenocarcinomas are most aggressive.
In addition to examining cell morphology, biopsies are often stained by immunohistochemistry to confirm lung cancer classification. SCLCs bear the markers of neuroendocrine cells, such as chromogranin, synaptophysin, and CD56. Adenocarcinomas tend to express TTF-1 and napsin A; squamous cell carcinomas lack TTF-1 and napsin A, but express p63 and its cancer-specific isoform p40. CK7 and CK20 are also commonly used to differentiate lung cancers. CK20 is found in several cancers, but typically absent from lung cancer. CK7 is present in many lung cancers, but absent from squamous cell carcinomas.
Staging
Lung cancer staging is an assessment of the degree of spread of the cancer from its original source. It is one of the factors affecting both the prognosis and the treatment of lung cancer.
SCLC is typically staged with a relatively simple system: limited stage or extensive stage. Around a third of people are diagnosed at the limited stage, meaning cancer is confined to one side of the chest, within the scope of a single radiotherapy field. The other two thirds are diagnosed at the "extensive stage", with cancer spread to both sides of the chest, or to other parts of the body.
NSCLC – and sometimes SCLC – is typically staged with the American Joint Committee on Cancer's Tumor, Node, Metastasis (TNM) staging system. The size and extent of the tumor (T), spread to regional lymph nodes (N), and distant metastases (M) are scored individually, and combined to form stage groups.
Relatively small tumors are designated T1, which are subdivided by size: tumors ≤ 1 centimeter (cm) across are T1a; 1–2 cm T1b; 2–3 cm T1c. Tumors up to 5 cm across, or those that have spread to the visceral pleura (tissue covering the lung) or main bronchi, are designated T2. T2a designates 3–4 cm tumors; T2b 4–5 cm tumors. T3 tumors are up to 7 cm across, have multiple nodules in the same lobe of the lung, or invade the chest wall, diaphragm (or the nerve that controls it), or area around the heart. Tumors that are larger than 7 cm, have nodules spread in different lobes of a lung, or invade the mediastinum (center of the chest cavity), heart, largest blood vessels that supply the heart, trachea, esophagus, or spine are designated T4. Lymph node staging depends on the extent of local spread: with the cancer metastasized to no lymph nodes (N0), pulmonary or hilar nodes (along the bronchi) on the same side as the tumor (N1), mediastinal or subcarinal lymph nodes (in the middle of the lungs, N2), or lymph nodes on the opposite side of the lung from the tumor (N3). Metastases are staged as no metastases (M0), nearby metastases (M1a; the space around the lung or the heart, or the opposite lung), a single distant metastasis (M1b), or multiple metastases (M1c).
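To make the size thresholds above concrete, here is a simplified sketch that assigns a T category from tumor size alone; it is an illustration of the stated cut-offs only, since real TNM staging also takes invasion of nearby structures, additional nodules, and other criteria into account.

```python
def t_category_by_size(size_cm):
    """Simplified T category from tumor size alone, following the thresholds
    described above; not a substitute for full clinical staging."""
    if size_cm <= 1:
        return "T1a"
    if size_cm <= 2:
        return "T1b"
    if size_cm <= 3:
        return "T1c"
    if size_cm <= 4:
        return "T2a"
    if size_cm <= 5:
        return "T2b"
    if size_cm <= 7:
        return "T3"
    return "T4"

print(t_category_by_size(2.5))  # -> T1c
print(t_category_by_size(6.0))  # -> T3
```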
These T, N, and M scores are combined to designate a stage grouping for the cancer. Cancer limited to smaller tumors is designated stage I. Disease with larger tumors or spread to the nearest lymph nodes is stage II. Cancer with the largest tumors or extensive lymph node spread is stage III. Cancer that has metastasized is stage IV. Each stage is further subdivided based on the combination of T, N, and M scores.
Screening
Some countries recommend that people who are at a high risk of developing lung cancer be screened at different intervals using low-dose CT lung scans. Screening programs may result in early detection of lung tumors in people who are not yet experiencing symptoms of lung cancer, ideally, early enough that the tumors can be successfully treated and result in decreased mortality. There is evidence that regular low-dose CT scans in people at high risk of developing lung cancer reduces total lung cancer deaths by as much as 20%. Despite evidence of benefit in these populations, potential harms of screening include the potential for a person to have a 'false positive' screening result that may lead to unnecessary testing, invasive procedures, and distress. Although rare, there is also a risk of radiation-induced cancer. The United States Preventive Services Task Force recommends yearly screening using low-dose CT in people between 55 and 80 who have a smoking history of at least 30 pack-years. The European Commission recommends that cancer screening programs across the European Union be extended to include low-dose CT lung scans for current or previous smokers. Similarly, The Canadian Task Force for Preventative Health recommends that people who are current or former smokers (smoking history of more than 30 pack years) and who are between the ages of 55–74 years be screened for lung cancer.
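As a rough illustration of the eligibility rule quoted above for the United States Preventive Services Task Force (age 55–80 with at least 30 pack-years), the sketch below computes pack-years and applies those two cut-offs; the function and thresholds are illustrative only, since criteria differ between bodies and are revised over time.

```python
def pack_years(packs_per_day, years_smoked):
    """Pack-years = average packs smoked per day multiplied by years of smoking."""
    return packs_per_day * years_smoked

def meets_uspstf_criteria(age, packs_per_day, years_smoked):
    """Illustration of the criteria as stated above: age 55-80 and >= 30 pack-years.
    Not a clinical tool; guidelines vary and are updated."""
    return 55 <= age <= 80 and pack_years(packs_per_day, years_smoked) >= 30

print(pack_years(1.5, 25))                 # 37.5 pack-years
print(meets_uspstf_criteria(62, 1.5, 25))  # True under the thresholds quoted above
```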
Treatment
Treatment for lung cancer depends on the cancer's specific cell type, how far it has spread, and the person's health. Common treatments for early stage cancer include surgical removal of the tumor, chemotherapy, and radiation therapy. For later-stage cancer, chemotherapy and radiation therapy are combined with newer targeted molecular therapies and immune checkpoint inhibitors. All lung cancer treatment regimens are combined with lifestyle changes and palliative care to improve quality of life.
Small-cell lung cancer
Limited-stage SCLC is typically treated with a combination of chemotherapy and radiotherapy. For chemotherapy, the National Comprehensive Cancer Network and American College of Chest Physicians guidelines recommend four to six cycles of a platinum-based chemotherapeutic – cisplatin or carboplatin – combined with either etoposide or irinotecan. This is typically combined with thoracic radiation therapy – 45 Gray (Gy) twice-daily – alongside the first two chemotherapy cycles. First-line therapy causes remission in up to 80% of those who receive it; however most people relapse with chemotherapy-resistant disease. Those who relapse are given second-line chemotherapies. Topotecan and lurbinectedin are approved by the US FDA for this purpose. Irinotecan, paclitaxel, docetaxel, vinorelbine, etoposide, and gemcitabine are also sometimes used, and are similarly efficacious. Prophylactic cranial irradiation can reduce the risk of brain metastases and improve survival in those with limited-stage disease.
Extensive-stage SCLC is treated first with etoposide along with either cisplatin or carboplatin. Radiotherapy is used only to shrink tumors that are causing particularly severe symptoms. Combining standard chemotherapy with an immune checkpoint inhibitor can improve survival for a minority of those affected, extending the average person's lifespan by around 2 months.
Non-small-cell lung cancer
For stage I and stage II NSCLC the first line of treatment is often surgical removal of the affected lobe of the lung. For those not well enough to tolerate full lobe removal, a smaller portion of lung tissue can be removed by wedge resection or segmentectomy surgery. Those with centrally located tumors and otherwise-healthy respiratory systems may have more extensive surgery to remove an entire lung (pneumonectomy). Experienced thoracic surgeons and a high-volume surgical center improve the chances of survival. Those who are unable or unwilling to undergo surgery can instead receive radiation therapy. Stereotactic body radiation therapy is best practice, typically administered several times over 1–2 weeks. Chemotherapy has little effect in those with stage I NSCLC, and may worsen disease outcomes in those with the earliest disease. In those with stage II disease, chemotherapy is usually initiated six to twelve weeks after surgery, with up to four cycles of cisplatin – or carboplatin in those with kidney problems, neuropathy, or hearing impairment – combined with vinorelbine, pemetrexed, gemcitabine, or docetaxel.
Treatment for those with stage III NSCLC depends on the nature of their disease. Those with more limited spread may undergo surgery to have the tumor and affected lymph nodes removed, followed by chemotherapy and potentially radiotherapy. Those with particularly large tumors (T4) and those for whom surgery is impractical are treated with combination chemotherapy and radiotherapy along with the immunotherapy durvalumab. Combined chemotherapy and radiation enhances survival compared to chemotherapy followed by radiation, though the combination therapy comes with harsher side effects.
Those with stage IV disease are treated with combinations of pain medication, radiotherapy, immunotherapy, and chemotherapy. Many cases of advanced disease can be treated with targeted therapies depending on the genetic makeup of the cancerous cells. Up to 30% of tumors have mutations in the EGFR gene that result in an overactive EGFR protein; these can be treated with EGFR inhibitors osimertinib, erlotinib, gefitinib, afatinib, or dacomitinib – with osimertinib known to be superior to erlotinib and gefitinib, and all superior to chemotherapy alone. Up to 7% of those with NSCLC harbor mutations that result in hyperactive ALK protein, which can be treated with ALK inhibitors crizotinib, or its successors alectinib, brigatinib, and ceritinib. Those treated with ALK inhibitors who relapse can then be treated with the third-generation ALK inhibitor lorlatinib. Up to 5% with NSCLC have overactive MET, which can be inhibited with MET inhibitors capmatinib or tepotinib. Targeted therapies are also available for some cancers with rare mutations. Cancers with hyperactive BRAF (around 2% of NSCLC) can be treated by dabrafenib combined with the MEK inhibitor trametinib; those with activated ROS1 (around 1% of NSCLC) can be inhibited by crizotinib, lorlatinib, or entrectinib; overactive NTRK (<1% of NSCLC) by entrectinib or larotrectinib; active RET (around 1% of NSCLC) by selpercatinib.
People whose NSCLC is not targetable by current molecular therapies instead can be treated with combination chemotherapy plus immune checkpoint inhibitors, which prevent cancer cells from inactivating immune T cells. The chemotherapeutic agent of choice depends on the NSCLC subtype: cisplatin plus gemcitabine for squamous cell carcinoma, cisplatin plus pemetrexed for non-squamous cell carcinoma. Immune checkpoint inhibitors are most effective against tumors that express the protein PD-L1, but are sometimes effective in those that do not. Treatment with pembrolizumab, atezolizumab, or combination nivolumab plus ipilimumab are all superior to chemotherapy alone against tumors expressing PD-L1. Those who relapse on the above are treated with second-line chemotherapeutics docetaxel and ramucirumab.
Palliative care
Integrating palliative care (medical care focused on improving symptoms and lessening discomfort) into lung cancer treatment from the time of diagnosis improves the survival time and quality of life of those with lung cancer. Particularly common symptoms of lung cancer are shortness of breath and pain. Supplemental oxygen, improved airflow, re-orienting an affected person in bed, and low-dose morphine can all improve shortness of breath. In around 20 to 30% of those with lung cancer – particularly those with late-stage disease – growth of the tumor can narrow or block the airway, causing coughing and difficulty breathing. Obstructing tumors can be surgically removed where possible, though typically those with airway obstruction are not well enough for surgery. In such cases the American College of Chest Physicians recommends opening the airway by inserting a stent, attempting to shrink the tumor with localized radiation (brachytherapy), or physically removing the blocking tissue by bronchoscopy, sometimes aided by thermal or laser ablation. Other causes of lung cancer-associated shortness of breath can be treated directly, such as antibiotics for a lung infection, diuretics for pulmonary edema, benzodiazepines for anxiety, and steroids for airway obstruction.
Up to 92% of those with lung cancer report pain, either from tissue damage at the tumor site(s) or nerve damage. The World Health Organization (WHO) has developed a three-tiered system for managing cancer pain. For those with mild pain (tier one), the WHO recommends acetaminophen or a nonsteroidal anti-inflammatory drug. Around a third of people experience moderate (tier two) or severe (tier three) pain, for which the WHO recommends opioid painkillers. Opioids are typically effective at easing nociceptive pain (pain caused by damage to various body tissues). Opioids are occasionally effective at easing neuropathic pain (pain caused by nerve damage). Neuropathic agents such as anticonvulsants, tricyclic antidepressants, and serotonin–norepinephrine reuptake inhibitors are often used to ease neuropathic pain, either alone or in combination with opioids. In many cases, targeted radiotherapy can be used to shrink tumors, reducing pain and other symptoms caused by tumor growth.
Individuals who have advanced disease and are approaching end-of-life can benefit from dedicated end-of-life care to manage symptoms and ease suffering. As in earlier disease, pain and difficulty breathing are common, and can be managed with opioid pain medications, transitioning from oral medication to injected medication if the affected individual loses the ability to swallow. Coughing is also common, and can be managed with opioids or cough suppressants. Some experience terminal delirium – confused behavior, unexplained movements, or a reversal of the sleep-wake cycle – which can be managed by antipsychotic drugs, low-dose sedatives, and investigating other causes of discomfort such as low blood sugar, constipation, and sepsis. In the last few days of life, many develop terminal secretions – pooled fluid in the airways that can cause a rattling sound while breathing. This is thought not to cause respiratory problems, but can distress family members and caregivers. Terminal secretions can be reduced by anticholinergic medications. Even those who are non-communicative or have reduced consciousness may be able to experience cancer-related pain, so pain medications are typically continued until the time of death.
Prognosis
Around 19% of people diagnosed with lung cancer survive five years from diagnosis, though prognosis varies based on the stage of the disease at diagnosis and the type of lung cancer. Prognosis is better for people with lung cancer diagnosed at an earlier stage; those diagnosed at the earliest TNM stage, IA1 (small tumor, no spread), have a two-year survival of 97% and five-year survival of 92%. Those diagnosed at the most-advanced stage, IVB, have a two-year survival of 10% and a five-year survival of 0%. Five-year survival is higher in women (22%) than men (16%). Women tend to be diagnosed with less-advanced disease, and have better outcomes than men diagnosed at the same stage. Average five-year survival also varies across the world, with particularly high five-year survival in Japan (33%), and five-year survival above 20% in 12 other countries: Mauritius, Canada, the US, China, South Korea, Taiwan, Israel, Latvia, Iceland, Sweden, Austria, and Switzerland.
SCLC is particularly aggressive. 10–15% of people survive five years after a SCLC diagnosis. As with other types of lung cancer, the extent of disease at diagnosis also influences prognosis. The average person diagnosed with limited-stage SCLC survives 12–20 months from diagnosis; with extensive-stage SCLC around 12 months. While SCLC often responds initially to treatment, most people eventually relapse with chemotherapy-resistant cancer, surviving an average 3–4 months from the time of relapse. Those with limited stage SCLC that go into complete remission after chemotherapy and radiotherapy have a 50% chance of brain metastases developing within the next two years – a chance reduced by prophylactic cranial irradiation.
Several other personal and disease factors are associated with improved outcomes. Those diagnosed at a younger age tend to have better outcomes. Those who smoke or experience weight loss as a symptom tend to have worse outcomes. Tumor mutations in KRAS are associated with reduced survival.
Experience
The uncertainty of lung cancer prognosis often causes stress, and makes future planning difficult, for those with lung cancer and their families. Those whose cancer goes into remission often experience fear of their cancer returning or progressing, associated with poor quality of life, negative mood, and functional impairment. This fear is exacerbated by frequent or prolonged surveillance imaging, and other reminders of cancer risks.
Causes
Lung cancer is caused by genetic damage to the DNA of lung cells. These changes are sometimes random, but are typically induced by breathing in toxic substances such as cigarette smoke. Cancer-causing genetic changes affect the cell's normal functions, including cell proliferation, programmed cell death (apoptosis), and DNA repair. Eventually, cells gain enough genetic changes to grow uncontrollably, forming a tumor, and eventually spreading within and then beyond the lung. Rampant tumor growth and spread causes the symptoms of lung cancer. If unstopped, the spreading tumor will eventually cause the death of affected individuals.
Smoking
Tobacco smoking is by far the major contributor to lung cancer, causing 80% to 90% of cases. Lung cancer risk increases with quantity of cigarettes consumed. Tobacco smoking's carcinogenic effect is due to various chemicals in tobacco smoke that cause DNA mutations, increasing the chance of cells becoming cancerous. The International Agency for Research on Cancer identifies at least 50 chemicals in tobacco smoke as carcinogenic, and the most potent are tobacco-specific nitrosamines. Exposure to these chemicals causes several kinds of DNA damage: DNA adducts, oxidative stress, and breaks in the DNA strands. Being around tobacco smoke – called passive smoking – can also cause lung cancer. Living with a tobacco smoker increases one's risk of developing lung cancer by 24%. An estimated 17% of lung cancer cases in those who do not smoke are caused by high levels of environmental tobacco smoke.
Vaping may be a risk factor for lung cancer, though likely a smaller one than cigarette smoking; as of 2021, further research is needed because of the long time it can take for lung cancer to develop after exposure to carcinogens.
The smoking of non-tobacco products is not known to be associated with lung cancer development. Marijuana smoking does not seem to independently cause lung cancer – despite the relatively high levels of tar and known carcinogens in marijuana smoke. The relationship between smoking cocaine and developing lung cancer has not been studied as of 2020.
Environmental exposures
Exposure to a variety of other toxic chemicals – typically encountered in certain occupations – is associated with an increased risk of lung cancer. Occupational exposures to carcinogens cause 9–15% of lung cancer. A prominent example is asbestos, which causes lung cancer either directly or indirectly by inflaming the lung. Exposure to all commercially available forms of asbestos increases cancer risk, and cancer risk increases with time of exposure. Asbestos and cigarette smoking increase risk synergistically – that is, the risk of someone who smokes and has asbestos exposure dying from lung cancer is much higher than would be expected from adding the two risks together. Similarly, exposure to radon, a naturally occurring breakdown product of the Earth's radioactive elements, is associated with increased lung cancer risk. Radon levels vary with geography. Underground miners have the greatest exposure; however even the lower levels of radon that seep into residential spaces can increase occupants' risk of lung cancer. Like asbestos, cigarette smoking and radon exposure increase risk synergistically. Radon exposure is responsible for between 3% and 14% of lung cancer cases.
Several other chemicals encountered in various occupations are also associated with increased lung cancer risk including arsenic used in wood preservation, pesticide application, and some ore smelting; ionizing radiation encountered during uranium mining; vinyl chloride in papermaking; beryllium in jewelers, ceramics workers, missile technicians, and nuclear reactor workers; chromium in stainless steel production, welding, and hide tanning; nickel in electroplaters, glass workers, metal workers, welders, and those who make batteries, ceramics, and jewelry; and diesel exhaust encountered by miners.
Exposure to air pollution, especially particulate matter released by motor vehicle exhaust and fossil fuel-burning power plants, increases the risk of lung cancer. Indoor air pollution from burning wood, charcoal, or crop residue for cooking and heating has also been linked to an increased risk of developing lung cancer. The International Agency for Research on Cancer has classified emissions from household burning of coal and biomass as "carcinogenic" and "probably carcinogenic", respectively.
Other diseases
Several other diseases that cause inflammation of the lung increase one's risk of lung cancer. This association is strongest for chronic obstructive pulmonary disease – the risk is highest in those with the most inflammation, and reduced in those whose inflammation is treated with inhaled corticosteroids. Other inflammatory lung and immune system diseases such as alpha-1 antitrypsin deficiency, interstitial fibrosis, scleroderma, Chlamydia pneumoniae infection, tuberculosis, and HIV infection are associated with increased risk of developing lung cancer. Epstein–Barr virus is associated with the development of the rare lung cancer lymphoepithelioma-like carcinoma in people from Asia, but not in people from Western nations. A role for several other infectious agents – namely human papillomaviruses, BK virus, JC virus, human cytomegalovirus, SV40, measles virus, and Torque teno virus – in lung cancer development has been studied but remains inconclusive as of 2020.
Genetics
Particular gene combinations may make some people more susceptible to lung cancer. Close family members of those with lung cancer have around twice the risk of developing lung cancer as an average person, even after controlling for occupational exposure and smoking habits. Genome-wide association studies have identified many gene variants associated with lung cancer risk, each of which contributes a small risk increase. Many of these genes participate in pathways known to be involved in carcinogenesis, namely DNA repair, inflammation, the cell division cycle, cellular stress responses, and chromatin remodeling. Some rare genetic disorders that increase the risk of various cancers also increase the risk of lung cancer, namely retinoblastoma and Li–Fraumeni syndrome.
Pathogenesis
As with all cancers, lung cancer is triggered by mutations that allow tumor cells to endlessly multiply, stimulate blood vessel growth, avoid apoptosis (programmed cell death), generate pro-growth signalling molecules, ignore anti-growth signalling molecules, and eventually spread into surrounding tissue or metastasize throughout the body. Different tumors can acquire these abilities through different mutations, though generally cancer-contributing mutations activate oncogenes and inactivate tumor suppressors. Some mutations – called "driver mutations" – are particularly common in adenocarcinomas, and contribute disproportionately to tumor development. These typically occur in the receptor tyrosine kinases EGFR, BRAF, MET, KRAS, and PIK3CA. Similarly, some adenocarcinomas are driven by chromosomal rearrangements that result in overexpression of tyrosine kinases ALK, ROS1, NTRK, and RET. A given tumor will typically have just one driver mutation. In contrast, SCLCs rarely have these driver mutations, and instead often have mutations that have inactivated the tumor suppressors p53 and RB. A cluster of tumor suppressor genes on the short arm of chromosome 3 are often lost early in the development of all lung cancers.
Prevention
Smoking cessation
Those who smoke can reduce their lung cancer risk by quitting smoking – the risk reduction is greater the longer a person goes without smoking. Self-help programs tend to have little influence on success of smoking cessation, whereas combined counseling and pharmacotherapy improve cessation rates. The US FDA has approved nicotine replacement therapies, the antidepressant bupropion, and varenicline as first-line therapies to aid in smoking cessation. Clonidine and nortriptyline are recommended second-line therapies. The majority of those diagnosed with lung cancer attempt to quit smoking; around half succeed. Even after lung cancer diagnosis, smoking cessation improves treatment outcomes, reducing cancer treatment toxicity and failure rates, and lengthening survival time.
At a societal level, smoking cessation can be promoted by tobacco control policies that make tobacco products more difficult to obtain or use. Many such policies are mandated or recommended by the WHO Framework Convention on Tobacco Control, ratified by 182 countries, representing over 90% of the world's population. The WHO groups these policies into six intervention categories, each of which has been shown to be effective in reducing the cost of tobacco-induced disease burden on a population:
increasing the price of tobacco by raising taxes;
banning tobacco use in public places to reduce exposure;
banning tobacco advertisements;
publicizing the dangers of tobacco products;
instituting help programs for those attempting to quit smoking; and
monitoring population-level tobacco use and the effectiveness of tobacco control policies.
Policies implementing each intervention are associated with decreases in tobacco smoking prevalence. The more policies implemented, the greater the reduction. Reducing access to tobacco for adolescents is particularly effective at decreasing uptake of habitual smoking, and adolescent demand for tobacco products is particularly sensitive to increases in cost.
Diet and lifestyle
Several foods and dietary supplements have been associated with lung cancer risk. High consumption of some animal products – red meat (but not other meats or fish), saturated fats, as well as nitrosamines and nitrites (found in salted and smoked meats) – is associated with an increased risk of developing lung cancer. In contrast, high consumption of fruits and vegetables is associated with a reduced risk of lung cancer, particularly consumption of cruciferous vegetables and raw fruits and vegetables. Based on the beneficial effects of fruits and vegetables, supplementation with several individual vitamins has been studied. Supplementation with vitamin A or beta-carotene had no effect on lung cancer, and instead slightly increased mortality. Dietary supplementation with vitamin E or retinoids similarly had no effect. Consumption of polyunsaturated fats, tea, alcoholic beverages, and coffee is also associated with a reduced risk of developing lung cancer.
Along with diet, body weight and exercise habits are also associated with lung cancer risk. Being overweight is associated with a lower risk of developing lung cancer, possibly due to the tendency of those who smoke cigarettes to have a lower body weight. However, being underweight is also associated with a reduced lung cancer risk. Some studies have shown those who exercise regularly or have better cardiovascular fitness to have a lower risk of developing lung cancer.
Epidemiology
Worldwide, lung cancer is the most diagnosed type of cancer, and the leading cause of cancer death. In 2020, 2.2 million new cases were diagnosed, and 1.8 million people died from lung cancer, representing 18% of all cancer deaths. Lung cancer deaths are expected to rise globally to nearly 3 million annual deaths by 2035, due to high rates of tobacco use and aging populations. Lung cancer is rare among those younger than 40; after that, cancer rates increase with age, stabilizing around age 80. The median age of a person diagnosed with lung cancer is 70; the median age of death is 72.
Lung cancer incidence varies by geography and sex, with the highest rates in Micronesia, Polynesia, Europe, Asia, and North America; and lowest rates in Africa and Central America. Globally, around 8% of men and 6% of women develop lung cancer in their lifetimes. The ratio of lung cancer cases in men to women varies considerably by geography, from as high as nearly 12:1 in Belarus, to 1:1 in Brazil, likely due to differences in smoking patterns.
Lung cancer risk is influenced by environmental exposure, namely cigarette smoking, as well as occupational risks in mining, shipbuilding, petroleum refining, and occupations that involve asbestos exposure. People who have smoked cigarettes account for 85–90% of lung cancer cases, and 15% of smokers develop lung cancer. Non-smokers' risk of developing lung cancer is also influenced by tobacco smoking; secondhand smoke (that is, being around tobacco smoke) increases risk of developing lung cancer around 30%, with risk correlated to duration of exposure. As the global incidence of lung cancer decreases in parallel with declining smoking rates in developed countries, the incidence of lung cancer in individuals who have never smoked is stable or increasing.
History
Lung cancer was uncommon before the advent of cigarette smoking. Surgeon Alton Ochsner recalled that as a Washington University medical student in 1919, his entire medical school class was summoned to witness an autopsy of a man who had died from lung cancer, and told they may never see such a case again. In Isaac Adler's 1912 Primary Malignant Growths of the Lungs and Bronchi, he called lung cancer "among the rarest forms of disease"; Adler tabulated the 374 cases of lung cancer that had been published to that time, concluding the disease was increasing in incidence. By the 1920s, several theories had been put forward linking the increase in lung cancer to various chemical exposures that had increased including tobacco smoke, asphalt dust, industrial air pollution, and poisonous gasses from World War I.
Over the following decades, growing scientific evidence linked lung cancer to cigarette consumption. Through the 1940s and early 1950s, several case-control studies showed that those with lung cancer were more likely to have smoked cigarettes compared to those without lung cancer. These were followed by several prospective cohort studies in the 1950s – including the first report of the British Doctors Study in 1954 – all of which showed that those who smoked tobacco were at dramatically increased risk of developing lung cancer.
A 1953 study showing that tar from cigarette smoke could cause tumors in mice attracted attention in the popular press, with features in Life and Time magazines. Facing public concern and falling stock prices, the CEOs of six of the largest American tobacco companies gathered in December 1953. They enlisted the help of public relations firm Hill & Knowlton to craft a multi-pronged strategy aiming to distract from accumulating evidence by funding tobacco-friendly research, declaring the link to lung cancer "controversial", and demanding ever-more research to settle this purported controversy. At the same time, internal research at the major tobacco companies supported the link between tobacco and lung cancer; though these results were kept secret from the public.
As evidence linking tobacco use with lung cancer mounted, various health bodies announced official positions linking the two. In 1962, the United Kingdom's Royal College of Physicians officially concluded that cigarette smoking causes lung cancer, prompting the United States Surgeon General to empanel (enroll or enlist) an advisory committee, which deliberated in secret over nine sessions between November 1962 and December 1963. The committee's report, published in January 1964, firmly concluded that cigarette smoking "far outweighs all other factors" in causing lung cancer. The report received substantial coverage in the popular press, and is widely seen as a turning point for public recognition that tobacco smoking causes lung cancer.
The connection with radon gas was first recognized among miners in Germany's Ore Mountains. As early as 1500, miners were noted to develop a deadly disease called "mountain sickness" ("Bergkrankheit"), identified as lung cancer by the late 19th century. By 1938, up to 80% of miners in affected regions died from the disease. In the 1950s radon and its breakdown products became established as causes of lung cancer in miners. Based largely on studies of miners, the International Agency for Research on Cancer classified radon as "carcinogenic to humans" in 1988. In 1956, a study revealed radon in Swedish residences. Over the following decades, high radon concentrations were found in residences across the world; by the 1980s many countries had established national radon programs to catalog and mitigate residential radon.
The first successful pneumonectomy for lung cancer was performed in 1933 by Evarts Graham at Barnes Hospital in St. Louis, Missouri. Over the following decades, surgical development focused on sparing as much healthy lung tissue as possible, with the lobectomy surpassing the pneumonectomy in frequency by the 1960s, and the wedge resection appearing in the early 1970s. This trend continued with the development of video-assisted thoracoscopic surgery in the 1980s, now widely performed for many lung cancer surgeries.
Research
While lung cancer is the deadliest type of cancer, it receives the third-most funding from the US National Cancer Institute (NCI, the world's largest cancer research funder) behind brain cancers and breast cancer. Despite high levels of gross research funding, lung cancer funding per death lags behind many other cancers, with around $3,200 spent on lung cancer research in 2022 per US death, considerably lower than that for brain cancer ($22,000 per death), breast cancer ($14,000 per death), and cancer as a whole ($11,000 per death). A similar trend holds for private nonprofit organizations. Annual revenues of lung cancer-focused nonprofits rank fifth among cancer types, but lung cancer nonprofits have lower revenue than would be expected for the number of lung cancer cases, deaths, and potential years of life lost.
Despite this, many investigational lung cancer treatments are undergoing clinical trials – with nearly 2,250 active clinical trials registered as of 2021. Of these, a large plurality are testing radiotherapy regimens (26% of trials) and surgical techniques (22%). Many others are testing targeted anticancer drugs, with targets including EGFR (17% of trials), microtubules (12%), VEGF (12%), immune pathways (10%), mTOR (1%), and histone deacetylases (<1%).
Leap second
A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC), to accommodate the difference between precise time (International Atomic Time (TAI), as measured by atomic clocks) and imprecise observed solar time (UT1), which varies due to irregularities and long-term slowdown in the Earth's rotation. The UTC time standard, widely used for international timekeeping and as the reference for civil time in most countries, uses TAI and consequently would run ahead of observed solar time unless it is reset to UT1 as needed. The leap second facility exists to provide this adjustment. The leap second was introduced in 1972. Since then, 27 leap seconds have been added to UTC, with the most recent occurring on December 31, 2016. All have so far been positive leap seconds, adding a second to a UTC day; while it is possible for a negative leap second to be needed, this has not happened yet.
Because the Earth's rotational speed varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS), to ensure that the difference between the UTC and UT1 readings will never exceed 0.9 seconds.
This practice has proven disruptive, particularly in the twenty-first century and especially in services that depend on precise timestamping or time-critical process control. Furthermore, because not all computers apply the leap-second adjustment, they can display times that differ from those of systems that have been adjusted. After many years of discussions by different standards bodies, in November 2022, at the 27th General Conference on Weights and Measures, it was decided to abandon the leap second by or before 2035.
History
In about AD 140, Ptolemy, the Alexandrian astronomer, sexagesimally subdivided both the mean solar day and the true solar day to at least six places after the sexagesimal point, and he used simple fractions of both the equinoctial hour and the seasonal hour, none of which resemble the modern second. Muslim scholars, including al-Biruni in 1000, subdivided the mean solar day into 24 equinoctial hours, each of which was subdivided sexagesimally, that is into the units of minute, second, third, fourth and fifth, creating the modern second as of the mean solar day in the process. With this definition, the second was proposed in 1874 as the base unit of time in the CGS system of units. Soon afterwards Simon Newcomb and others discovered that Earth's rotation period varied irregularly, so in 1952, the International Astronomical Union (IAU) defined the second as a fraction of the sidereal year. In 1955, considering the tropical year to be more fundamental than the sidereal year, the IAU redefined the second as the fraction of the 1900.0 mean tropical year. In 1956, a slightly more precise value of was adopted for the definition of the second by the International Committee for Weights and Measures, and in 1960 by the General Conference on Weights and Measures, becoming a part of the International System of Units (SI).
Eventually, this definition too was found to be inadequate for precise time measurements, so in 1967, the SI second was again redefined as 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state. That value agreed to 1 part in 1010 with the astronomical (ephemeris) second then in use. It was also close to of the mean solar day as averaged between years 1750 and 1892.
However, for the past several centuries, the length of the mean solar day has been increasing by about 1.4–1.7 ms per century, depending on the averaging time. By 1961, the mean solar day was already a millisecond or two longer than SI seconds. Therefore, time standards that change the date after precisely SI seconds, such as the International Atomic Time (TAI), would become increasingly ahead of time standards tied to the mean solar day, such as Universal Time (UT).
When the Coordinated Universal Time (UTC) standard was instituted in 1960, based on atomic clocks, it was felt necessary to maintain agreement with UT, which, until then, had been the reference for broadcast time services. From 1960 to 1971, the rate of UTC atomic clocks was offset from a pure atomic time scale by the BIH to remain synchronized with UT2, a practice known as the "rubber second". The rate of UTC was decided at the start of each year, and was offset from the rate of atomic time by −150 parts per 10 for 1960–1962, by −130 parts per 10 for 1962–63, by −150 parts per 10 again for 1964–65, and by −300 parts per 10 for 1966–1971. Alongside the shift in rate, an occasional 0.1 s step (0.05 s before 1963) was needed. This predominantly frequency-shifted rate of UTC was broadcast by MSF, WWV, and CHU among other time stations. In 1966, the CCIR approved "stepped atomic time" (SAT), which adjusted atomic time with more frequent 0.2 s adjustments to keep it within 0.1 s of UT2, because it had no rate adjustments. SAT was broadcast by WWVB among other time stations.
In 1972, the leap-second system was introduced so that the UTC second could be set exactly equal to the standard SI second, while still keeping the UTC time of day and changes of UTC date synchronized with those of UT1. By then, the UTC clock was already 10 seconds behind TAI, which had been synchronized with UT1 in 1958 but had been counting true SI seconds since then. After 1972, both clocks have been ticking in SI seconds, so the difference between their displays at any time is 10 seconds plus the total number of leap seconds that have been applied to UTC as of that time; to date, 27 leap seconds have been applied to UTC, so the difference is 10 + 27 = 37 seconds. The most recent leap second was on December 31, 2016.
Insertion of leap seconds
The scheduling of leap seconds was initially delegated to the Bureau International de l'Heure (BIH), but passed to the International Earth Rotation and Reference Systems Service (IERS) on 1 January 1988. IERS usually decides to apply a leap second whenever the difference between UTC and UT1 approaches 0.6 s, in order to keep the difference between UTC and UT1 from exceeding 0.9 s.
The UTC standard allows leap seconds to be applied at the end of any UTC month, with first preference to June and December and second preference to March and September. To date, all of them have been inserted at the end of either 30 June or 31 December. IERS publishes announcements every six months, stating whether a leap second is to occur or not, in its "Bulletin C". Such announcements are typically published well in advance of each possible leap second date – usually in early January for 30 June and in early July for 31 December. Some time signal broadcasts give voice announcements of an impending leap second.
Between 1972 and 2020, a leap second has been inserted about every 21 months, on average. However, the spacing is quite irregular and apparently increasing: there were no leap seconds in the six-year interval between 1 January 1999, and 31 December 2004, but there were nine leap seconds in the eight years 1972–1979. Since the introduction of leap seconds, 1972 has been the longest year on record: 366 days, 364 of which were 86,400 seconds long and two of which were 86,401 seconds long, for a total of 31,622,402 seconds.
Unlike leap days, which begin after 28 February, 23:59:59 local time, UTC leap seconds occur simultaneously worldwide; for example, the leap second on 31 December 2005, 23:59:60 UTC was 31 December 2005, 18:59:60 (6:59:60 p.m.) in U.S. Eastern Standard Time and 1 January 2006, 08:59:60 (a.m.) in Japan Standard Time.
Process
When it is mandated, a positive leap second is inserted between second 23:59:59 of a chosen UTC calendar date and second 00:00:00 of the following date. The definition of UTC states that the last day of December and June are preferred, with the last day of March or September as second preference, and the last day of any other month as third preference. All leap seconds (as of 2019) have been scheduled for either 30 June or 31 December. The extra second is displayed on UTC clocks as 23:59:60. On clocks that display local time tied to UTC, the leap second may be inserted at the end of some other hour (or half-hour or quarter-hour), depending on the local time zone. A negative leap second would suppress second 23:59:59 of the last day of a chosen month so that second 23:59:58 of that date would be followed immediately by second 00:00:00 of the following date. Since the introduction of leap seconds, the mean solar day has outpaced atomic time only for very brief periods and has not triggered a negative leap second.
Recent changes to the Earth's rotation rate have made it more likely that a negative leap second will be required before the abolition of leap seconds in 2035.
Slowing rotation of the Earth
Leap seconds are irregularly spaced because the Earth's rotation speed changes irregularly. Indeed, the Earth's rotation is quite unpredictable in the long term, which explains why leap seconds are announced only six months in advance.
A mathematical model of the variations in the length of the solar day was developed by F. R. Stephenson and L. V. Morrison, based on records of eclipses for the period 700 BC to 1623, telescopic observations of occultations for the period 1623 until 1967 and atomic clocks thereafter. The model shows a steady increase of the mean solar day by 1.70 ms (±0.05 ms) per century, plus a periodic shift of about 4 ms amplitude and period of about 1,500 yr. Over the last few centuries, rate of lengthening of the mean solar day has been about 1.4 ms per century, being the sum of the periodic component and the overall rate.
The main reason for the slowing down of the Earth's rotation is tidal friction, which alone would lengthen the day by 2.3 ms/century. Other contributing factors are the movement of the Earth's crust relative to its core, changes in mantle convection, and any other events or processes that cause a significant redistribution of mass. These processes change the Earth's moment of inertia, affecting the rate of rotation due to the conservation of angular momentum. Some of these redistributions increase Earth's rotational speed, shorten the solar day and oppose tidal friction. For example, glacial rebound shortens the solar day by 0.6 ms/century and the 2004 Indian Ocean earthquake is thought to have shortened it by 2.68 microseconds.
It is a mistake, however, to consider leap seconds as indicators of a slowing of Earth's rotation rate; they are indicators of the accumulated difference between atomic time and time measured by Earth's rotation. Measurements of the length of day show that in 1972 the average day was a few milliseconds longer than 86,400 SI seconds, whereas in 2016 the excess was much smaller, indicating an overall increase in Earth's rotation rate over that time period. Positive leap seconds were inserted during that time because the annual average length of day remained greater than 86,400 SI seconds, not because of any slowing of Earth's rotation rate.
In 2021, it was reported that Earth was spinning faster in 2020 and experienced the 28 shortest days since 1960, each of which lasted less than 86,400 SI seconds. This caused engineers worldwide to discuss a negative leap second and other possible timekeeping measures, some of which could eliminate leap seconds. The shortest day ever recorded was 29 June 2022, at 1.59 milliseconds less than 24 hours. In a 2024 paper published in Nature, Duncan Agnew of the Scripps Institution of Oceanography projects that the water from increasing ice cap melting will migrate to the equator and thus cause the rate of rotation to slow down again.
Future of leap seconds
The TAI and UT1 time scales are precisely defined, the former by atomic clocks (and thus independent of Earth's rotation) and the latter by astronomical observations (which measure actual planetary rotation and thus the solar time at the IERS Reference Meridian at Greenwich). UTC (on which civil time is usually based) is a compromise: it ticks in atomic seconds but is periodically reset by a leap second to match UT1.
The irregularity and unpredictability of UTC leap seconds is problematic for several areas, especially computing (see below). With increasing requirements for timestamp accuracy in systems such as process automation and high-frequency trading, this raises a number of issues. Consequently, the long-standing practice of inserting leap seconds is under review by the relevant international standards body.
International proposals for elimination of leap seconds
On 5 July 2005, the Head of the Earth Orientation Center of the IERS sent a notice to IERS Bulletins C and D subscribers, soliciting comments on a U.S. proposal before the ITU-R Study Group 7's WP7-A to eliminate leap seconds from the UTC broadcast standard before 2008 (the ITU-R is responsible for the definition of UTC). It was expected to be considered in November 2005, but the discussion has since been postponed. Under the proposal, leap seconds would be technically replaced by leap hours as an attempt to satisfy the legal requirements of several ITU-R member nations that civil time be astronomically tied to the Sun.
A number of objections to the proposal have been raised. P. Kenneth Seidelmann, editor of the Explanatory Supplement to the Astronomical Almanac, wrote a letter lamenting the lack of consistent public information about the proposal and adequate justification. In an op-ed for Science News, Steve Allen of the University of California, Santa Cruz said that the process has a large impact on astronomers.
At the 2014 General Assembly of the International Union of Radio Scientists (URSI), Demetrios Matsakis, the United States Naval Observatory's Chief Scientist for Time Services, presented the reasoning in favor of the redefinition and rebuttals to the arguments made against it. He stressed the practical inability of software programmers to allow for the fact that leap seconds make time appear to go backwards, particularly when most of them do not even know that leap seconds exist. The possibility of leap seconds being a hazard to navigation was presented, as well as the observed effects on commerce.
The United States formulated its position on this matter based upon the advice of the National Telecommunications and Information Administration and the Federal Communications Commission (FCC), which solicited comments from the general public. This position is in favor of the redefinition.
In 2011, Chunhao Han of the Beijing Global Information Center of Application and Exploration said China had not decided what its vote would be in January 2012, but some Chinese scholars consider it important to maintain a link between civil and astronomical time due to Chinese tradition. The 2012 vote was ultimately deferred. At an ITU/BIPM-sponsored workshop on the leap second, Han expressed his personal view in favor of abolishing the leap second, and similar support for the redefinition was again expressed by Han, along with other Chinese timekeeping scientists, at the URSI General Assembly in 2014.
At a special session of the Asia-Pacific Telecommunity meeting on 10 February 2015, Chunhao Han indicated China was now supporting the elimination of future leap seconds, as were all the other presenting national representatives (from Australia, Japan, and the Republic of Korea). At this meeting, Bruce Warrington (NMI, Australia) and Tsukasa Iwama (NICT, Japan) indicated particular concern for the financial markets due to the leap second occurring in the middle of a workday in their part of the world. Subsequent to the CPM15-2 meeting in March/April 2015 the draft gives four methods which the WRC-15 might use to satisfy Resolution 653 from WRC-12.
Arguments against the proposal include the unknown expense of such a major change and the fact that universal time will no longer correspond to mean solar time. It is also answered that two timescales that do not follow leap seconds are already available, International Atomic Time (TAI) and Global Positioning System (GPS) time. Computers, for example, could use these and convert to UTC or local civil time as necessary for output. Inexpensive GPS timing receivers are readily available, and the satellite broadcasts include the necessary information to convert GPS time to UTC. It is also easy to convert GPS time to TAI, as TAI is always exactly 19 seconds ahead of GPS time. Examples of systems based on GPS time include the CDMA digital cellular systems IS-95 and CDMA2000. In general, computer systems use UTC and synchronize their clocks using Network Time Protocol (NTP). Systems that cannot tolerate disruptions caused by leap seconds can base their time on TAI and use Precision Time Protocol. However, the BIPM has pointed out that this proliferation of timescales leads to confusion.
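As a rough illustration of how such fixed-offset conversions look in practice, the following Python sketch (the function names and the hard-coded leap-second count are illustrative, not an authoritative implementation) converts a GPS-time second count to TAI and to UTC:

```python
# Minimal sketch: converting between GPS time, TAI and UTC second counts.
# TAI is always exactly 19 seconds ahead of GPS time; the TAI-UTC offset,
# by contrast, changes whenever a leap second is applied, so a real system
# must read it from an up-to-date leap-second table rather than hard-code it.

TAI_MINUS_GPS = 19   # fixed by the definition of GPS time
TAI_MINUS_UTC = 37   # value in effect since the leap second of 31 December 2016

def gps_to_tai(gps_seconds: float) -> float:
    """Convert a GPS-time second count to the equivalent TAI second count."""
    return gps_seconds + TAI_MINUS_GPS

def gps_to_utc(gps_seconds: float) -> float:
    """Convert GPS time to UTC using the current TAI-UTC offset.

    Inputs and outputs are plain second counts on a common epoch; real
    receivers must also handle the differing epochs of the GPS and civil
    time scales.
    """
    return gps_to_tai(gps_seconds) - TAI_MINUS_UTC
```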
At the 47th meeting of the Civil Global Positioning System Service Interface Committee in Fort Worth, Texas, in September 2007, it was announced that a mailed vote would go out on stopping leap seconds. The plan for the vote was:
April 2008: ITU Working Party 7A will submit to ITU Study Group 7 a project recommendation on stopping leap seconds
During 2008, Study Group 7 will conduct a vote through mail among member states
October 2011: The ITU-R released its status paper, Status of Coordinated Universal Time (UTC) study in ITU-R, in preparation for the January 2012 meeting in Geneva; the paper reported that, to date, in response to the UN agency's 2010 and 2011 web-based surveys requesting input on the topic, it had received 16 responses from the 192 Member States with "13 being in favor of change, 3 being contrary."
January 2012: The ITU makes a decision.
In January 2012, rather than decide yes or no per this plan, the ITU decided to postpone a decision on leap seconds to the World Radiocommunication Conference in November 2015. At this conference, it was again decided to continue using leap seconds, pending further study and consideration at the next conference in 2023.
In October 2014, Włodzimierz Lewandowski, chair of the timing subcommittee of the Civil GPS Interface Service Committee and a member of the ESA Navigation Program Board, presented a CGSIC-endorsed resolution to the ITU that supported the redefinition and described leap seconds as a "hazard to navigation".
Some of the objections to the proposed change have been addressed by its supporters. For example, Felicitas Arias, who, as Director of the International Bureau of Weights and Measures (BIPM)'s Time, Frequency, and Gravimetry Department, was responsible for generating UTC, noted in a press release that the drift of about one minute every 60–90 years could be compared to the 16-minute annual variation between true solar time and mean solar time, the one hour offset by use of daylight time, and the several-hours offset in certain geographically extra-large time zones.
Proposed alternatives to the leap second are the leap hour, which requires changes only once every few centuries; and the leap minute, with changes coming every half-century.
On 18 November 2022, the General Conference on Weights and Measures (CGPM) resolved to eliminate leap seconds by or before 2035. The difference between atomic and astronomical time will be allowed to grow to a larger value yet to be determined. A suggested possible future measure would be to let the discrepancy increase to a full minute, which would take 50 to 100 years, and then have the last minute of the day last two minutes in a "kind of smear" with no discontinuity. The year 2035 was chosen partly in consideration of Russia's request to extend the timeline to 2040, since, unlike the United States' global navigation satellite system GPS, which does not adjust its time with leap seconds, Russia's GLONASS system does.
The ITU World Radiocommunication Conference 2023 (WRC-23), held in Dubai (United Arab Emirates) from 20 November to 15 December 2023, formally recognized Resolution 4 of the 27th CGPM (2022), which decided that the maximum value for the difference (UT1 − UTC) will be increased in, or before, 2035.
Issues created by insertion (or removal) of leap seconds
Calculation of time differences and sequence of events
Computing the elapsed time in seconds between two given UTC dates requires consulting a table of leap seconds, which must be updated whenever a new leap second is announced. Since leap seconds are known only about six months in advance, time intervals for UTC dates further in the future cannot be computed exactly.
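A minimal sketch of such a computation in Python, assuming a hand-maintained table that records the cumulative leap-second count in effect from each insertion date (the table entries and starting count below are truncated and purely illustrative):

```python
from datetime import datetime, timezone

# Cumulative number of leap seconds in effect from each insertion date
# (illustrative and truncated; a production system must load the current
# IERS leap-second list instead of hard-coding a few entries).
LEAP_TABLE = [
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 26),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 27),
]

def leap_count(t: datetime) -> int:
    """Cumulative leap seconds applied to UTC up to instant t."""
    count = 25  # illustrative count before the first table entry
    for effective, total in LEAP_TABLE:
        if t >= effective:
            count = total
    return count

def elapsed_seconds(t1: datetime, t2: datetime) -> float:
    """True elapsed seconds between two UTC instants, leap seconds included."""
    # Naive subtraction assumes every day has exactly 86,400 seconds;
    # the correction adds back leap seconds inserted between t1 and t2.
    return (t2 - t1).total_seconds() + (leap_count(t2) - leap_count(t1))
```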
Missing leap seconds announcement
Although the IERS announces a leap second about six months in advance, most time distribution systems (SNTP, IRIG-B, PTP) announce leap seconds at most 12 hours in advance, sometimes only in the last minute, and some not at all (DNP3).
Implementation differences
Not all clocks implement leap seconds in the same manner. Leap seconds in Unix time are commonly implemented by repeating second 23:59:59, or by adding the time-stamp 23:59:60. The Network Time Protocol (NTP) freezes time during the leap second, and some time servers declare an "alarm condition". Other schemes smear time in the vicinity of a leap second, spreading the extra second out over a longer period. This aims to avoid any negative effects of a substantial (by modern standards) step in time, but it has led to differences between systems, as leap smearing is not standardized and several different schemes are used in practice.
Textual representation of the leap second
The textual representation of a leap second is defined by the BIPM as "23:59:60". Programs that are not familiar with this format may report an error when dealing with such input.
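Many standard date-time libraries reject second 60 outright. A brief Python illustration of this failure mode (the chosen date is simply the most recent leap second):

```python
from datetime import datetime

# Python's datetime type only accepts seconds 0-59, so the official textual
# representation of a leap second cannot be constructed directly.
try:
    t = datetime(2016, 12, 31, 23, 59, 60)
except ValueError as err:
    # A program parsing "2016-12-31 23:59:60" into this type must either
    # special-case the leap second or report an error, as described above.
    print("rejected:", err)
```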
Binary representation of the leap second
Most computer operating systems and most time distribution systems represent time with a binary counter indicating the number of seconds elapsed since an arbitrary epoch; for instance, since 00:00:00 on 1 January 1970 in POSIX machines or since 00:00:00 on 1 January 1900 in NTP. This counter does not count positive leap seconds and has no indicator that a leap second has been inserted, so two seconds in sequence will have the same counter value. Some computer operating systems, in particular Linux, assign to the leap second the counter value of the preceding 23:59:59 second (so that counter value is repeated), while other computers (and the IRIG-B time distribution) assign to the leap second the counter value of the next 00:00:00 second (so that value is repeated instead). Since there is no standard governing this choice, timestamps of values sampled at exactly the same time can vary by one second. This may explain flaws in time-critical systems that rely on timestamped values.
Other reported software problems associated with the leap second
Several models of global navigation satellite receivers have software flaws associated with leap seconds:
Some older versions of Motorola Oncore VP, UT, GT, and M12 GPS receivers had a software bug that would cause a single timestamp to be off by a day if no leap second was scheduled for 256 weeks. On 28 November 2003, this happened. At midnight, the receivers with this firmware reported 29 November 2003, for one second and then reverted to 28 November 2003.
Older Trimble GPS receivers had a software flaw that would insert a leap second immediately after the GPS constellation started broadcasting the next leap second insertion time (some months in advance of the actual leap second), rather than waiting for the next leap second to happen. This left the receiver's time off by a second in the interim.
Older Datum Tymeserve 2100 GPS receivers and Symmetricom Tymeserve 2100 receivers apply a leap second as soon as a leap second notification is received, instead of waiting for the correct date. The manufacturer no longer supports these models and no corrected software is available. A workaround has been described and tested, but if the GPS system rebroadcasts the announcement, or the unit is powered off, the problem will occur again.
Four different brands of navigational receivers that use data from BeiDou satellites were found to implement leap seconds one day early. This was traced to a bug related to how the BeiDou protocol numbers the days of the week.
Several software vendors have distributed software that has not properly functioned with the concept of leap seconds:
NTP specifies a flag to inform the receiver that a leap second is imminent. However, some NTP server implementations have failed to set their leap second flag correctly. Some NTP servers have responded with the wrong time for up to a day after a leap second insertion.
A number of organizations reported problems caused by flawed software following the leap second that occurred on 30 June 2012. Among the sites which reported problems were Reddit (Apache Cassandra), Mozilla (Hadoop), Qantas, and various sites running Linux.
Despite the publicity given to the 2015 leap second, a small number of network failures occurred due to leap-second-related software errors in some routers. Several older versions of the Cisco Systems Nexus 5000 Series operating system NX-OS (versions 5.0, 5.1, 5.2) are affected by leap second bugs.
Some businesses and service providers have been impacted by leap-second related software bugs:
In 2015, interruptions occurred with Twitter, Instagram, Pinterest, Netflix, Amazon, and Apple's music streaming service Beats 1.
Leap second software bugs in Linux reportedly affected the Amadeus Altéa airlines reservation system, used by Qantas and Virgin Australia, in 2015.
Cloudflare was affected by a leap second software bug. Its DNS resolver implementation incorrectly calculated a negative number when subtracting two timestamps obtained from the Go programming language's time.Now() function, which at the time used only a real-time clock source. This could have been avoided by using a monotonic clock source, which has since been added to Go 1.9.
The Intercontinental Exchange, parent body to 7 clearing houses and 11 stock exchanges including the New York Stock Exchange, chose to cease operations for 61 minutes at the time of the 30 June 2015, leap second.
There were misplaced concerns that farming equipment using GPS navigation during harvests occurring on 31 December 2016, would be affected by the 2016 leap second. GPS navigation makes use of GPS time, which is not impacted by the leap second.
Due to a software error, the UTC time broadcast by the NavStar GPS system was incorrect by about 13 microseconds on 25–26 January 2016.
Workarounds for leap second problems
The most obvious workaround is to use the TAI scale for all operational purposes and convert to UTC for human-readable text. UTC can always be derived from TAI with a suitable table of leap seconds. The Society of Motion Picture and Television Engineers (SMPTE) video/audio industry standards body selected TAI for deriving timestamps of media.
IEC/IEEE 60802 (Time sensitive networks) specifies TAI for all operations. Grid automation is planning to switch to TAI for global distribution of events in electrical grids. Bluetooth mesh networking also uses TAI.
Instead of inserting a leap second at the end of the day, Google servers implement a "leap smear", extending seconds slightly over a 24-hour period centered on the leap second. Amazon followed a similar, but slightly different, pattern for the introduction of the 30 June 2015, leap second, leading to another case of the proliferation of timescales. They later released an NTP service for EC2 instances which performs leap smearing. UTC-SLS was proposed as a version of UTC with linear leap smearing, but it never became standard.
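A minimal sketch of a linear 24-hour smear of the kind described above (the window length, centering, and function name are illustrative choices; deployed smears differ in detail):

```python
def smear_fraction(seconds_from_leap: float, window: float = 86_400.0) -> float:
    """Fraction of the inserted leap second already absorbed by a linear smear.

    `seconds_from_leap` is time relative to the leap second (negative before
    it); the smear window is centered on the leap second itself.
    """
    half = window / 2.0
    if seconds_from_leap <= -half:
        return 0.0   # smear has not started
    if seconds_from_leap >= half:
        return 1.0   # the whole extra second has been absorbed
    # Spread the single inserted second linearly across the window.
    return (seconds_from_leap + half) / window
```

A time server applying such a smear subtracts this slowly growing fraction from an atomic-style seconds count, so the time it serves slows down smoothly instead of repeating a second or showing 23:59:60.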
It has been proposed that media clients using the Real-time Transport Protocol inhibit generation or use of NTP timestamps during the leap second and the second preceding it.
NIST has established a special NTP time server to deliver UT1 instead of UTC. Such a server would be particularly useful in the event the ITU resolution passes and leap seconds are no longer inserted. Those astronomical observatories and other users that require UT1 could run off UT1 – although in many cases these users already download UT1-UTC from the IERS, and apply corrections in software.
| Technology | Timekeeping | null |
18531 | https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s%20rule | L'Hôpital's rule | L'Hôpital's rule or L'Hospital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to de l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
L'Hôpital's rule states that for functions $f$ and $g$ which are defined on an open interval $I$ and differentiable on $I\setminus\{c\}$ for a (possibly infinite) accumulation point $c$ of $I$, if $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$ or $\pm\infty$, and $g'(x)\neq 0$ for all $x$ in $I\setminus\{c\}$, and $\lim_{x\to c}\frac{f'(x)}{g'(x)}$ exists, then
$$\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}.$$
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be directly evaluated by continuity.
History
Guillaume de l'Hôpital (also written l'Hospital) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus. However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli.
General form
The general form of L'Hôpital's rule covers many cases. Let $c$ and $L$ be extended real numbers: real numbers, or positive or negative infinity. Let $I$ be an open interval containing $c$ (for a two-sided limit) or an open interval with endpoint $c$ (for a one-sided limit, or a limit at infinity if $c$ is infinite). On $I\setminus\{c\}$, the real-valued functions $f$ and $g$ are assumed differentiable with $g'(x)\neq 0$. It is also assumed that $\lim_{x\to c}\frac{f'(x)}{g'(x)}=L$, a finite or infinite limit.
If either
$$\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0 \qquad\text{or}\qquad \lim_{x\to c}|g(x)|=\infty,$$
then
$$\lim_{x\to c}\frac{f(x)}{g(x)}=L.$$
Although we have written $x\to c$ throughout, the limits may also be one-sided limits ($x\to c^{+}$ or $x\to c^{-}$), when $c$ is a finite endpoint of $I$.
In the second case, the hypothesis that $f$ diverges to infinity is not necessary; in fact, it is sufficient that $\lim_{x\to c}|g(x)|=\infty$.
The hypothesis that $g'(x)\neq 0$ on $I\setminus\{c\}$ appears most commonly in the literature, but some authors sidestep this hypothesis by adding other hypotheses which imply it. For example, one may require in the definition of the limit that the function $\frac{f'(x)}{g'(x)}$ must be defined everywhere on an interval $I\setminus\{c\}$. Another method is to require that both $f$ and $g$ be differentiable everywhere on an interval containing $c$.
Necessity of conditions: Counterexamples
All four conditions for L'Hôpital's rule are necessary:
Indeterminacy of form: $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$ or $\pm\infty$;
Differentiability of functions: $f$ and $g$ are differentiable on an open interval $I$ except possibly at the limit point $c$ in $I$;
Non-zero derivative of denominator: $g'(x)\neq 0$ for all $x$ in $I$ with $x\neq c$;
Existence of limit of the quotient of the derivatives: $\lim_{x\to c}\frac{f'(x)}{g'(x)}$ exists.
Where one of the above conditions is not satisfied, L'Hôpital's rule is not valid in general, and its conclusion may be false in certain cases.
1. Form is not indeterminate
The necessity of the first condition can be seen by considering the counterexample where the functions are and and the limit is .
The first condition is not satisfied for this counterexample because and . This means that the form is not indeterminate.
The second and third conditions are satisfied by and . The fourth condition is also satisfied with
But the conclusion fails, since
2. Differentiability of functions
Differentiability of functions is a requirement because if a function is not differentiable, then its derivative is not guaranteed to exist at each point in $I$. The fact that $I$ is an open interval is carried over from the hypothesis of Cauchy's mean value theorem. The notable exception, the possibility of the functions being non-differentiable at $c$, exists because L'Hôpital's rule only requires the derivative to exist as the function approaches $c$; the derivative does not need to be taken at $c$ itself.
For example, let , , and . In this case, is not differentiable at . However, since is differentiable everywhere except , then still exists. Thus, since
and exists, L'Hôpital's rule still holds.
3. Derivative of denominator is zero
The necessity of the condition that near can be seen by the following counterexample due to Otto Stolz. Let and Then there is no limit for as However,
which tends to 0 as , although it is undefined at infinitely many points. Further examples of this type were found by Ralph P. Boas Jr.
4. Limit of derivatives does not exist
The requirement that the limit $\lim_{x\to c}\frac{f'(x)}{g'(x)}$ exists is essential; if it does not exist, the other limit $\lim_{x\to c}\frac{f(x)}{g(x)}$ may nevertheless exist. Indeed, as $x$ approaches $c$, the functions $f$ or $g$ may exhibit many oscillations of small amplitude but steep slope, which do not affect $\lim_{x\to c}\frac{f(x)}{g(x)}$ but do prevent the convergence of $\frac{f'(x)}{g'(x)}$.
For example, if $f(x)=x+\sin x$ and $g(x)=x$, then $\frac{f'(x)}{g'(x)}=1+\cos x$, which does not approach a limit since cosine oscillates infinitely between $-1$ and $1$. But the ratio of the original functions does approach a limit, since the amplitude of the oscillations of $\sin x$ becomes small relative to $x$:
$$\lim_{x\to\infty}\frac{f(x)}{g(x)}=\lim_{x\to\infty}\frac{x+\sin x}{x}=\lim_{x\to\infty}\left(1+\frac{\sin x}{x}\right)=1.$$
In a case such as this, all that can be concluded is that
$$\liminf_{x\to c}\frac{f'(x)}{g'(x)}\;\le\;\liminf_{x\to c}\frac{f(x)}{g(x)}\;\le\;\limsup_{x\to c}\frac{f(x)}{g(x)}\;\le\;\limsup_{x\to c}\frac{f'(x)}{g'(x)},$$
so that if the limit of $\frac{f(x)}{g(x)}$ exists, then it must lie between the inferior and superior limits of $\frac{f'(x)}{g'(x)}$. (In the example, 1 does indeed lie between 0 and 2.)
Note also that by the contrapositive form of the rule, if $\lim_{x\to c}\frac{f(x)}{g(x)}$ does not exist, then $\lim_{x\to c}\frac{f'(x)}{g'(x)}$ also does not exist.
Examples
In the following computations, we indicate each application of L'Hôpital's rule by the symbol $\overset{\mathrm{H}}{=}$.
Here is a basic example involving the exponential function and the indeterminate form $\tfrac{0}{0}$ at $x=0$.
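One representative limit of this type (the particular quotient is an illustrative choice) is
$$\lim_{x\to 0}\frac{e^{x}-1}{x}\;\overset{\mathrm{H}}{=}\;\lim_{x\to 0}\frac{e^{x}}{1}=1.$$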
This is a more elaborate example: applying L'Hôpital's rule a single time may still result in an indeterminate form, and in such a case the limit may be evaluated by applying the rule three times.
Here is an example involving a power and the exponential function: repeatedly apply L'Hôpital's rule until the exponent of the power is zero (if it is an integer) or negative (if it is fractional) to conclude that the limit is zero.
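For instance, for a power $x^{n}$ with $n$ a positive integer (an illustrative instance of this pattern),
$$\lim_{x\to\infty}\frac{x^{n}}{e^{x}}\;\overset{\mathrm{H}}{=}\;\lim_{x\to\infty}\frac{nx^{n-1}}{e^{x}}\;\overset{\mathrm{H}}{=}\;\cdots\;\overset{\mathrm{H}}{=}\;\lim_{x\to\infty}\frac{n!}{e^{x}}=0.$$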
Here is an example involving the indeterminate form $0\cdot\infty$ (see below), which is rewritten as the form $\tfrac{\infty}{\infty}$.
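A representative limit of this kind (the particular product is an illustrative choice) is
$$\lim_{x\to 0^{+}}x\ln x=\lim_{x\to 0^{+}}\frac{\ln x}{1/x}\;\overset{\mathrm{H}}{=}\;\lim_{x\to 0^{+}}\frac{1/x}{-1/x^{2}}=\lim_{x\to 0^{+}}(-x)=0.$$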
Here is an example involving the mortgage repayment formula and the indeterminate form $\tfrac{0}{0}$. Let $P$ be the principal (loan amount), $r$ the interest rate per period and $n$ the number of periods. When $r$ is zero, the repayment amount per period is $\frac{P}{n}$ (since only principal is being repaid); this is consistent with the formula for non-zero interest rates.
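Assuming the standard annuity payment formula $A=P\,\frac{r(1+r)^{n}}{(1+r)^{n}-1}$ for the repayment per period, the $r\to 0$ limit is the indeterminate form $\tfrac{0}{0}$, and one application of the rule (differentiating with respect to $r$) recovers the zero-interest value:
$$\lim_{r\to 0}P\,\frac{r(1+r)^{n}}{(1+r)^{n}-1}\;\overset{\mathrm{H}}{=}\;\lim_{r\to 0}P\,\frac{(1+r)^{n}+rn(1+r)^{n-1}}{n(1+r)^{n-1}}=\frac{P}{n}.$$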
One can also use L'Hôpital's rule to prove the following theorem. If $f$ is twice-differentiable in a neighborhood of $x$ and its second derivative is continuous on this neighborhood, then the second derivative at $x$ can be recovered from a symmetric second difference quotient.
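Concretely, with $h$ denoting the increment, the statement is
$$f''(x)=\lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^{2}},$$
which follows by applying the rule twice with respect to $h$ to the $\tfrac{0}{0}$ quotient on the right and then using the continuity of $f''$.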
Sometimes L'Hôpital's rule is invoked in a tricky way: suppose $f(x)+f'(x)$ converges as $x\to\infty$ and that $e^{x}f(x)$ converges to positive or negative infinity. Then
$$\lim_{x\to\infty}f(x)=\lim_{x\to\infty}\frac{e^{x}f(x)}{e^{x}}\;\overset{\mathrm{H}}{=}\;\lim_{x\to\infty}\frac{e^{x}\left(f(x)+f'(x)\right)}{e^{x}}=\lim_{x\to\infty}\left(f(x)+f'(x)\right),$$
and so $\lim_{x\to\infty}f(x)$ exists and equals $\lim_{x\to\infty}\left(f(x)+f'(x)\right)$. (This result remains true without the added hypothesis that $e^{x}f(x)$ converges to positive or negative infinity, but the justification is then incomplete.)
Complications
Sometimes L'Hôpital's rule does not reduce to an obvious limit in a finite number of steps, unless some intermediate simplifications are applied. Examples include the following:
Two applications can lead to a return to the original expression that was to be evaluated: This situation can be dealt with by substituting and noting that goes to infinity as goes to infinity; with this substitution, this problem can be solved with a single application of the rule: Alternatively, the numerator and denominator can both be multiplied by at which point L'Hôpital's rule can immediately be applied successfully:
An arbitrarily large number of applications may never lead to an answer even without repeating:This situation too can be dealt with by a transformation of variables, in this case : Again, an alternative approach is to multiply numerator and denominator by before applying L'Hôpital's rule:
A common logical fallacy is to use L'Hôpital's rule to prove the value of a derivative by computing the limit of a difference quotient. Since applying the rule requires knowing the relevant derivatives, this amounts to circular reasoning or begging the question, assuming what is to be proved. For example, consider the proof of the derivative formula for powers of $x$ from the difference quotient $\lim_{h\to 0}\frac{(x+h)^{n}-x^{n}}{h}$: applying L'Hôpital's rule and finding the derivatives with respect to $h$ yields $nx^{n-1}$ as expected, but this computation requires the use of the very formula that is being proven. Similarly, to prove $\lim_{x\to 0}\frac{\sin x}{x}=1$, applying L'Hôpital requires knowing the derivative of $\sin x$ at $x=0$, which amounts to calculating $\lim_{x\to 0}\frac{\sin x}{x}$ in the first place; a valid proof requires a different method such as the squeeze theorem.
Other indeterminate forms
Other indeterminate forms, such as $1^{\infty}$, $0^{0}$, $\infty^{0}$, $0\cdot\infty$, and $\infty-\infty$, can sometimes be evaluated using L'Hôpital's rule. We again indicate applications of L'Hôpital's rule by $\overset{\mathrm{H}}{=}$.
For example, to evaluate a limit involving $\infty-\infty$, convert the difference of two functions to a quotient.
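One representative limit of this form (the particular functions are an illustrative choice) is
$$\lim_{x\to 0^{+}}\left(\frac{1}{x}-\frac{1}{e^{x}-1}\right)=\lim_{x\to 0^{+}}\frac{e^{x}-1-x}{x\left(e^{x}-1\right)}\;\overset{\mathrm{H}}{=}\;\lim_{x\to 0^{+}}\frac{e^{x}-1}{e^{x}-1+xe^{x}}\;\overset{\mathrm{H}}{=}\;\lim_{x\to 0^{+}}\frac{e^{x}}{2e^{x}+xe^{x}}=\frac{1}{2}.$$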
L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down": the expression is rewritten as the exponential of its logarithm, so that the limit of the exponent can be taken first.
It is valid to move the limit inside the exponential function because that function is continuous. Once the exponent has been "moved down", the resulting limit is of the indeterminate form $0\cdot\infty$ dealt with in an example above, and L'Hôpital's rule may be used to evaluate it; the value of the original limit is then the exponential of that result.
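A representative limit of this kind, using the indeterminate form $0^{0}$ (the particular base and exponent are an illustrative choice), is
$$\lim_{x\to 0^{+}}x^{x}=\lim_{x\to 0^{+}}e^{x\ln x}=e^{\lim_{x\to 0^{+}}x\ln x}=e^{0}=1,$$
where the exponent limit $\lim_{x\to 0^{+}}x\ln x=0$ is the $0\cdot\infty$ example evaluated above.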
The following table lists the most common indeterminate forms and the transformations which precede applying l'Hôpital's rule:
Stolz–Cesàro theorem
The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives.
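In one common formulation, for real sequences $(a_{n})$ and $(b_{n})$ with $(b_{n})$ strictly increasing and unbounded, the theorem states that
$$\lim_{n\to\infty}\frac{a_{n+1}-a_{n}}{b_{n+1}-b_{n}}=L \quad\Longrightarrow\quad \lim_{n\to\infty}\frac{a_{n}}{b_{n}}=L.$$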
Geometric interpretation: parametric curve and velocity vector
Consider the parametric curve in the $xy$-plane with coordinates given by the continuous functions $g(t)$ and $f(t)$, the locus of points $(g(t),f(t))$, and suppose $f(c)=g(c)=0$. The slope of the tangent to the curve at the origin is the limit of the ratio $\frac{f(t)}{g(t)}$ as $t\to c$. The tangent to the curve at the point $(g(t),f(t))$ is the velocity vector $\left(g'(t),f'(t)\right)$, with slope $\frac{f'(t)}{g'(t)}$. L'Hôpital's rule then states that the slope of the curve at the origin ($\lim_{t\to c}\frac{f(t)}{g(t)}$) is the limit of the tangent slope at points approaching the origin, provided that this limit is defined.
Proof of L'Hôpital's rule
Special case
The proof of L'Hôpital's rule is simple in the case where and are continuously differentiable at the point and where a finite limit is found after the first round of differentiation. This is only a special case of L'Hôpital's rule, because it only applies to functions satisfying stronger conditions than required by the general rule. However, many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), so this special case covers most applications.
Suppose that $f$ and $g$ are continuously differentiable at a real number $c$, that $f(c)=g(c)=0$, and that $g'(c)\neq 0$. Then
$$\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f(x)-f(c)}{g(x)-g(c)}=\lim_{x\to c}\frac{\;\frac{f(x)-f(c)}{x-c}\;}{\;\frac{g(x)-g(c)}{x-c}\;}=\frac{f'(c)}{g'(c)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}.$$
This follows from the difference-quotient definition of the derivative. The last equality follows from the continuity of the derivatives at $c$. The limit in the conclusion is not indeterminate because $g'(c)\neq 0$.
The proof of a more general version of L'Hôpital's rule is given below.
General proof
The following proof is due to Taylor, where a unified proof for the $\tfrac{0}{0}$ and $\tfrac{\pm\infty}{\infty}$ indeterminate forms is given. Taylor notes that different proofs may be found elsewhere in the literature.
Let $f$ and $g$ be functions satisfying the hypotheses in the General form section. Let $\mathcal{I}$ be the open interval in the hypothesis with endpoint $c$. Considering that $g'(x)\neq 0$ on this interval and that $g$ is continuous, $\mathcal{I}$ can be chosen smaller so that $g$ is nonzero on $\mathcal{I}$.
For each $x$ in the interval, define $m(x)=\inf\frac{f'(\xi)}{g'(\xi)}$ and $M(x)=\sup\frac{f'(\xi)}{g'(\xi)}$ as $\xi$ ranges over all values between $x$ and $c$. (The symbols inf and sup denote the infimum and supremum.)
From the differentiability of $f$ and $g$ on $\mathcal{I}$, Cauchy's mean value theorem ensures that for any two distinct points $x$ and $y$ in $\mathcal{I}$ there exists a $\xi$ between $x$ and $y$ such that $\frac{f(x)-f(y)}{g(x)-g(y)}=\frac{f'(\xi)}{g'(\xi)}$. Consequently, $m(x)\le\frac{f(x)-f(y)}{g(x)-g(y)}\le M(x)$ for all choices of distinct $x$ and $y$ in the interval. The value $g(x)-g(y)$ is always nonzero for distinct $x$ and $y$ in the interval, for if it were not, the mean value theorem would imply the existence of a $p$ between $x$ and $y$ such that $g'(p)=0$.
The definition of $m(x)$ and $M(x)$ will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, $m(x)$ and $M(x)$ will establish bounds on the ratio $\frac{f(x)}{g(x)}$.
Case 1: $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$
For any $x$ in the interval $\mathcal{I}$, and point $y$ between $x$ and $c$,
$$m(x)\le\frac{f(x)-f(y)}{g(x)-g(y)}\le M(x),$$
and therefore as $y$ approaches $c$, $f(y)$ and $g(y)$ become zero, and so
$$m(x)\le\frac{f(x)}{g(x)}\le M(x).$$
Case 2: $\lim_{x\to c}|g(x)|=\infty$
For every x in the interval , define . For every point y between x and c,
As y approaches c, both and become zero, and therefore
The limit superior and limit inferior are necessary since the existence of the limit of $\frac{f(x)}{g(x)}$ has not yet been established.
It is also the case that
and
and
In case 1, the squeeze theorem establishes that $\lim_{x\to c}\frac{f(x)}{g(x)}$ exists and is equal to $L$. In case 2, the squeeze theorem again asserts that $\liminf_{x\to c}\frac{f(x)}{g(x)}=\limsup_{x\to c}\frac{f(x)}{g(x)}=L$, and so the limit $\lim_{x\to c}\frac{f(x)}{g(x)}$ exists and is equal to $L$. This is the result that was to be proven.
In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of L'Hôpital's rule, then no additional assumption is needed about the limit of f(x): It could even be the case that the limit of f(x) does not exist. In this case, L'Hopital's theorem is actually a consequence of Cesàro–Stolz.
In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, then L'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero.
Corollary
A simple but very useful consequence of L'Hôpital's rule is that the derivative of a function cannot have a removable discontinuity. That is, suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some open interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x\to a}f'(x)$ exists. Then $f'(a)$ also exists and
$$f'(a)=\lim_{x\to a}f'(x).$$
In particular, $f'$ is also continuous at $a$.
Thus, if a function is not continuously differentiable near a point, the derivative must have an essential discontinuity at that point.
Proof
Consider the functions $f(x)-f(a)$ and $x-a$. The continuity of $f$ at $a$ tells us that $\lim_{x\to a}\left(f(x)-f(a)\right)=0$. Moreover, $\lim_{x\to a}(x-a)=0$, since a polynomial function is always continuous everywhere. Applying L'Hôpital's rule to the quotient of these functions shows that
$$f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}=\lim_{x\to a}\frac{f'(x)}{1}=\lim_{x\to a}f'(x).$$
| Mathematics | Differential calculus | null |
18539 | https://en.wikipedia.org/wiki/Leukemia | Leukemia | Leukemia (also spelled leukaemia; pronounced ) is a group of blood cancers that usually begin in the bone marrow and produce high numbers of abnormal blood cells. These blood cells are not fully developed and are called blasts or leukemia cells. Symptoms may include bleeding and bruising, bone pain, fatigue, fever, and an increased risk of infections. These symptoms occur due to a lack of normal blood cells. Diagnosis is typically made by blood tests or bone marrow biopsy.
The exact cause of leukemia is unknown. A combination of genetic factors and environmental (non-inherited) factors are believed to play a role. Risk factors include smoking, ionizing radiation, petrochemicals (such as benzene), prior chemotherapy, and Down syndrome. People with a family history of leukemia are also at higher risk. There are four main types of leukemia—acute lymphoblastic leukemia (ALL), acute myeloid leukemia (AML), chronic lymphocytic leukemia (CLL) and chronic myeloid leukemia (CML)—and a number of less common types. Leukemias and lymphomas both belong to a broader group of tumors that affect the blood, bone marrow, and lymphoid system, known as tumors of the hematopoietic and lymphoid tissues.
Treatment may involve some combination of chemotherapy, radiation therapy, targeted therapy, and bone marrow transplant, with supportive and palliative care provided as needed. Certain types of leukemia may be managed with watchful waiting. The success of treatment depends on the type of leukemia and the age of the person. Outcomes have improved in the developed world. Five-year survival rate was 67% in the United States in the period from 2014 to 2020. In children under 15 in first-world countries, the five-year survival rate is greater than 60% or even 90%, depending on the type of leukemia. In children who are cancer-free five years after diagnosis of acute leukemia, the cancer is unlikely to return.
In 2015, leukemia was present in 2.3 million people worldwide and caused 353,500 deaths. In 2012, it had newly developed in 352,000 people. It is the most common type of cancer in children, with three-quarters of leukemia cases in children being the acute lymphoblastic type. However, over 90% of all leukemias are diagnosed in adults, CLL and AML being most common. It occurs more commonly in the developed world.
Classification
General classification
Clinically and pathologically, leukemia is subdivided into a variety of large groups. The first division is between its acute and chronic forms:
Acute leukemia is characterized by a rapid increase in the number of immature blood cells. The crowding that results from such cells makes the bone marrow unable to produce healthy blood cells resulting in low hemoglobin and low platelets. Immediate treatment is required in acute leukemia because of the rapid progression and accumulation of the malignant cells, which then spill over into the bloodstream and spread to other organs of the body. Acute forms of leukemia are the most common forms of leukemia in children.
Chronic leukemia is characterized by the excessive buildup of relatively mature, but still abnormal, white blood cells (or, more rarely, red blood cells). Typically taking months or years to progress, the cells are produced at a much higher rate than normal, resulting in many abnormal white blood cells. Whereas acute leukemia must be treated immediately, chronic forms are sometimes monitored for some time before treatment to ensure maximum effectiveness of therapy. Chronic leukemia mostly occurs in older people but can occur in any age group.
Additionally, the diseases are subdivided according to which kind of blood cell is affected. This divides leukemias into lymphoblastic or lymphocytic leukemias and myeloid or myelogenous leukemias:
In lymphoblastic or lymphocytic leukemias, the cancerous change takes place in a type of marrow cell that normally goes on to form lymphocytes, which are infection-fighting immune system cells. Most lymphocytic leukemias involve a specific subtype of lymphocyte, the B cell.
In myeloid or myelogenous leukemias, the cancerous change takes place in a type of marrow cell that normally goes on to form red blood cells, some other types of white cells, and platelets.
Combining these two classifications provides a total of four main categories. Within each of these main categories, there are typically several subcategories. Finally, some rarer types are usually considered to be outside of this classification scheme.
Specific types
Acute lymphoblastic leukemia (ALL) is the most common type of leukemia in young children. It also affects adults, especially those 65 and older. Standard treatments involve chemotherapy and radiotherapy. Subtypes include precursor B acute lymphoblastic leukemia, precursor T acute lymphoblastic leukemia, Burkitt's leukemia, and acute biphenotypic leukemia. While most cases of ALL occur in children, 80% of deaths from ALL occur in adults.
Chronic lymphocytic leukemia (CLL) most often affects adults over the age of 55. It sometimes occurs in younger adults, but it almost never affects children. Two-thirds of affected people are men. The five-year survival rate is 85%. It is incurable, but there are many effective treatments. One subtype is B-cell prolymphocytic leukemia, a more aggressive disease.
Acute myelogenous leukemia (AML) occurs far more commonly in adults than in children, and more commonly in men than women. It is treated with chemotherapy. The five-year survival rate is 20%. Subtypes of AML include acute promyelocytic leukemia, acute myeloblastic leukemia, and acute megakaryoblastic leukemia.
Chronic myelogenous leukemia (CML) occurs mainly in adults; a very small number of children also develop this disease. It is treated with imatinib (Gleevec in United States, Glivec in Europe) or other drugs. The five-year survival rate is 90%. One subtype is chronic myelomonocytic leukemia.
Hairy cell leukemia (HCL) is sometimes considered a subset of chronic lymphocytic leukemia, but does not fit neatly into this category. About 80% of affected people are adult men. No cases in children have been reported. HCL is incurable but easily treatable. Survival is 96% to 100% at ten years.
T-cell prolymphocytic leukemia (T-PLL) is a very rare and aggressive leukemia affecting adults; somewhat more men than women are diagnosed with this disease. Despite its overall rarity, it is the most common type of mature T cell leukemia; nearly all other leukemias involve B cells. It is difficult to treat, and the median survival is measured in months.
Large granular lymphocytic leukemia may involve either T-cells or NK cells; like hairy cell leukemia, which involves solely B cells, it is a rare and indolent (not aggressive) leukemia.
Adult T-cell leukemia is caused by human T-lymphotropic virus (HTLV), a virus similar to HIV. Like HIV, HTLV infects CD4+ T-cells and replicates within them; however, unlike HIV, it does not destroy them. Instead, HTLV "immortalizes" the infected T-cells, giving them the ability to proliferate abnormally. Human T-cell lymphotropic virus types I and II (HTLV-I/II) are endemic in certain areas of the world.
Clonal eosinophilias (also called clonal hypereosinophilias) are a group of blood disorders characterized by the growth of eosinophils in the bone marrow, blood, and/or other tissues. They may be pre-cancerous or cancerous. Clonal eosinophilias involve a "clone" of eosinophils, i.e., a group of genetically identical eosinophils that all grew from the same mutated ancestor cell. These disorders may evolve into chronic eosinophilic leukemia or may be associated with various forms of myeloid neoplasms, lymphoid neoplasms, myelofibrosis, or the myelodysplastic syndrome.
Pre-leukemia
Transient myeloproliferative disease, also termed transient leukemia, involves the abnormal proliferation of a clone of non-cancerous megakaryoblasts. The disease is restricted to individuals with Down syndrome or genetic changes similar to those in Down syndrome, develops in a baby during pregnancy or shortly after birth, and resolves within 3 months or, in ~10% of cases, progresses to acute megakaryoblastic leukemia. Transient myeloid leukemia is a pre-leukemic condition.
Clonal hematopoiesis is a common age-related phenomenon with a low risk of progression to myelodysplastic syndrome (MDS) and leukemia. Once MDS has developed, the risk of progression to acute leukemia can be assessed using the International Prognostic Scoring System (IPSS).
Monoclonal B-cell lymphocytosis has a low risk of progression to B-cell leukemia.
Signs and symptoms
The most common symptoms in children are easy bruising, pale skin, fever, and an enlarged spleen or liver.
Damage to the bone marrow, by way of displacing the normal bone marrow cells with higher numbers of immature white blood cells, results in a lack of blood platelets, which are important in the blood clotting process. This means people with leukemia may easily become bruised, bleed excessively, or develop pinprick bleeds (petechiae).
White blood cells, which are involved in fighting pathogens, may be suppressed or dysfunctional. This could cause the person's immune system to be unable to fight off a simple infection or to start attacking other body cells. Because leukemia prevents the immune system from working normally, some people experience frequent infection, ranging from infected tonsils, sores in the mouth, or diarrhea to life-threatening pneumonia or opportunistic infections.
Finally, the red blood cell deficiency leads to anemia, which may cause dyspnea and pallor.
Some people experience other symptoms, such as fevers, chills, night sweats, weakness in the limbs, feeling fatigued and other common flu-like symptoms. Some people experience nausea or a feeling of fullness due to an enlarged liver and spleen; this can result in unintentional weight loss. Blasts affected by the disease may come together and become swollen in the liver or in the lymph nodes causing pain and leading to nausea.
If the leukemic cells invade the central nervous system, then neurological symptoms (notably headaches) can occur. Uncommon neurological symptoms like migraines, seizures, or coma can occur as a result of brain stem pressure. All symptoms associated with leukemia can be attributed to other diseases. Consequently, leukemia is always diagnosed through medical tests.
The word leukemia, which means 'white blood', is derived from the characteristic high white blood cell count that presents in most affected people before treatment. The high number of white blood cells is apparent when a blood sample is viewed under a microscope, with the extra white blood cells frequently being immature or dysfunctional. The excessive number of cells can also interfere with the level of other cells, causing further harmful imbalance in the blood count.
Some people diagnosed with leukemia do not have high white blood cell counts visible during a regular blood count. This less-common condition is called aleukemia. The bone marrow still contains cancerous white blood cells that disrupt the normal production of blood cells, but they remain in the marrow instead of entering the bloodstream, where they would be visible in a blood test. For a person with aleukemia, the white blood cell counts in the bloodstream can be normal or low. Aleukemia can occur in any of the four major types of leukemia, and is particularly common in hairy cell leukemia.
Causes
Studies in 2009 and 2010 have shown a positive correlation between exposure to formaldehyde and the development of leukemia, particularly myeloid leukemia. The different leukemias likely have different causes.
Leukemia, like other cancers, results from mutations in the DNA. Certain mutations can trigger leukemia by activating oncogenes or deactivating tumor suppressor genes, and thereby disrupting the regulation of cell death, differentiation or division. These mutations may occur spontaneously or as a result of exposure to radiation or carcinogenic substances.
Among adults, the known causes are natural and artificial ionizing radiation and petrochemicals, notably benzene and alkylating chemotherapy agents for previous malignancies. Use of tobacco is associated with a small increase in the risk of developing acute myeloid leukemia in adults. Cohort and case-control studies have linked exposure to some petrochemicals and hair dyes to the development of some forms of leukemia. Diet has very limited or no effect, although eating more vegetables may confer a small protective benefit.
Viruses have also been linked to some forms of leukemia. For example, human T-lymphotropic virus (HTLV-1) causes adult T-cell leukemia.
A few cases of maternal-fetal transmission (a baby acquires leukemia because its mother had leukemia during the pregnancy) have been reported. Children born to mothers who use fertility drugs to induce ovulation are more than twice as likely to develop leukemia during their childhoods than other children.
In a recent systematic review and meta-analysis of any type of leukemia in neonates using phototherapy, typically to treat neonatal jaundice, a statistically significant association was detected between using phototherapy and myeloid leukemia. However, it is still questionable whether phototherapy is genuinely the cause of cancer or simply a result of the same underlying factors that gave rise to cancer.
Radiation
Large doses of Sr-90 (called a bone-seeking radioisotope) from nuclear reactor accidents increase the risk of bone cancer and leukemia in animals and are presumed to do so in people.
Genetic conditions
Some people have a genetic predisposition towards developing leukemia. This predisposition is demonstrated by family histories and twin studies. The affected people may have a single gene or multiple genes in common. In some cases, families tend to develop the same kinds of leukemia as other members; in other families, affected people may develop different forms of leukemia or related blood cancers.
In addition to these genetic issues, people with chromosomal abnormalities or certain other genetic conditions have a greater risk of leukemia. For example, people with Down syndrome have a significantly increased risk of developing forms of acute leukemia (especially acute myeloid leukemia), and Fanconi anemia is a risk factor for developing acute myeloid leukemia. Mutation in SPRED1 gene has been associated with a predisposition to childhood leukemia.
Inherited bone marrow failure syndromes represent a kind of premature aging of the bone marrow. In people with these syndromes and in older adults, mutations associated with clonal hematopoiesis may arise as an adaptive response to a progressively deteriorating hematopoietic niche, i.e., a depleting pool of hematopoietic stem cells. The mutated stem cells then acquire a self-renewal advantage.
Chronic myelogenous leukemia is associated with a genetic abnormality called the Philadelphia translocation; 95% of people with CML carry the Philadelphia mutation, although this is not exclusive to CML and can be observed in people with other types of leukemia.
Non-ionizing radiation
Whether or not non-ionizing radiation causes leukemia has been studied for several decades. The International Agency for Research on Cancer expert working group undertook a detailed review of all data on static and extremely low frequency electromagnetic energy, which occurs naturally and in association with the generation, transmission, and use of electrical power. They concluded that there is limited evidence that high levels of ELF magnetic (but not electric) fields might cause some cases of childhood leukemia. No evidence for a relationship to leukemia or another form of malignancy in adults has been demonstrated. Since exposure to such levels of ELFs is relatively uncommon, the World Health Organization concludes that ELF exposure, if later proven to be causative, would account for just 100 to 2400 cases worldwide each year, representing 0.2 to 4.9% of the total incidence of childhood leukemia for that year (about 0.03 to 0.9% of all leukemias).
Diagnosis
Diagnosis is usually based on repeated complete blood counts and a bone marrow examination following observations of the symptoms. Sometimes, blood tests may not show that a person has leukemia, especially in the early stages of the disease or during remission. A lymph node biopsy can be performed to diagnose certain types of leukemia in certain situations.
Following diagnosis, blood chemistry tests can be used to determine the degree of liver and kidney damage or the effects of chemotherapy on the person. When concerns arise about other damages due to leukemia, doctors may use an X-ray, MRI, or ultrasound. These can potentially show leukemia's effects on such body parts as bones (X-ray), the brain (MRI), or the kidneys, spleen, and liver (ultrasound). CT scans can be used to check lymph nodes in the chest, though this is uncommon.
Despite the use of these methods to diagnose whether or not a person has leukemia, many people have not been diagnosed because many of the symptoms are vague, non-specific, and can refer to other diseases. For this reason, the American Cancer Society estimates that at least one-fifth of the people with leukemia have not yet been diagnosed.
Treatment
Most forms of leukemia are treated with pharmaceutical medication, typically combined into a multi-drug chemotherapy regimen. Some are also treated with radiation therapy. In some cases, a bone marrow transplant is effective.
Acute lymphoblastic
Management of ALL is directed towards control of bone marrow and systemic (whole-body) disease. Additionally, treatment must prevent leukemic cells from spreading to other sites, particularly the central nervous system (CNS); periodic lumbar punctures are used for diagnostic purposes and to administer intrathecal prophylactic methotrexate. In general, ALL treatment is divided into several phases:
Induction chemotherapy to bring about bone marrow remission. For adults, standard induction plans include prednisone, vincristine, and an anthracycline drug; other drug plans may include L-asparaginase or cyclophosphamide. For children with low-risk ALL, standard therapy usually consists of three drugs (prednisone, L-asparaginase, and vincristine) for the first month of treatment.
Consolidation therapy or intensification therapy to eliminate any remaining leukemia cells. There are many different approaches to consolidation, but it is typically a high-dose, multi-drug treatment that is undertaken for a few months. People with low- to average-risk ALL receive therapy with antimetabolite drugs such as methotrexate and 6-mercaptopurine (6-MP). People who are high-risk receive higher drug doses of these drugs, plus additional drugs.
CNS prophylaxis (preventive therapy) to stop cancer from spreading to the brain and nervous system in high-risk people. Standard prophylaxis may include radiation of the head and/or drugs delivered directly into the spine.
Maintenance treatments with chemotherapeutic drugs to prevent disease recurrence once remission has been achieved. Maintenance therapy usually involves lower drug doses and may continue for up to three years.
Alternatively, allogeneic bone marrow transplantation may be appropriate for high-risk or relapsed people.
Chronic lymphocytic
Decision to treat
Hematologists base CLL treatment on both the stage and symptoms of the individual person. A large group of people with CLL have low-grade disease, which does not benefit from treatment. Individuals with CLL-related complications or more advanced disease often benefit from treatment. In general, the indications for treatment are:
Falling hemoglobin or platelet count
Progression to a later stage of disease
Painful, disease-related overgrowth of lymph nodes or spleen
An increase in the rate of lymphocyte production
Treatment approach
Most CLL cases are incurable by present treatments, so treatment is directed towards suppressing the disease for many years, rather than curing it. The primary chemotherapeutic plan is combination chemotherapy with chlorambucil or cyclophosphamide, plus a corticosteroid such as prednisone or prednisolone. The use of a corticosteroid has the additional benefit of suppressing some related autoimmune diseases, such as immunohemolytic anemia or immune-mediated thrombocytopenia. In resistant cases, single-agent treatments with nucleoside drugs such as fludarabine, pentostatin, or cladribine may be successful. Younger and healthier people may choose allogeneic or autologous bone marrow transplantation in the hope of a permanent cure.
Acute myelogenous
Many different anti-cancer drugs are effective for the treatment of AML. Treatments vary somewhat according to the age of the person and according to the specific subtype of AML. Overall, the strategy is to control bone marrow and systemic (whole-body) disease, while offering specific treatment for the central nervous system (CNS), if involved.
In general, most oncologists rely on combinations of drugs for the initial, induction phase of chemotherapy. Such combination chemotherapy usually offers the benefits of early remission and a lower risk of disease resistance. Consolidation and maintenance treatments are intended to prevent disease recurrence. Consolidation treatment often entails a repetition of induction chemotherapy or the intensification of chemotherapy with additional drugs. By contrast, maintenance treatment involves drug doses that are lower than those administered during the induction phase.
Chronic myelogenous
There are many possible treatments for CML, but the standard of care for newly diagnosed people is imatinib (Gleevec) therapy. Compared to most anti-cancer drugs, it has relatively few side effects and can be taken orally at home. With this drug, more than 90% of people will be able to keep the disease in check for at least five years, so that CML becomes a chronic, manageable condition.
In a more advanced, uncontrolled state, when the person cannot tolerate imatinib, or if the person wishes to attempt a permanent cure, then an allogeneic bone marrow transplantation may be performed. This procedure involves high-dose chemotherapy and radiation followed by infusion of bone marrow from a compatible donor. Approximately 30% of people die from this procedure.
Hairy cell
Decision to treat
People with hairy cell leukemia who are symptom-free typically do not receive immediate treatment. Treatment is generally considered necessary when the person shows signs and symptoms such as low blood cell counts (e.g., infection-fighting neutrophil count below 1.0 K/μL), frequent infections, unexplained bruises, anemia, or fatigue that is significant enough to disrupt the person's everyday life.
Typical treatment approach
People who need treatment usually receive either one week of cladribine, given daily by intravenous infusion or a simple injection under the skin, or six months of pentostatin, given every four weeks by intravenous infusion. In most cases, one round of treatment will produce a prolonged remission.
Other treatments include rituximab infusion or self-injection with Interferon-alpha. In limited cases, the person may benefit from splenectomy (removal of the spleen). These treatments are not typically given as the first treatment because their success rates are lower than those of cladribine or pentostatin.
T-cell prolymphocytic
Most people with T-cell prolymphocytic leukemia, a rare and aggressive leukemia with a median survival of less than one year, require immediate treatment.
T-cell prolymphocytic leukemia is difficult to treat, and it does not respond to most available chemotherapeutic drugs. Many different treatments have been attempted, with limited success in certain people: purine analogues (pentostatin, fludarabine, cladribine), chlorambucil, and various forms of combination chemotherapy (cyclophosphamide, doxorubicin, vincristine, prednisone [CHOP]; cyclophosphamide, vincristine, prednisone [COP]; vincristine, doxorubicin, prednisone, etoposide, cyclophosphamide, bleomycin [VAPEC-B]). Alemtuzumab (Campath), a monoclonal antibody that attacks white blood cells, has been used in treatment with greater success than previous options.
Some people who successfully respond to treatment also undergo stem cell transplantation to consolidate the response.
Juvenile myelomonocytic
Treatment for juvenile myelomonocytic leukemia can include splenectomy, chemotherapy, and bone marrow transplantation.
Prognosis
The success of treatment depends on the type of leukemia and the age of the person. Outcomes have improved in the developed world. The average five-year survival rate is 65% in the United States. In children under 15, the five-year survival rate is greater (60 to 85%), depending on the type of leukemia. In children with acute leukemia who are cancer-free after five years, the cancer is unlikely to return.
Outcomes depend on whether it is acute or chronic, the specific abnormal white blood cell type, the presence and severity of anemia or thrombocytopenia, the degree of tissue abnormality, the presence of metastasis and lymph node and bone marrow infiltration, the availability of therapies and the skills of the health care team. Treatment outcomes may be better when people are treated at larger centers with greater experience.
Epidemiology
In 2010, globally, approximately 281,500 people died of leukemia. In 2000, approximately 256,000 children and adults around the world developed a form of leukemia, and 209,000 died from it. This represents about 3% of the almost seven million deaths due to cancer that year, and about 0.35% of all deaths from any cause. Of the sixteen separate body sites compared, leukemia was the 12th most common class of neoplastic disease and the 11th most common cause of cancer-related death. Leukemia occurs more commonly in the developed world.
United States
About 245,000 people in the United States are affected with some form of leukemia, including those that have achieved remission or cure. Rates from 1975 to 2011 have increased by 0.7% per year among children. Approximately 44,270 new cases of leukemia were diagnosed in the year 2008 in the US. This represents 2.9% of all cancers (excluding simple basal cell and squamous cell skin cancers) in the United States, and 30.4% of all blood cancers.
Among children with some form of cancer, about a third have a type of leukemia, most commonly acute lymphoblastic leukemia. A type of leukemia is the second most common form of cancer in infants (under the age of 12 months) and the most common form of cancer in older children. Boys are somewhat more likely to develop leukemia than girls, and white American children are almost twice as likely to develop leukemia than black American children. Only about 3% of cancer diagnoses among adults are for leukemias, but because cancer is much more common among adults, more than 90% of all leukemias are diagnosed in adults.
Race is a risk factor in the United States. Hispanics, especially those under the age of 20, are at the highest risk for leukemia, while whites, Native Americans, Asian Americans, and Alaska Natives are at higher risk than African Americans.
More men than women are diagnosed with leukemia and die from the disease. Around 30 percent more men than women have leukemia.
Australia
In Australia, leukemia is the eleventh most common cancer. In 2014–2018, Australians diagnosed with leukemia had a 64% chance (65% for males and 64% for females) of surviving for five years compared to the rest of the Australian population, an improvement of 21% on the survival rates recorded in 1989–1993.
UK
Overall, leukemia is the eleventh most common cancer in the UK (around 8,600 people were diagnosed with the disease in 2011), and it is the ninth most common cause of cancer death (around 4,800 people died in 2012).
History
Leukemia was first described by anatomist and surgeon Alfred-Armand-Louis-Marie Velpeau in 1827. A more complete description was given by pathologist Rudolf Virchow in 1845. Around ten years after Virchow's findings, pathologist Franz Ernst Christian Neumann found that the bone marrow of a deceased person with leukemia was colored "dirty green-yellow" as opposed to the normal red. This finding allowed Neumann to conclude that a bone marrow problem was responsible for the abnormal blood of people with leukemia.
By 1900, leukemia was viewed as a family of diseases as opposed to a single disease. By 1947, Boston pathologist Sidney Farber believed from past experiments that aminopterin, a folic acid mimic, could potentially cure leukemia in children. The majority of the children with ALL who were tested showed signs of improvement in their bone marrow, but none of them were actually cured. Nevertheless, this result did lead to further experiments.
In 1962, researchers Emil J. Freireich, Jr. and Emil Frei III used combination chemotherapy to attempt to cure leukemia. The tests were successful with some people surviving long after the tests.
Etymology
Observing an abnormally large number of white blood cells in a blood sample from a person, Virchow called the condition Leukämie in German, which he formed from the two Greek words leukos (λευκός), meaning 'white', and haima (αἷμα), meaning 'blood'. It was formerly also called leucemia.
Society and culture
According to Susan Sontag, leukemia was often romanticized in 20th-century fiction, portrayed as a joy-ending, clean disease whose fair, innocent and gentle victims die young or at the wrong time. As such, it was the cultural successor to tuberculosis, which held this cultural position until it was discovered to be an infectious disease. The 1970 romance novel Love Story is an example of this romanticization of leukemia.
In the United States, around $5.4 billion is spent on treatment a year.
Research directions
Significant research into the causes, prevalence, diagnosis, treatment, and prognosis of leukemia is being performed. Hundreds of clinical trials are being planned or conducted at any given time. Studies may focus on effective means of treatment, better ways of treating the disease, improving the quality of life for people, or appropriate care in remission or after cures.
In general, there are two types of leukemia research: clinical or translational research and basic research. Clinical/translational research focuses on studying the disease in a defined and generally immediately applicable way, such as testing a new drug in people. By contrast, basic science research studies the disease process at a distance, such as seeing whether a suspected carcinogen can cause leukemic changes in isolated cells in the laboratory or how the DNA changes inside leukemia cells as the disease progresses. The results from basic research studies are generally less immediately useful to people with the disease.
Treatment through gene therapy is currently being pursued. One such approach used genetically modified T cells, known as chimeric antigen receptor T cells (CAR-T cells), to attack cancer cells. In 2011, a year after treatment, two of the three people with advanced chronic lymphocytic leukemia were reported to be cancer-free and in 2013, three of five subjects who had acute lymphocytic leukemia were reported to be in remission for five months to two years. Subsequent studies with a variety of CAR-T types continue to be promising. As of 2018, two CAR-T therapies have been approved by the Food and Drug Administration. CAR-T treatment has significant side effects, and loss of the antigen targeted by the CAR-T cells is a common mechanism for relapse. The stem cells that cause different types of leukemia are also being researched.
Pregnancy
Leukemia is rarely associated with pregnancy, affecting only about 1 in 10,000 pregnant women. How it is handled depends primarily on the type of leukemia. Nearly all leukemias appearing in pregnant women are acute leukemias. Acute leukemias normally require prompt, aggressive treatment, despite significant risks of pregnancy loss and birth defects, especially if chemotherapy is given during the developmentally sensitive first trimester. Chronic myelogenous leukemia can be treated with relative safety at any time during pregnancy with interferon-alpha. Treatment for chronic lymphocytic leukemias, which are rare in pregnant women, can often be postponed until after the end of the pregnancy.
| Biology and health sciences | Cancer | null |
18542 | https://en.wikipedia.org/wiki/Length | Length | Length is a measure of distance. In the International System of Quantities, length is a quantity with dimension distance. In most systems of measurement a base unit for length is chosen, from which all other units are derived. In the International System of Units (SI), the base unit for length is the metre.
Length is commonly understood to mean the most extended dimension of a fixed object. However, this is not always the case and may depend on the position the object is in.
Various terms for the length of a fixed object are used, and these include height, which is vertical length or vertical extent, width, breadth, and depth. Height is used when there is a base from which vertical measurements can be taken. Width and breadth usually refer to a shorter dimension than length. Depth is used for the measure of a third dimension.
Length is the measure of one spatial dimension, whereas area is a measure of two dimensions (length squared) and volume is a measure of three dimensions (length cubed).
History
Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land and trading with neighbours. As trade between different places increased, the need for standard units of length increased. And later, as society has become more technologically oriented, much higher accuracy of measurement is required in an increasingly diverse set of fields, from micro-electronics to interplanetary ranging.
Under Einstein's special relativity, length can no longer be thought of as being constant in all reference frames. Thus a ruler that is one metre long in one frame of reference will not be one metre long in a reference frame that is moving relative to the first frame. This means the length of an object varies depending on the speed of the observer.
Use in mathematics
Euclidean geometry
In Euclidean geometry, length is measured along straight lines unless otherwise specified and refers to segments on them. Pythagoras's theorem relating the length of the sides of a right triangle is one of many applications in Euclidean geometry. Length may also be measured along other types of curves and is referred to as arclength.
In a triangle, the length of an altitude, a line segment drawn from a vertex perpendicular to the side not passing through the vertex (referred to as a base of the triangle), is called the height of the triangle.
The area of a rectangle is defined to be length × width of the rectangle. If a long thin rectangle is stood up on its short side then its area could also be described as its height × width.
The volume of a solid rectangular box (such as a plank of wood) is often described as length × height × depth.
The perimeter of a polygon is the sum of the lengths of its sides.
The circumference of a circular disk is the length of the boundary (a circle) of that disk.
Other geometries
In other geometries, length may be measured along possibly curved paths, called geodesics. The Riemannian geometry used in general relativity is an example of such a geometry. In spherical geometry, length is measured along the great circles on the sphere and the distance between two points on the sphere is the shorter of the two lengths on the great circle, which is determined by the plane through the two points and the center of the sphere.
Graph theory
In an unweighted graph, the length of a cycle, path, or walk is the number of edges it uses. In a weighted graph, it may instead be the sum of the weights of the edges that it uses.
Length is used to define the shortest path, girth (shortest cycle length), and longest path between two vertices in a graph.
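To make the unweighted case concrete, the following Python sketch computes the length of a shortest path by breadth-first search; the five-vertex graph and its edge list are invented purely for illustration, not taken from this article.

```python
from collections import deque

def shortest_path_length(adj, start, goal):
    """Return the length (number of edges) of a shortest path in an
    unweighted graph given as an adjacency list, or None if unreachable."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in adj[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Hypothetical 5-vertex graph: the cycle a-b-c-d-e-a plus the chord a-c.
adj = {
    "a": ["b", "c", "e"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c", "e"],
    "e": ["a", "d"],
}
print(shortest_path_length(adj, "b", "e"))  # 2, via b-a-e
```

For a weighted graph the same idea applies with edge weights summed along the path, typically via Dijkstra's algorithm rather than plain breadth-first search.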
Measure theory
In measure theory, length is most often generalized to arbitrary sets of real numbers via the Lebesgue measure. In the one-dimensional case, the Lebesgue outer measure of a set is defined in terms of the lengths of open intervals. Concretely, the length of an open interval (a, b) is first defined as
\ell((a, b)) = b - a,
so that the Lebesgue outer measure of a general set E may then be defined as
\lambda^*(E) = \inf\Big\{ \sum_{k=1}^{\infty} \ell(I_k) : E \subseteq \bigcup_{k=1}^{\infty} I_k, \text{ each } I_k \text{ an open interval} \Big\}.
Units
In the physical sciences and engineering, when one speaks of length, the word is synonymous with distance. There are several units that are used to measure length. Historically, units of length may have been derived from the lengths of human body parts, the distance travelled in a number of paces, the distance between landmarks or places on the Earth, or arbitrarily on the length of some common object.
In the International System of Units (SI), the base unit of length is the metre (symbol, m), now defined in terms of the speed of light (about 300 million metres per second). The millimetre (mm), centimetre (cm) and the kilometre (km), derived from the metre, are also commonly used units. In U.S. customary units, English or imperial system of units, commonly used units of length are the inch (in), the foot (ft), the yard (yd), and the mile (mi). A unit of length used in navigation is the nautical mile (nmi).
1 km ≈ 0.621 miles
Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth (metre or kilometre) and include the astronomical unit (au), the light-year, and the parsec (pc).
Units used to denote sub-atomic distances, as in nuclear physics, are much smaller than the millimetre. Examples include the fermi (fm).
| Mathematics | Geometry | null |
18567 | https://en.wikipedia.org/wiki/Legendre%20symbol | Legendre symbol | In number theory, the Legendre symbol is a multiplicative function with values 1, −1, 0 that is a quadratic character modulo an odd prime number p: its value at a (nonzero) quadratic residue mod p is 1 and at a non-quadratic residue (non-residue) is −1. Its value at zero is 0.
The Legendre symbol was introduced by Adrien-Marie Legendre in 1798 in the course of his attempts at proving the law of quadratic reciprocity. Generalizations of the symbol include the Jacobi symbol and Dirichlet characters of higher order. The notational convenience of the Legendre symbol inspired introduction of several other "symbols" used in algebraic number theory, such as the Hilbert symbol and the Artin symbol.
Definition
Let p be an odd prime number. An integer a is a quadratic residue modulo p if it is congruent to a perfect square modulo p and is a quadratic nonresidue modulo p otherwise. The Legendre symbol is a function of a and p defined as
(a/p) = 1 if a is a quadratic residue modulo p and a ≢ 0 (mod p), −1 if a is a quadratic nonresidue modulo p, and 0 if a ≡ 0 (mod p).
Legendre's original definition was by means of the explicit formula
(a/p) ≡ a^((p−1)/2) (mod p), with (a/p) ∈ {−1, 0, 1}.
By Euler's criterion, which had been discovered earlier and was known to Legendre, these two definitions are equivalent. Thus Legendre's contribution lay in introducing a convenient notation that recorded quadratic residuosity of a mod p. For the sake of comparison, Gauss used the notation aRp, aNp according to whether a is a residue or a non-residue modulo p. For typographical convenience, the Legendre symbol is sometimes written as (a | p) or (a/p). For fixed p, the sequence of values (0/p), (1/p), (2/p), ... is periodic with period p and is sometimes called the Legendre sequence. Each row in the following table exhibits periodicity, just as described.
Table of values
The following is a table of values of Legendre symbol with p ≤ 127, a ≤ 30, p odd prime.
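Since the table itself is not reproduced here, a short Python sketch can regenerate any row of it using Euler's criterion (mentioned above and stated formally below); the prime p = 7 and the range of a are arbitrary illustrative choices.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)   # r is 0, 1, or p - 1
    return -1 if r == p - 1 else r

# One row of the table: p = 7, a = 1..10 (values repeat with period 7).
print([legendre(a, 7) for a in range(1, 11)])
# [1, 1, -1, 1, -1, -1, 0, 1, 1, -1]
```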
Properties of the Legendre symbol
There are a number of useful properties of the Legendre symbol which, together with the law of quadratic reciprocity, can be used to compute it efficiently.
Given a generator g of the multiplicative group of nonzero residues modulo p, if x = g^r, then x is a quadratic residue if and only if r is even. This shows that half of the nonzero residues modulo p are quadratic residues.
If p ≡ 3 (mod 4), then the fact that
a^((p+1)/2) = a · a^((p−1)/2) ≡ a (mod p) for a quadratic residue a
gives us that a^((p+1)/4) is a square root of the quadratic residue a.
The Legendre symbol is periodic in its first (or top) argument: if a ≡ b (mod p), then
(a/p) = (b/p).
The Legendre symbol is a completely multiplicative function of its top argument:
(ab/p) = (a/p)(b/p).
In particular, the product of two numbers that are both quadratic residues or quadratic non-residues modulo p is a residue, whereas the product of a residue with a non-residue is a non-residue. A special case is the Legendre symbol of a square:
(x²/p) = 1 if p does not divide x, and 0 if p divides x.
When viewed as a function of a, the Legendre symbol is the unique quadratic (or order 2) Dirichlet character modulo p.
The first supplement to the law of quadratic reciprocity:
(−1/p) = (−1)^((p−1)/2), which equals 1 if p ≡ 1 (mod 4) and −1 if p ≡ 3 (mod 4).
The second supplement to the law of quadratic reciprocity:
(2/p) = (−1)^((p²−1)/8), which equals 1 if p ≡ ±1 (mod 8) and −1 if p ≡ ±3 (mod 8).
Special formulas for the Legendre symbol for small values of a:
For an odd prime p ≠ 3, (3/p) = 1 if p ≡ ±1 (mod 12) and −1 otherwise.
For an odd prime p ≠ 5, (5/p) = 1 if p ≡ ±1 (mod 5) and −1 otherwise.
The Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ... are defined by the recurrence F(1) = F(2) = 1, F(n+1) = F(n) + F(n−1). If p is an odd prime number then
F(p − (5/p)) ≡ 0 (mod p).
For example, for p = 7 we have (5/7) = −1 and F(8) = 21 = 3 · 7 ≡ 0 (mod 7).
This result comes from the theory of Lucas sequences, which are used in primality testing. See Wall–Sun–Sun prime.
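A small Python check of the Fibonacci congruence as stated above; the primes tested are an arbitrary illustrative sample.

```python
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def fib(n):
    a, b = 0, 1                      # F(0) = 0, F(1) = 1
    for _ in range(n):
        a, b = b, a + b
    return a

for p in [3, 7, 11, 13, 17, 19, 23]:     # a few odd primes other than 5
    n = p - legendre(5, p)
    assert fib(n) % p == 0, (p, n)
print("F(p - (5/p)) is divisible by p for every prime tested")
```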
Legendre symbol and quadratic reciprocity
Let p and q be distinct odd primes. Using the Legendre symbol, the quadratic reciprocity law can be stated concisely:
(p/q)(q/p) = (−1)^(((p−1)/2)·((q−1)/2)).
Many proofs of quadratic reciprocity are based on Euler's criterion
(a/p) ≡ a^((p−1)/2) (mod p).
In addition, several alternative expressions for the Legendre symbol were devised in order to produce various proofs of the quadratic reciprocity law.
Gauss introduced the quadratic Gauss sum and used the formula
g(a; p) = (a/p) · g(1; p) for a not divisible by p, where g(a; p) = \sum_{n=0}^{p-1} e^{2\pi i a n^2 / p},
in his fourth and sixth proofs of quadratic reciprocity.
Kronecker's proof first establishes that
Reversing the roles of p and q, he obtains the relation between (p/q) and (q/p).
One of Eisenstein's proofs begins by showing that
Using certain elliptic functions instead of the sine function, Eisenstein was able to prove cubic and quartic reciprocity as well.
Related functions
The Jacobi symbol (a/n) is a generalization of the Legendre symbol that allows for a composite second (bottom) argument n, although n must still be odd and positive. This generalization provides an efficient way to compute all Legendre symbols without performing factorization along the way.
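The claim that factorization can be avoided is usually realized with the binary reciprocity algorithm for the Jacobi symbol. The following Python sketch is one standard formulation; the operands passed at the end are illustrative values chosen here, not an example taken from this article.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, computed via quadratic
    reciprocity without factoring; equals the Legendre symbol when n is prime."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2 (second supplement)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

print(jacobi(12345, 331))   # agrees with the Legendre symbol, since 331 is prime
```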
A further extension is the Kronecker symbol, in which the bottom argument may be any integer.
The power residue symbol generalizes the Legendre symbol to higher powers n. The Legendre symbol represents the power residue symbol for n = 2.
Computational example
The above properties, including the law of quadratic reciprocity, can be used to evaluate any Legendre symbol: the top argument is reduced modulo the bottom one, square factors are removed, and the reciprocity law and its supplements are applied repeatedly. Deferring factorization by working with the Jacobi symbol gives a more efficient computation.
The article Jacobi symbol has more examples of Legendre symbol manipulation.
Since no efficient factorization algorithm is known, but efficient modular exponentiation algorithms are, in general it is more efficient to use Legendre's original definition, e.g.
using repeated squaring modulo 331, reducing every value using the modulus after every operation to avoid computation with large integers.
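In Python, the built-in three-argument pow performs exactly this repeated squaring with reduction after every multiplication; the numerator below is an arbitrary stand-in, since the article's own example values are not reproduced here.

```python
p = 331
a = 12345 % p                  # illustrative numerator; any a coprime to 331 works
r = pow(a, (p - 1) // 2, p)    # repeated squaring modulo 331, reducing at each step
print(1 if r == 1 else -1)     # Euler's criterion: r is 1 for residues, p - 1 otherwise
```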
| Mathematics | Modular arithmetic | null |
18589 | https://en.wikipedia.org/wiki/Ligand | Ligand | In coordination chemistry, a ligand is an ion or molecule with a functional group that binds to a central metal atom to form a coordination complex. The bonding with the metal generally involves formal donation of one or more of the ligand's electron pairs, often through Lewis bases. The nature of metal–ligand bonding can range from covalent to ionic. Furthermore, the metal–ligand bond order can range from one to three. Ligands are viewed as Lewis bases, although rare cases are known to involve Lewis acidic "ligands".
Metals and metalloids are bound to ligands in almost all circumstances, although gaseous "naked" metal ions can be generated in a high vacuum. Ligands in a complex dictate the reactivity of the central atom, including ligand substitution rates, the reactivity of the ligands themselves, and redox. Ligand selection requires critical consideration in many practical areas, including bioinorganic and medicinal chemistry, homogeneous catalysis, and environmental chemistry.
Ligands are classified in many ways, including: charge, size (bulk), the identity of the coordinating atom(s), and the number of electrons donated to the metal (denticity or hapticity). The size of a ligand is indicated by its cone angle.
History
The composition of coordination complexes such as Prussian blue and copper vitriol has been known since the early 1800s. The key breakthrough occurred when Alfred Werner reconciled formulas and isomers. He showed, among other things, that the formulas of many cobalt(III) and chromium(III) compounds can be understood if the metal has six ligands in an octahedral geometry. The first to use the term "ligand" were Alfred Werner and Carl Somiesky, in relation to silicon chemistry. The theory allows one to understand the difference between coordinated and ionic chloride in the cobalt ammine chlorides and to explain many of the previously inexplicable isomers. He resolved the first coordination complex called hexol into optical isomers, overthrowing the theory that chirality was necessarily associated with carbon compounds.
Strong field and weak field ligands
In general, ligands are viewed as electron donors and the metals as electron acceptors, i.e., respectively, Lewis bases and Lewis acids. This description has been semi-quantified in many ways, e.g. ECW model. Bonding is often described using the formalisms of molecular orbital theory.
Ligands and metal ions can be ordered in many ways; one ranking system focuses on ligand 'hardness' (see also hard/soft acid/base theory). Metal ions preferentially bind certain ligands. In general, 'hard' metal ions prefer weak field ligands, whereas 'soft' metal ions prefer strong field ligands. According to molecular orbital theory, the HOMO (Highest Occupied Molecular Orbital) of the ligand should preferentially have an energy that overlaps with the LUMO (Lowest Unoccupied Molecular Orbital) of the metal. Metal ions bound to strong-field ligands follow the Aufbau principle, whereas complexes bound to weak-field ligands follow Hund's rule.
Binding of the metal with the ligands results in a set of molecular orbitals, where the metal can be identified with a new HOMO and LUMO (the orbitals defining the properties and reactivity of the resulting complex) and a certain ordering of the 5 d-orbitals (which may be filled, or partially filled with electrons). In an octahedral environment, the 5 otherwise degenerate d-orbitals split in sets of 3 and 2 orbitals (for a more in-depth explanation, see crystal field theory):
3 orbitals of low energy: dxy, dxz and dyz and
2 orbitals of high energy: dz2 and dx2−y2.
The energy difference between these 2 sets of d-orbitals is called the splitting parameter, Δo. The magnitude of Δo is determined by the field-strength of the ligand: strong field ligands, by definition, increase Δo more than weak field ligands. Ligands can now be sorted according to the magnitude of Δo (see the table below). This ordering of ligands is almost invariable for all metal ions and is called spectrochemical series.
For complexes with a tetrahedral surrounding, the d-orbitals again split into two sets, but this time in reverse order:
2 orbitals of low energy: dz2 and dx2−y2 and
3 orbitals of high energy: dxy, dxz and dyz.
The energy difference between these 2 sets of d-orbitals is now called Δt. The magnitude of Δt is smaller than for Δo, because in a tetrahedral complex only 4 ligands influence the d-orbitals, whereas in an octahedral complex the d-orbitals are influenced by 6 ligands. When the coordination number is neither octahedral nor tetrahedral, the splitting becomes correspondingly more complex. For the purposes of ranking ligands, however, the properties of the octahedral complexes and the resulting Δo has been of primary interest.
The arrangement of the d-orbitals on the central atom (as determined by the 'strength' of the ligand) has a strong effect on virtually all the properties of the resulting complexes. E.g., the energy differences in the d-orbitals have a strong effect on the optical absorption spectra of metal complexes. It turns out that valence electrons occupying orbitals with significant 3d-orbital character absorb in the 400–800 nm region of the spectrum (UV–visible range). The absorption of light (what we perceive as the color) by these electrons (that is, excitation of electrons from one orbital to another orbital under influence of light) can be correlated to the ground state of the metal complex, which reflects the bonding properties of the ligands. The relative change in energy of the d-orbitals as a function of the field-strength of the ligands is described in Tanabe–Sugano diagrams.
In cases where the ligand has a low-energy LUMO, such orbitals also participate in the bonding. The metal–ligand bond can be further stabilised by a formal donation of electron density back to the ligand in a process known as back-bonding. In this case a filled, central-atom-based orbital donates density into the LUMO of the (coordinated) ligand. Carbon monoxide is the preeminent example of a ligand that engages metals via back-donation. Complementarily, ligands with low-energy filled orbitals of pi-symmetry can serve as pi-donors.
Classification of ligands as L and X
Ligands are classified according to the number of electrons that they "donate" to the metal. L ligands are Lewis bases. L ligands are represented by amines, phosphines, CO, N2, and alkenes. Examples of L ligands extend to include dihydrogen and hydrocarbons that interact by agostic interactions. X ligands are halides and pseudohalides. X ligands typically are derived from anionic precursors such as chloride, but include ligands for which salts of the anion do not really exist, such as hydride and alkyl.
Especially in the area of organometallic chemistry, ligands are classified according to the "CBC Method" for Covalent Bond Classification, as popularized by M. L. H. Green and "is based on the notion that there are three basic types [of ligands]... represented by the symbols L, X, and Z, which correspond respectively to 2-electron, 1-electron and 0-electron neutral ligands."
Polydentate and polyhapto ligand motifs and nomenclature
Denticity
Many ligands are capable of binding metal ions through multiple sites, usually because the ligands have lone pairs on more than one atom. Such ligands are polydentate. Ligands that bind via more than one atom are often termed chelating. A ligand that binds through two sites is classified as bidentate, and three sites as tridentate. The "bite angle" refers to the angle between the two bonds of a bidentate chelate. Chelating ligands are commonly formed by linking donor groups via organic linkers. A classic bidentate ligand is ethylenediamine, which is derived by the linking of two ammonia groups with an ethylene (−CH2CH2−) linker. A classic example of a polydentate ligand is the hexadentate chelating agent EDTA, which is able to bond through six sites, completely surrounding some metals. The number of times a polydentate ligand binds to a metal centre is symbolized by "κn", where n indicates the number of sites by which a ligand attaches to a metal. EDTA4−, when it is hexadentate, binds as a κ6-ligand; the amines and the carboxylate oxygen atoms are not contiguous. In practice, the n value of a ligand is not indicated explicitly but rather assumed. The binding affinity of a chelating system depends on the chelating angle or bite angle.
Denticity (represented by κ) is the nomenclature that describes the number of noncontiguous atoms of a ligand bonded to a metal. This descriptor is often omitted because the denticity of a ligand is often obvious. The complex tris(ethylenediamine)cobalt(III) could be described as [Co(κ2-en)3]3+.
Complexes of polydentate ligands are called chelate complexes. They tend to be more stable than complexes derived from monodentate ligands. This enhanced stability, called the chelate effect, is usually attributed to effects of entropy, which favors the displacement of many ligands by one polydentate ligand.
Related to the chelate effect is the macrocyclic effect. A macrocyclic ligand is any large ligand that at least partially surrounds the central atom and bonds to it, leaving the central atom at the centre of a large ring. The more rigid the macrocycle and the higher its denticity, the more inert the macrocyclic complex will be. Heme is an example, in which the iron atom is at the centre of a porphyrin macrocycle, bound to four nitrogen atoms of the tetrapyrrole macrocycle. The very stable dimethylglyoximate complex of nickel is a synthetic macrocycle derived from dimethylglyoxime.
Hapticity
Hapticity (represented by the Greek letter η) refers to the number of contiguous atoms that comprise a donor site and attach to a metal center. The η-notation applies when multiple atoms are coordinated. For example, an η2-ligand coordinates through two contiguous atoms. Butadiene forms both η2 and η4 complexes depending on the number of carbon atoms that are bonded to the metal.
Ligand motifs
Trans-spanning ligands
Trans-spanning ligands are bidentate ligands that can span coordination positions on opposite sides of a coordination complex.
Ambidentate ligand
In contrast to polydentate ligands, ambidentate ligands can attach to the central atom in either one of two (or more) places, but not both. An example is thiocyanate, SCN−, which can attach at either the sulfur atom or the nitrogen atom. Such compounds give rise to linkage isomerism.
Polydentate and ambidentate are therefore two different types of polyfunctional ligands (ligands with more than one functional group) which can bond to a metal center through different ligand atoms to form various isomers. Polydentate ligands can bond through one atom AND another (or several others) at the same time, whereas ambidentate ligands bond through one atom OR another. Proteins are complex examples of polyfunctional ligands, usually polydentate.
Bridging ligand
A bridging ligand links two or more metal centers. Virtually all inorganic solids with simple formulas are coordination polymers, consisting of metal ion centres linked by bridging ligands. This group of materials includes all anhydrous binary metal ion halides and pseudohalides. Bridging ligands also persist in solution. Polyatomic ligands such as carbonate are ambidentate and thus are found to often bind to two or three metals simultaneously. Atoms that bridge metals are sometimes indicated with the prefix "μ". Most inorganic solids are polymers by virtue of the presence of multiple bridging ligands. Bridging ligands, capable of coordinating multiple metal ions, have been attracting considerable interest because of their potential use as building blocks for the fabrication of functional multimetallic assemblies.
Binucleating ligand
Binucleating ligands bind two metal ions. Usually binucleating ligands feature bridging ligands, such as phenoxide, pyrazolate, or pyrazine, as well as other donor groups that bind to only one of the two metal ions.
Metal–ligand multiple bond
Some ligands can bond to a metal center through the same atom but with a different number of lone pairs. The bond order of the metal ligand bond can be in part distinguished through the metal ligand bond angle (M−X−R). This bond angle is often referred to as being linear or bent with further discussion concerning the degree to which the angle is bent. For example, an imido ligand in the ionic form has three lone pairs. One lone pair is used as a sigma X donor, the other two lone pairs are available as L-type pi donors. If both lone pairs are used in pi bonds then the M−N−R geometry is linear. However, if one or both these lone pairs is nonbonding then the M−N−R bond is bent and the extent of the bend speaks to how much pi bonding there may be. η1-Nitric oxide can coordinate to a metal center in linear or bent manner.
Spectator ligand
A spectator ligand is a tightly coordinating polydentate ligand that does not participate in chemical reactions but removes active sites on a metal. Spectator ligands influence the reactivity of the metal center to which they are bound.
Bulky ligands
Bulky ligands are used to control the steric properties of a metal center. They are used for many reasons, both practical and academic. On the practical side, they influence the selectivity of metal catalysts, e.g., in hydroformylation. Of academic interest, bulky ligands stabilize unusual coordination sites, e.g., reactive coligands or low coordination numbers. Often bulky ligands are employed to simulate the steric protection afforded by proteins to metal-containing active sites. Of course excessive steric bulk can prevent the coordination of certain ligands.
Chiral ligands
Chiral ligands are useful for inducing asymmetry within the coordination sphere. Often the ligand is employed as an optically pure group. In some cases, such as secondary amines, the asymmetry arises upon coordination. Chiral ligands are used in homogeneous catalysis, such as asymmetric hydrogenation.
Hemilabile ligands
Hemilabile ligands contain at least two electronically different coordinating groups and form complexes where one of these is easily displaced from the metal center while the other remains firmly bound, a behaviour which has been found to increase the reactivity of catalysts when compared to the use of more traditional ligands.
Non-innocent ligand
Non-innocent ligands bond with metals in such a manner that the distribution of electron density between the metal center and ligand is unclear. Describing the bonding of non-innocent ligands often involves writing multiple resonance forms that have partial contributions to the overall state.
Common ligands
Virtually every molecule and every ion can serve as a ligand for (or "coordinate to") metals. Monodentate ligands include virtually all anions and all simple Lewis bases. Thus, the halides and pseudohalides are important anionic ligands whereas ammonia, carbon monoxide, and water are particularly common charge-neutral ligands. Simple organic species are also very common, be they anionic (e.g., RO−) or neutral (R2O, R2S, R3−xNHx, and R3P). The steric properties of some ligands are evaluated in terms of their cone angles.
Beyond the classical Lewis bases and anions, all unsaturated molecules are also ligands, utilizing their pi electrons in forming the coordinate bond. Also, metals can bind to the σ bonds in for example silanes, hydrocarbons, and dihydrogen (see also: Agostic interaction).
In complexes of non-innocent ligands, the ligand is bonded to metals via conventional bonds, but the ligand is also redox-active.
Examples of common ligands (by field strength)
In the following table the ligands are sorted by field strength (weak field ligands first):
The entries in the table are sorted by field strength, binding through the stated atom (i.e. as a terminal ligand). The 'strength' of the ligand changes when the ligand binds in an alternative binding mode (e.g., when it bridges between metals) or when the conformation of the ligand gets distorted (e.g., a linear ligand that is forced through steric interactions to bind in a nonlinear fashion).
Other generally encountered ligands (alphabetical)
In this table other common ligands are listed in alphabetical order.
Ligand exchange
A ligand exchange (also called ligand substitution) is a chemical reaction in which a ligand in a compound is replaced by another. Two general mechanisms are recognized: associative substitution or by dissociative substitution.
Associative substitution closely resembles the SN2 mechanism in organic chemistry. A typically smaller ligand can attach to an unsaturated complex followed by loss of another ligand. Typically, the rate of the substitution is first order in entering ligand L and the unsaturated complex.
Dissociative substitution is common for octahedral complexes. This pathway closely resembles the SN1 mechanism in organic chemistry. The identity of the entering ligand does not affect the rate.
Ligand–protein binding database
BioLiP is a comprehensive ligand–protein interaction database, with the 3D structure of the ligand–protein interactions taken from the Protein Data Bank. MANORAA is a webserver for analyzing conserved and differential molecular interactions of a ligand in complex with protein structure homologs from the Protein Data Bank. It provides links to the protein targets, such as their location in biochemical pathways, SNPs, and baseline protein/RNA expression in the target organ.
| Physical sciences | Bond structure | Chemistry |
18610 | https://en.wikipedia.org/wiki/Laplace%20transform | Laplace transform | In mathematics, the Laplace transform, named after Pierre-Simon Laplace (), is an integral transform that converts a function of a real variable (usually , in the time domain) to a function of a complex variable (in the complex-valued frequency domain, also known as s-domain, or s-plane).
The transform is useful for converting differentiation and integration in the time domain into much easier multiplication and division in the Laplace domain (analogous to how logarithms are useful for simplifying multiplication and division into addition and subtraction). This gives the transform many applications in science and engineering, mostly as a tool for solving linear differential equations and dynamical systems by simplifying ordinary differential equations and integral equations into algebraic polynomial equations, and by simplifying convolution into multiplication. Once solved, the inverse Laplace transform reverts to the original domain.
The Laplace transform is defined (for suitable functions ) by the integral
where s is a complex number. It is related to many other transforms, most notably the Fourier transform and the Mellin transform. Formally, the Laplace transform is converted into a Fourier transform by the substitution where is real. However, unlike the Fourier transform, which gives the decomposition of a function into its components in each frequency, the Laplace transform of a function with suitable decay is an analytic function, and so has a convergent power series, the coefficients of which give the decomposition of a function into its moments. Also unlike the Fourier transform, when regarded in this way as an analytic function, the techniques of complex analysis, and especially contour integrals, can be used for calculations.
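As a sanity check of the defining integral, the following Python sketch approximates the transform of f(t) = e^{-2t} numerically and compares it with the known closed form 1/(s + 2); the decay rate, the sample values of s, and the truncation of the integral are all illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def laplace_numeric(f, s, upper=50.0):
    """Approximate F(s) = integral_0^inf f(t) exp(-s t) dt by truncated quadrature."""
    val, _ = quad(lambda t: f(t) * np.exp(-s * t), 0.0, upper)
    return val

f = lambda t: np.exp(-2.0 * t)
for s in (0.5, 1.0, 3.0):
    print(s, laplace_numeric(f, s), 1.0 / (s + 2.0))   # the two columns should agree closely
```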
History
The Laplace transform is named after mathematician and astronomer Pierre-Simon, Marquis de Laplace, who used a similar transform in his work on probability theory. Laplace wrote extensively about the use of generating functions (1814), and the integral form of the Laplace transform evolved naturally as a result.
Laplace's use of generating functions was similar to what is now known as the z-transform, and he gave little attention to the continuous variable case which was discussed by Niels Henrik Abel.
From 1744, Leonhard Euler investigated integrals of the form
as solutions of differential equations, introducing in particular the gamma function. Joseph-Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form
which resembles a Laplace transform.
These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations. However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form
akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.
Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space. In 1821, Cauchy developed an operational calculus for the Laplace transform that could be used to study linear differential equations in much the same way the transform is now used in basic engineering. This method was popularized, and perhaps rediscovered, by Oliver Heaviside around the turn of the century.
Bernhard Riemann used the Laplace transform in his 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he also developed the inversion theorem. Riemann used the Laplace transform to develop the functional equation of the Riemann zeta function, and this method is still used to relate the modular transformation law of the Jacobi theta function, which is simple to prove via Poisson summation, to the functional equation.
Hjalmar Mellin was among the first to study the Laplace transform, rigorously in the Karl Weierstrass school of analysis, and apply it to the study of differential equations and special functions, at the turn of the 20th century. At around the same time, Heaviside was busy with his operational calculus. Thomas Joannes Stieltjes considered a generalization of the Laplace transform connected to his work on moments. Other contributors in this time period included Mathias Lerch, Oliver Heaviside, and Thomas Bromwich.
In 1934, Raymond Paley and Norbert Wiener published the important work Fourier transforms in the complex domain, about what is now called the Laplace transform (see below). Also during the 30s, the Laplace transform was instrumental in G H Hardy and John Edensor Littlewood's study of tauberian theorems, and this application was later expounded on by Widder (1941), who developed other aspects of the theory such as a new method for inversion. Edward Charles Titchmarsh wrote the influential Introduction to the theory of the Fourier integral (1937).
The current widespread use of the transform (mainly in engineering) came about during and soon after World War II, replacing the earlier Heaviside operational calculus. The advantages of the Laplace transform had been emphasized by Gustav Doetsch, to whom the name Laplace transform is apparently due.
Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), which is a unilateral transform defined by
F(s) = \int_0^\infty f(t) e^{-st} \, dt,
where s is a complex frequency-domain parameter
s = \sigma + i\omega,
with real numbers σ and ω.
An alternate notation for the Laplace transform is \mathcal{L}\{f\} instead of F, often written as \mathcal{L}\{f(t)\} in an abuse of notation.
The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type (|f(t)| ≤ Ae^{B|t|} for some constants A and B), the integral can be understood to be a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below.
One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral
\mathcal{L}\{\mu\}(s) = \int_{[0,\infty)} e^{-st} \, d\mu(t).
An important special case is where μ is a probability measure, for example, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density function f. In that case, to avoid potential confusion, one often writes
\mathcal{L}\{f\}(s) = \int_{0^-}^{\infty} f(t) e^{-st} \, dt,
where the lower limit of 0^- is shorthand notation for
\lim_{\varepsilon \to 0^+} \int_{-\varepsilon}^{\infty}.
This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
Bilateral Laplace transform
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function.
The bilateral Laplace transform is defined as follows:
\mathcal{B}\{f\}(s) = \int_{-\infty}^{\infty} f(t) e^{-st} \, dt.
An alternate notation for the bilateral Laplace transform is \mathcal{B}\{f\}, instead of F.
Inverse Laplace transform
Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range.
Typical function spaces in which this is true include the spaces of bounded continuous functions, the space L^1(0, \infty), or more generally tempered distributions on (0, \infty). The Laplace transform is also defined and injective for suitable spaces of tempered distributions.
In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):
f(t) = \frac{1}{2\pi i} \lim_{T \to \infty} \int_{\gamma - iT}^{\gamma + iT} e^{st} F(s) \, ds,
where γ is a real number so that the contour path of integration is in the region of convergence of F(s). In most applications, the contour can be closed, allowing the use of the residue theorem. An alternative formula for the inverse Laplace transform is given by Post's inversion formula. The limit here is interpreted in the weak-* topology.
In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table and construct the inverse by inspection.
Probability theory
In pure and applied probability, the Laplace transform is defined as an expected value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation
\mathcal{L}\{f\}(s) = \mathrm{E}\left[e^{-sX}\right],
where E denotes the expectation of the random variable inside the brackets.
By convention, this is referred to as the Laplace transform of the random variable X itself. Here, replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
Of particular use is the ability to recover the cumulative distribution function of a continuous random variable X by means of the Laplace transform as follows:
F_X(x) = \mathcal{L}^{-1}\left\{ \frac{1}{s} \mathrm{E}\left[e^{-sX}\right] \right\}(x).
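As a brief numerical illustration: for an exponential random variable with rate λ, the expectation E[e^{-sX}] equals λ/(λ + s), which the following Monte Carlo sketch checks; the rate, the value of s, and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s = 1.5, 0.7                       # illustrative rate and transform variable
x = rng.exponential(scale=1.0 / lam, size=200_000)

mc = np.mean(np.exp(-s * x))            # Monte Carlo estimate of E[exp(-s X)]
closed_form = lam / (lam + s)           # known Laplace transform of Exp(lam)
print(mc, closed_form)                  # the two should agree to roughly 3 decimals
```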
Algebraic construction
The Laplace transform can be alternatively defined in a purely algebraic manner by applying a field of fractions construction to the convolution ring of functions on the positive half-line. The resulting space of abstract operators is exactly equivalent to Laplace space, but in this construction the forward and reverse transforms never need to be explicitly defined (avoiding the related difficulties with proving convergence).
Region of convergence
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit
\lim_{R \to \infty} \int_0^R f(t) e^{-st} \, dt
exists.
The Laplace transform converges absolutely if the integral
\int_0^\infty \left| f(t) e^{-st} \right| \, dt
exists as a proper Lebesgue integral. The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not in the latter sense.
The set of values for which F(s) converges absolutely is either of the form Re(s) > a or Re(s) ≥ a, where a is an extended real constant with −∞ ≤ a ≤ ∞ (a consequence of the dominated convergence theorem). The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, possibly including the lines Re(s) = a or Re(s) = b. The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem.
Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s₀, then it automatically converges for all s with Re(s) > Re(s₀). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a.
In the region of convergence Re(s) > Re(s₀), the Laplace transform of f can be expressed by integrating by parts as the integral
F(s) = (s - s_0) \int_0^\infty e^{-(s - s_0)t} \beta(t) \, dt, \qquad \beta(u) = \int_0^u e^{-s_0 \tau} f(\tau) \, d\tau.
That is, F(s) can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are several Paley–Wiener theorems concerning the relationship between the decay properties of , and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable, provided that the poles of the Laplace transform of the impulse response function have negative real part.
This ROC is used to determine the causality and stability of a system.
Properties and theorems
The Laplace transform's key property is that it converts differentiation and integration in the time domain into multiplication and division by s in the Laplace domain. Thus, the Laplace variable s is also known as an operator variable in the Laplace domain: either the derivative operator (s) or the integration operator (1/s).
Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s),
the following table is a list of properties of unilateral Laplace transform:
Initial value theorem
f(0^+) = \lim_{s \to \infty} s F(s).
Final value theorem
f(\infty) = \lim_{s \to 0} s F(s), if all poles of sF(s) are in the left half-plane.
The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions (or other difficult algebra). If F(s) has a pole in the right-hand plane or poles on the imaginary axis (e.g., if f(t) = e^t or f(t) = sin(t)), then the behaviour of this formula is undefined.
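The theorem can be checked symbolically. The sketch below uses SymPy (assuming its laplace_transform helper) for f(t) = 1 − e^{-t}, for which the only pole of sF(s) lies in the left half-plane, so the theorem applies.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = 1 - sp.exp(-t)

F = sp.laplace_transform(f, t, s, noconds=True)   # F(s) = 1/s - 1/(s + 1)
final_value = sp.limit(s * F, s, 0)               # final value theorem
long_time = sp.limit(f, t, sp.oo)                 # direct long-time limit
print(final_value, long_time)                     # both evaluate to 1
```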
Relation to power series
The Laplace transform can be viewed as a continuous analogue of a power series. If f(n) is a discrete function of a positive integer n, then the power series associated to f(n) is the series
\sum_{n=0}^{\infty} f(n) x^n,
where x is a real variable (see Z-transform). Replacing summation over n with integration over t, a continuous version of the power series becomes
\int_0^\infty f(t) x^t \, dt,
where the discrete function f(n) is replaced by the continuous one f(t).
Changing the base of the power from x to e gives
\int_0^\infty f(t) \left(e^{\ln x}\right)^t \, dt.
For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0. Making the substitution s = −ln x gives just the Laplace transform:
\int_0^\infty f(t) e^{-st} \, dt.
In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by e^{-s}.
Relation to moments
The quantities
\mu_n = \int_0^\infty t^n f(t) \, dt
are the moments of the function f. If the first n moments of f converge absolutely, then by repeated differentiation under the integral,
(-1)^n (\mathcal{L}f)^{(n)}(0) = \mu_n.
This is of special significance in probability theory, where the moments of a random variable X are given by the expectation values E[X^n]. Then, the relation holds
\mu_n = (-1)^n \left. \frac{d^n}{ds^n} \mathrm{E}\left[e^{-sX}\right] \right|_{s=0}.
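For f(t) = e^{-t} the moments are n!, so the relation can be verified directly; the SymPy sketch below is illustrative and assumes the same laplace_transform helper as before.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)              # 1/(s + 1)

for n in range(4):
    moment = sp.integrate(t**n * f, (t, 0, sp.oo))           # integral of t^n f(t) = n!
    from_transform = (-1)**n * sp.diff(F, s, n).subs(s, 0)   # (-1)^n F^(n)(0)
    print(n, moment, from_transform)                         # the two columns match
```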
Transform of a function's derivative
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:
\mathcal{L}\{f(t)\} = \int_{0^-}^{\infty} e^{-st} f(t) \, dt = \left[ \frac{f(t) e^{-st}}{-s} \right]_{0^-}^{\infty} - \int_{0^-}^{\infty} \frac{e^{-st}}{-s} f'(t) \, dt \quad \text{(by parts)},
yielding
\mathcal{L}\{f'(t)\} = s \, \mathcal{L}\{f(t)\} - f(0^-),
and in the bilateral case,
\mathcal{L}\{f'(t)\} = s \int_{-\infty}^{\infty} e^{-st} f(t) \, dt = s \, \mathcal{L}\{f(t)\}.
The general result
\mathcal{L}\{f^{(n)}(t)\} = s^n \, \mathcal{L}\{f(t)\} - s^{n-1} f(0^-) - \cdots - f^{(n-1)}(0^-),
where f^{(n)} denotes the nth derivative of f, can then be established with an inductive argument.
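The n = 2 case of the general result can be confirmed symbolically for f(t) = sin t, for which f(0) = 0 and f′(0) = 1; again this is a sketch relying on SymPy.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = sp.sin(t)

F = sp.laplace_transform(f, t, s, noconds=True)                   # 1/(s**2 + 1)
lhs = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)  # transform of f''
rhs = s**2 * F - s * f.subs(t, 0) - sp.diff(f, t).subs(t, 0)      # s^2 F - s f(0) - f'(0)
print(sp.simplify(lhs - rhs))                                     # 0
```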
Evaluating integrals over the positive real axis
A useful property of the Laplace transform is the following:
∫_0^∞ f(x) g(x) dx = ∫_0^∞ (L f)(s) · (L^{-1} g)(s) ds
under suitable assumptions on the behaviour of f in a right neighbourhood of 0 and on the decay rate of g in a left neighbourhood of ∞. The above formula is a variation of integration by parts, with the operators d/dx and ∫ dx being replaced by L and L^{-1}. Let us prove the equivalent formulation:
∫_0^∞ (L f)(x) g(x) dx = ∫_0^∞ f(s) (L g)(s) ds.
By plugging in (L f)(x) = ∫_0^∞ f(s) e^{-sx} ds, the left-hand side turns into:
∫_0^∞ ∫_0^∞ f(s) g(x) e^{-sx} ds dx,
but assuming Fubini's theorem holds, by reversing the order of integration we get the wanted right-hand side.
This method can be used to compute integrals that would otherwise be difficult to compute using elementary methods of real calculus. For example, taking f = sin and g(x) = 1/x (whose inverse Laplace transform is the constant function 1),
∫_0^∞ (sin x)/x dx = ∫_0^∞ (L sin)(u) du = ∫_0^∞ du/(u² + 1) = π/2.
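The worked example can be confirmed directly (a SymPy sketch; it simply evaluates the two sides that the property equates):

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)

lhs = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))     # the oscillatory integral itself
rhs = sp.integrate(1 / (u**2 + 1), (u, 0, sp.oo))    # integral of L{sin}(u) over u
print(lhs, rhs)   # pi/2  pi/2
```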
Relationship to other transforms
Laplace–Stieltjes transform
The (unilateral) Laplace–Stieltjes transform of a function g is defined by the Lebesgue–Stieltjes integral
{L*g}(s) = ∫_0^∞ e^{-st} dg(t).
The function g is assumed to be of bounded variation. If g is the antiderivative of f:
g(x) = ∫_0^x f(t) dt
then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.
Fourier transform
Let f be a complex-valued Lebesgue integrable function supported on [0, ∞), and let F(s) = L{f}(s) be its Laplace transform. Then, within the region of convergence, we have
F(σ + iτ) = ∫_0^∞ f(t) e^{-σt} e^{-iτt} dt,
which is the Fourier transform of the function f(t) e^{-σt}.
Indeed, the Fourier transform is a special case (under certain conditions) of the bilateral Laplace transform. The main difference is that the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is a complex function of a complex variable. The Laplace transform is usually restricted to transformation of functions of with . A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable . Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory.
Formally, the Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω when the condition explained below is fulfilled: the Fourier transform of f at ω equals F(s) evaluated at s = iω, that is,
∫_{−∞}^∞ e^{-iωt} f(t) dt.
This convention of the Fourier transform (with no normalising factor in the forward transform) requires a factor of 1/(2π) on the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.
The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, Re(s) = 0.
For example, the function f(t) = cos(ω_0 t) has a Laplace transform F(s) = s/(s² + ω_0²) whose ROC is Re(s) > 0. As s = ±iω_0 are poles of F(s), substituting s = iω in F(s) does not yield the Fourier transform of f(t), which contains terms proportional to the Dirac delta functions δ(ω ± ω_0).
However, a relation of the form
lim_{σ→0^+} F(σ + iω) = \hat{f}(ω)
holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.
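When the ROC does contain the imaginary axis, the relation can be checked numerically. The sketch below (assuming SciPy; the choices f(t) = e^{-t} for t ≥ 0 and ω = 2 are illustrative) compares F(s) = 1/(s + 1) evaluated at s = iω with the Fourier integral computed by quadrature:

```python
import numpy as np
from scipy.integrate import quad

omega = 2.0
F_laplace = 1.0 / (1j * omega + 1.0)   # Laplace transform 1/(s+1) at s = i*omega

# Fourier integral of f(t) = exp(-t) for t >= 0, split into real and imaginary parts.
re, _ = quad(lambda t: np.exp(-t) * np.cos(omega * t), 0, np.inf)
im, _ = quad(lambda t: -np.exp(-t) * np.sin(omega * t), 0, np.inf)
F_fourier = re + 1j * im

print(F_laplace)   # (0.2-0.4j)
print(F_fourier)   # agrees to numerical precision
```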
Mellin transform
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables.
If in the Mellin transform
G(s) = M{g(θ)} = ∫_0^∞ θ^s g(θ) dθ/θ
we set θ = e^{-t}, we get a two-sided Laplace transform.
Z-transform
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of
z = e^{sT},
where T = 1/f_s is the sampling interval (in units of time, e.g., seconds) and f_s is the sampling rate (in samples per second or hertz).
Let
Δ_T(t) = Σ_{n=0}^∞ δ(t − nT)
be a sampling impulse train (also called a Dirac comb) and
x_q(t) = x(t) Δ_T(t) = Σ_{n=0}^∞ x(nT) δ(t − nT)
be the sampled representation of the continuous-time x(t). The Laplace transform of the sampled signal x_q(t) is
X_q(s) = Σ_{n=0}^∞ x(nT) e^{-nsT}.
This is the precise definition of the unilateral Z-transform of the discrete function x[n] = x(nT),
X(z) = Σ_{n=0}^∞ x[n] z^{-n},
with the substitution of z = e^{sT}.
Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal,
X_q(s) = X(z) |_{z = e^{sT}}.
The similarity between the Z- and Laplace transforms is expanded upon in the theory of time scale calculus.
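The correspondence can be illustrated numerically (a sketch assuming NumPy; the sampled exponential, the sampling interval T, and the truncation length are arbitrary choices, and the closed-form Z-transform used is the standard geometric-series result for a decaying exponential):

```python
import numpy as np

a, T, s = 0.5, 0.1, 1.0 + 0.3j
n = np.arange(0, 2000)

# Laplace transform of the ideally sampled exponential: sum of x(nT) e^{-s n T}.
laplace_sampled = np.sum(np.exp(-a * n * T) * np.exp(-s * n * T))

# One-sided Z-transform of x[n] = exp(-a n T), evaluated at z = exp(sT).
z = np.exp(s * T)
z_transform = 1.0 / (1.0 - np.exp(-a * T) / z)

print(laplace_sampled)
print(z_transform)   # the two agree up to the truncation of the sum
```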
Borel transform
The integral form of the Borel transform
F(s) = ∫_0^∞ f(z) e^{-sz} dz
is a special case of the Laplace transform for f an entire function of exponential type, meaning that
|f(z)| ≤ A e^{B|z|}
for some constants and . The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.
Fundamental relationships
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theory of the Laplace-, Fourier-, Mellin-, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.
Table of selected Laplace transforms
The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table. | Mathematics | Calculus and analysis | null |
18631 | https://en.wikipedia.org/wiki/Lorentz%20force | Lorentz force | In physics, specifically in electromagnetism, the Lorentz force law is the combination of electric and magnetic force on a point charge due to electromagnetic fields. The Lorentz force, on the other hand, is a physical effect that occurs in the vicinity of electrically neutral, current-carrying conductors causing moving electrical charges to experience a magnetic force.
The Lorentz force law states that a particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force (in SI units) of
F = q(E + v × B).
It says that the electromagnetic force on a charge is a combination of (1) a force in the direction of the electric field (proportional to the magnitude of the field and the quantity of charge), and (2) a force at right angles to both the magnetic field and the velocity of the charge (proportional to the magnitude of the field, the charge, and the velocity).
Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a moving charged particle.
Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force.
Lorentz force law as the definition of E and B
In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement:
The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form:
F = q(E + v × B).
This is valid, even for particles approaching the speed of light (that is, magnitude of v approaching c). So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time with respect to what force a test charge would receive regardless of whether a charge is present to experience the force.
Physical interpretation of the Lorentz force
Coulomb's law is only valid for point charges at rest. In fact, the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity. For small relative velocities and very small accelerations, instead of the Coulomb force, the Weber force can be applied. The sum of the Weber forces of all charge carriers in a closed DC loop on a single test charge produces – regardless of the shape of the current loop – the Lorentz force.
The interpretation of magnetism by means of a modified Coulomb law was first proposed by Carl Friedrich Gauss. In 1835, Gauss assumed that each segment of a DC loop contains an equal number of negative and positive point charges that move at different speeds. If Coulomb's law were completely correct, no force should act between any two short segments of such current loops. However, around 1825, André-Marie Ampère demonstrated experimentally that this is not the case. Ampère also formulated a force law. Based on this law, Gauss concluded that the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity.
The Weber force is a central force and complies with Newton's third law. This demonstrates not only the conservation of momentum but also that the conservation of energy and the conservation of angular momentum apply. Weber electrodynamics is only a quasistatic approximation, i.e. it should not be used for higher velocities and accelerations. However, the Weber force illustrates that the Lorentz force can be traced back to central forces between numerous point-like charge carriers.
Equation
Charged particle
The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (SI definition of quantities):
F = q(E + v × B)
where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have:
F_x = q(E_x + v_y B_z − v_z B_y),
F_y = q(E_y + v_z B_x − v_x B_z),
F_z = q(E_z + v_x B_y − v_y B_x).
In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as:
F(r(t), ṙ(t), t, q) = q [E(r, t) + ṙ(t) × B(r, t)]
in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative.
A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F).
The term qE is called the electric force, while the term qv × B is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: In what follows, the term Lorentz force will refer to the expression for the total force.
The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force.
The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is
P = F · v = q E · v.
Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle.
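As a numerical illustration (a sketch assuming NumPy; the charge, velocity, and field values are arbitrary), the force and the delivered power can be evaluated directly from the formulas above, and the magnetic contribution to the power is seen to vanish:

```python
import numpy as np

q = 1.6e-19                      # charge (C)
v = np.array([1.0e5, 0.0, 0.0])  # velocity (m/s)
E = np.array([2.0e3, 0.0, 0.0])  # electric field (V/m)
B = np.array([0.0, 0.0, 0.5])    # magnetic field (T)

F = q * (E + np.cross(v, B))     # Lorentz force F = q(E + v x B)
P = q * np.dot(E, v)             # power delivered to the particle, q E.v

print(F)                               # force vector in newtons
print(P)                               # 3.2e-11 W
print(np.dot(q * np.cross(v, B), v))   # 0: the magnetic force does no work
```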
Continuous charge distribution
For a continuous charge distribution in motion, the Lorentz force equation becomes:
dF = dq (E + v × B)
where dF is the force on a small piece of the charge distribution with charge dq. If both sides of this equation are divided by the volume of this small piece of the charge distribution dV, the result is:
f = ρ (E + v × B)
where f is the force density (force per unit volume) and ρ is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is
J = ρ v
so the continuous analogue to the equation is
f = ρ E + J × B.
The total force is the volume integral over the charge distribution:
F = ∫ (ρ E + J × B) dV.
By eliminating and , using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor , in turn this can be combined with the Poynting vector to obtain the electromagnetic stress–energy tensor T used in general relativity.
In terms of the Maxwell stress tensor σ and the Poynting vector S, another way to write the Lorentz force (per unit volume) is
f = ∇ · σ − (1/c²) ∂S/∂t
where c is the speed of light and ∇· denotes the divergence of a tensor field. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details.
The density of power associated with the Lorentz force in a material medium is J · E.
If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is
f = (ρ_f − ∇ · P) E + (J_f + ∇ × M + ∂P/∂t) × B
where: ρ_f is the density of free charge; P is the polarization density; J_f is the density of free current; and M is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is
(J_f + ∇ × M + ∂P/∂t) · E.
Formulation in the Gaussian system
The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead
F = q_G (E_G + (v/c) × B_G)
where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations:
q_G = q_SI / √(4πε_0),  E_G = √(4πε_0) E_SI,  B_G = √(4π/μ_0) B_SI,
where ε_0 is the vacuum permittivity and μ_0 the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context.
History
Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles, by Johann Tobias Mayer and others in 1760, and electrically charged objects, by Henry Cavendish in 1762, obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 when Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields.
The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as
Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) had fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name.
Trajectories of particles due to the Lorentz force
In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
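A rough numerical sketch of this behaviour (assuming SciPy; the uniform fields, the unit charge-to-mass ratio, and the integration settings are arbitrary) integrates the Lorentz force equation in crossed E and B fields and compares the average velocity of the particle with the expected E × B drift of the guiding center:

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.0, 1.0                  # charge and mass in arbitrary units
E = np.array([0.0, 1.0, 0.0])    # uniform electric field
B = np.array([0.0, 0.0, 2.0])    # uniform magnetic field

def rhs(t, y):
    r, v = y[:3], y[3:]
    a = (q / m) * (E + np.cross(v, B))   # acceleration from the Lorentz force
    return np.concatenate([v, a])

y0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # start at origin, moving in x
sol = solve_ivp(rhs, (0.0, 50.0), y0, max_step=0.01)

# Average velocity over many gyroperiods approximates the guiding-center drift
# E x B / |B|^2, which is (0.5, 0, 0) for these fields.
v_avg = (sol.y[:3, -1] - sol.y[:3, 0]) / (sol.t[-1] - sol.t[0])
print(v_avg)
print(np.cross(E, B) / np.dot(B, B))
```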
Significance of the Lorentz force
While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another.
In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory).
Force on a current-carrying wire
When a wire carrying an electric current is placed in a magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field:
F = I ℓ × B
where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I.
If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire dℓ, then adding up all these forces by integration. This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire with direction from starting to end point of the conventional current. Usually, there will also be a net torque.
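A minimal numerical sketch of the straight-wire case (assuming NumPy; the current, length vector, and field values are arbitrary):

```python
import numpy as np

I = 3.0                           # current (A)
L = np.array([0.0, 0.0, 0.5])     # length vector of a straight wire (m)
B = np.array([0.2, 0.0, 0.0])     # uniform magnetic field (T)

F = I * np.cross(L, B)            # F = I L x B
print(F)                          # [0.  0.3 0. ] newtons
```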
If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire,
F = I ∫ dℓ × B.
One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's magnetic field.
EMF
The magnetic force (q v × B) component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire.
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (q E) term in the Lorentz force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations).
Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa.
Lorentz force and Faraday's law of induction
Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is:
ℰ = −dΦ_B/dt
where
Φ_B = ∫_{Σ(t)} B(r, t) · dA
is the magnetic flux through the loop, B is the magnetic field, Σ(t) is a surface bounded by the closed contour ∂Σ(t) at time t, and dA is an infinitesimal vector area element of Σ(t) (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch).
The sign of the EMF is determined by Lenz's law. Note that this is valid not only for a stationary wire but also for a moving wire.
From Faraday's law of induction (which is valid for a moving wire, for instance in a motor) and the Maxwell equations, the Lorentz force can be deduced. The reverse is also true: the Lorentz force and the Maxwell equations can be used to derive Faraday's law.
Let ∂Σ(t) be the moving wire, moving together without rotation and with constant velocity v, and Σ(t) be the internal surface of the wire. The EMF around the closed path ∂Σ(t) is given by:
ℰ = ∮_{∂Σ(t)} (E + v × B) · dℓ
where E is the electric field and dℓ is an infinitesimal vector element of the contour ∂Σ(t).
NB: Both dℓ and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem.
The above result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called here the Maxwell–Faraday equation:
∇ × E = −∂B/∂t.
The Maxwell–Faraday equation also can be written in an integral form using the Kelvin–Stokes theorem.
So we have the Maxwell–Faraday equation:
∮_{∂Σ(t)} E · dℓ = −∫_{Σ(t)} (∂B/∂t)(r, t) · dA
and the Faraday law,
∮_{∂Σ(t)} (E + v × B) · dℓ = −(d/dt) ∫_{Σ(t)} B(r, t) · dA.
The two are equivalent if the wire is not moving. Using the Leibniz integral rule and the fact that ∇ · B = 0 results in,
and using the Maxwell Faraday equation,
since this is valid for any wire position it implies that,
Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law.
If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, will change. Alternatively, if the loop changes orientation with respect to the B-field, the differential element will change because of the different angle between and , also changing . As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in .
Note that the Maxwell–Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field, and it is not subject to the gradient theorem, since its curl is not zero.
Lorentz force in terms of potentials
The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential ϕ by
E = −∇ϕ − ∂A/∂t,  B = ∇ × A,
where ∇ is the gradient, ∇· is the divergence, and ∇× is the curl.
The force becomes
F = q [−∇ϕ − ∂A/∂t + v × (∇ × A)].
Using an identity for the triple product this can be rewritten as,
F = q [−∇ϕ − ∂A/∂t + ∇(v · A) − (v · ∇)A].
(Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on A, not on v; thus, there is no need of using Feynman's subscript notation in the equation above.) Using the chain rule, the total derivative of A is:
dA/dt = ∂A/∂t + (v · ∇)A,
so that the above expression becomes:
F = q [−∇(ϕ − v · A) − dA/dt].
With , we can put the equation into the convenient Euler–Lagrange form
where and
Lorentz force and analytical mechanics
The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by:
L = (m/2) ṙ · ṙ + q A · ṙ − q ϕ
where A and ϕ are the potential fields as above. The quantity V = q(ϕ − A · ṙ) can be thought of as a velocity-dependent potential function. Using Lagrange's equations, the equation for the Lorentz force given above can be obtained again.
The potential energy depends on the velocity of the particle, so the force is velocity dependent, so it is not conservative.
The relativistic Lagrangian is
L = −mc² √(1 − (ṙ · ṙ)/c²) + q A · ṙ − q ϕ.
The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential.
Relativistic form of the Lorentz force
Covariant form of the Lorentz force
Field tensor
Using the metric signature (1, −1, −1, −1), the Lorentz force for a charge q can be written in covariant form:
dp^α/dτ = q F^{αβ} U_β
where p^α is the four-momentum, defined as
p^α = (p^0, p^1, p^2, p^3) = (γ m c, p_x, p_y, p_z),
τ the proper time of the particle, the contravariant electromagnetic tensor F^{αβ} with nonzero components
F^{0i} = −E_i/c,  F^{i0} = E_i/c,  F^{12} = −B_z,  F^{13} = B_y,  F^{23} = −B_x (and the antisymmetric counterparts),
and U is the covariant 4-velocity of the particle, defined as:
U_β = (U_0, U_1, U_2, U_3) = γ(c, −u_x, −u_y, −u_z),
in which
γ(u) = 1/√(1 − u²/c²)
is the Lorentz factor.
The fields are transformed to a frame moving with constant relative velocity by:
F′^{μν} = Λ^μ_α Λ^ν_β F^{αβ},
where Λ^μ_α is the Lorentz transformation tensor.
Translation to vector notation
The component (x-component) of the force is
Substituting the components of the covariant electromagnetic tensor F yields
Using the components of covariant four-velocity yields
The calculation for (force components in the and directions) yields similar results, so collecting the 3 equations into one:
and since differentials in coordinate time and proper time are related by the Lorentz factor,
so we arrive at
This is precisely the Lorentz force law; however, it is important to note that p here is the relativistic expression for the momentum, p = γ(v) m v.
Lorentz force in spacetime algebra (STA)
The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic field F, and an arbitrary time direction, γ_0. This can be settled through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space, as
and
is a spacetime bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector γ_0 pulls a vector (in the space algebra) from the translational part, while the wedge product creates a trivector (in the space algebra) which is dual to a vector, the usual magnetic field vector.
The relativistic velocity is given by the (time-like) changes in a time-position vector where
(which shows our choice for the metric) and the velocity is
The proper (invariant is an inadequate term because no transformation has been defined) form of the Lorentz force law is simply
Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression.
Lorentz force in general relativity
In the general theory of relativity the equation of motion for a particle with mass m and charge e, moving in a space with metric tensor g_{ab} and electromagnetic field F_{ab}, is given as
where ( is taken along the trajectory), and
The equation can also be written as
where Γ^a_{bc} is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as
where is the covariant differential in general relativity (metric, torsion-free).
Applications
The Lorentz force occurs in many devices, including:
Cyclotrons and other circular path particle accelerators
Mass spectrometers
Velocity Filters
Magnetrons
Lorentz force velocimetry
In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices including:
Electric motors
Railguns
Linear motors
Loudspeakers
Magnetoplasmadynamic thrusters
Electrical generators
Homopolar generators
Linear alternators
| Physical sciences | Electrodynamics | null |
18634 | https://en.wikipedia.org/wiki/Lemma%20%28mathematics%29 | Lemma (mathematics) | In mathematics and other fields, a lemma (: lemmas or lemmata) is a generally minor, proven proposition which is used to prove a larger statement. For that reason, it is also known as a "helping theorem" or an "auxiliary theorem". In many cases, a lemma derives its importance from the theorem it aims to prove; however, a lemma can also turn out to be more important than originally thought.
Etymology
From the Ancient Greek λῆμμα (perfect passive εἴλημμαι), "something received or taken"; thus, something taken for granted in an argument.
Comparison with theorem
There is no formal distinction between a lemma and a theorem, only one of intention (see Theorem terminology). However, a lemma can be considered a minor result whose sole purpose is to help prove a more substantial theorem – a step in the direction of proof.
Well-known lemmas
Some powerful results in mathematics are known as lemmas, first named for their originally minor purpose. These include, among others:
Bézout's lemma
Burnside's lemma
Dehn's lemma
Euclid's lemma
Farkas' lemma
Fatou's lemma
Gauss's lemma (any of several named after Carl Friedrich Gauss)
Greendlinger's lemma
Itô's lemma
Jordan's lemma
Lovász local lemma
Nakayama's lemma
Poincaré's lemma
Riesz's lemma
Schur's lemma
Schwarz's lemma
Sperner's lemma
Urysohn's lemma
Vitali covering lemma
Yoneda's lemma
Zorn's lemma
While these results originally seemed too simple or too technical to warrant independent interest, they have eventually turned out to be central to the theories in which they occur.
| Mathematics | Basics | null |
18643 | https://en.wikipedia.org/wiki/Lactose | Lactose | Lactose, or milk sugar, is a disaccharide composed of galactose and glucose and has the molecular formula C12H22O11. Lactose makes up around 2–8% of milk (by mass). The name comes from lac (gen. lactis), the Latin word for milk, plus the suffix -ose used to name sugars. The compound is a white, water-soluble, non-hygroscopic solid with a mildly sweet taste. It is used in the food industry.
Structure and reactions
Lactose is a disaccharide composed of galactose and glucose, which form a β-1→4 glycosidic linkage. Its systematic name is β-D-galactopyranosyl-(1→4)-D-glucose. The glucose can be in either the α-pyranose form or the β-pyranose form, whereas the galactose can have only the β-pyranose form: hence α-lactose and β-lactose refer to the anomeric form of the glucopyranose ring alone. Detection reactions for lactose are the Wöhlk- and Fearon's test. They can be used to detect the different lactose content of dairy products such as whole milk, lactose free milk, yogurt, buttermilk, coffee creamer, sour cream, kefir, etc.
Lactose is hydrolysed to glucose and galactose, isomerised in alkaline solution to lactulose, and catalytically hydrogenated to the corresponding polyhydric alcohol, lactitol. Lactulose is a commercial product, used for treatment of constipation.
Occurrence and isolation
Lactose comprises about 2–8% of milk by weight. Several million tons are produced annually as a by-product of the dairy industry.
Whey or milk plasma is the liquid remaining after milk is curdled and strained, for example in the production of cheese. Whey is made up of 6.5% solids, of which 4.8% is lactose, which is purified by crystallisation. Industrially, lactose is produced from whey permeate – whey filtrated for all major proteins. The protein fraction is used in infant nutrition and sports nutrition while the permeate can be evaporated to 60–65% solids and crystallized while cooling. Lactose can also be isolated by dilution of whey with ethanol.
Dairy products such as yogurt and cheese contain very little lactose. This is because the bacteria used to make these products break down lactose through the use of β-galactosidases.
Metabolism
Infant mammals nurse on their mothers to drink milk, which is rich in lactose. The intestinal villi secrete the enzyme lactase (β-D-galactosidase) to digest it. This enzyme cleaves the lactose molecule into its two subunits, the simple sugars glucose and galactose, which can be absorbed. Since lactose occurs mostly in milk, in most mammals, the production of lactase gradually decreases with maturity due to weaning; the removal of lactose from the diet removes the metabolic pressure to continue to produce lactase for its digestion.
Many people with ancestry in Europe, West Asia, South Asia, the Sahel belt in West Africa, East Africa and a few other parts of Central Africa maintain lactase production into adulthood due to selection for genes that continue lactase production. In many of these areas, milk from mammals such as cattle, goats, and sheep is used as a large source of food. Hence, it was in these regions that genes for lifelong lactase production first evolved. The genes of adult lactose tolerance have evolved independently in various ethnic groups. By descent, more than 70% of western Europeans can digest lactose as adults, compared with less than 30% of people from areas of Africa, eastern and south-eastern Asia and Oceania. In people who are lactose intolerant, lactose is not broken down and provides food for gas-producing gut flora, which can lead to diarrhea, bloating, flatulence, and other gastrointestinal symptoms.
Biological properties
The sweetness of lactose is 0.2 to 0.4, relative to 1.0 for sucrose. For comparison, the sweetness of glucose is 0.6 to 0.7, of fructose is 1.3, of galactose is 0.5 to 0.7, of maltose is 0.4 to 0.5, of sorbose is 0.4, and of xylose is 0.6 to 0.7.
When lactose is completely digested in the small intestine, its caloric value is 4 kcal/g, or the same as that of other carbohydrates. However, lactose is not always fully digested in the small intestine. Depending on ingested dose, combination with meals (either solid or liquid), and lactase activity in the intestines, the caloric value of lactose ranges from 2 to 4 kcal/g. Undigested lactose acts as dietary fiber. It also has positive effects on absorption of minerals, such as calcium and magnesium.
The glycemic index of lactose is 46 to 65. For comparison, the glycemic index of glucose is 100 to 138, of sucrose is 68 to 92, of maltose is 105, and of fructose is 19 to 27.
Lactose has relatively low cariogenicity among sugars. This is because it is not a substrate for dental plaque formation and it is not rapidly fermented by oral bacteria. The buffering capacity of milk also reduces the cariogenicity of lactose.
Applications
Its mild flavor and easy handling properties have led to its use as a carrier and stabiliser of aromas and pharmaceutical products. Lactose is not added directly to many foods, because its solubility is less than that of other sugars commonly used in food. Infant formula is a notable exception, where the addition of lactose is necessary to match the composition of human milk.
Lactose is not fermented by most yeast during brewing, which may be used to advantage. For example, lactose may be used to sweeten stout beer; the resulting beer is usually called a milk stout or a cream stout.
Yeast belonging to the genus Kluyveromyces have a unique industrial application, as they are capable of fermenting lactose for ethanol production. Surplus lactose from the whey by-product of dairy operations is a potential source of alternative energy.
Another significant lactose use is in the pharmaceutical industry. Lactose is added to tablet and capsule drug products as an ingredient because of its physical and functional properties (examples are atorvastatin, levocetirizine or thiamazole among many others). For similar reasons, it can be used to dilute illicit drugs such as cocaine or heroin.
History
The first crude isolation of lactose, by Italian physician Fabrizio Bartoletti (1576–1630), was published in 1633. In 1700, the Venetian pharmacist Lodovico Testi (1640–1707) published a booklet of testimonials to the power of milk sugar () to relieve, among other ailments, the symptoms of arthritis. In 1715, Testi's procedure for making milk sugar was published by Antonio Vallisneri. Lactose was identified as a sugar in 1780 by Carl Wilhelm Scheele.
In 1812, Heinrich Vogel (1778–1867) recognized that glucose was a product of hydrolyzing lactose. In 1856, Louis Pasteur crystallized the other component of lactose, galactose. By 1894, Emil Fischer had established the configurations of the component sugars.
Lactose was named by the French chemist Jean Baptiste André Dumas (1800–1884) in 1843. In 1856, Pasteur named galactose "lactose". In 1860, Marcellin Berthelot renamed it "galactose", and transferred the name "lactose" to what is now called lactose. It has a formula of C12H22O11 and the hydrate formula C12H22O11·H2O, making it an isomer of sucrose.
| Biology and health sciences | Biochemistry and molecular biology | null |
18739 | https://en.wikipedia.org/wiki/Laika | Laika | Laika ( ; , ; – 3 November 1957) was a Soviet space dog who was one of the first animals in space and the first to orbit the Earth. A stray mongrel from the streets of Moscow, she flew aboard the Sputnik 2 spacecraft, launched into low orbit on 3 November 1957. As the technology to re-enter the atmosphere had not yet been developed, Laika's survival was never expected. She died of hyperthermia hours into the flight, on the craft's fourth orbit.
Little was known about the effects of spaceflight on living creatures at the time of Laika's mission, and animal flights were viewed by engineers as a necessary precursor to human missions. The experiment, which monitored Laika's vital signs, aimed to prove that a living organism could survive being launched into orbit and continue to function under conditions of weakened gravity and increased radiation, providing scientists with some of the first data on the biological effects of spaceflight.
Laika's death was possibly caused by a failure of the central R7 sustainer to separate from the payload. The true cause and time of her death were not made public until 2002; instead, it was widely reported that she died when her oxygen ran out on day six or, as the Soviet government initially claimed, she was euthanised prior to oxygen depletion. In 2008, a small monument to Laika depicting her standing atop a rocket was unveiled near the military research facility in Moscow that prepared her flight. She also appears on the Monument to the Conquerors of Space in Moscow.
Sputnik 2
After the success of Sputnik 1 in October 1957, Nikita Khrushchev, leader of the Soviet Union, wanted a spacecraft launched on 7 November 1957 to celebrate the 40th anniversary of the October Revolution. Khrushchev specifically wanted to deliver a "space spectacular", a mission that would repeat the triumph of Sputnik1, stunning the world with Soviet prowess.
While construction had already started on Sputnik 3, a more sophisticated satellite, it would not be ready until December. To meet the November deadline, a new, simpler satellite would need to be built. Sergei Korolev proposed that a dog be placed in the satellite, an idea which was quickly adopted by planners. Soviet rocket engineers had long intended a canine orbit before attempting human spaceflight; since 1951, they had lofted 12 dogs into sub-orbital space on ballistic flights, working gradually toward an orbital mission set for some time in 1958. To satisfy Khrushchev's demands, they expedited the orbital canine flight for the November launch.
According to Russian sources, the official decision to launch Sputnik2 was made on 10 or 12 October, leaving less than four weeks to design and build the spacecraft. Sputnik2, therefore, was something of a rushed job, with most elements of the spacecraft being constructed from rough sketches. Aside from the primary mission of sending a living passenger into space, Sputnik2 also contained instrumentation for measuring solar irradiance and cosmic rays.
The craft was equipped with a life-support system consisting of an oxygen generator and devices to avoid oxygen poisoning and to absorb carbon dioxide. A fan, designed to activate whenever the cabin temperature exceeded , was added to keep the dog cool. Enough food (in a gelatinous form) was provided for a seven-day flight, and the dog was fitted with a bag to collect waste. A harness was designed to be fitted to the dog, and there were chains to restrict her movements to standing, sitting, or lying down; there was no room to turn around in the cabin. An electrocardiogram monitored heart rate and further instrumentation tracked respiration rate, maximum arterial pressure, and the dog's movements.
Training
Laika was found as a stray wandering the streets of Moscow. Soviet scientists chose to use Moscow strays since they assumed that such animals had already learned to endure conditions of extreme cold and hunger. She was a mongrel female, approximately three years old. Another account reported that she weighed about . Soviet personnel gave her several names and nicknames, among them Kudryavka (Russian for Little Curly), Zhuchka (Little Bug), and Limonchik (Little Lemon). Laika, the Russian name for several breeds of dogs similar to the husky, was the name popularised around the world. Its literal translation would be "Barker", from the Russian verb "layat" (лаять), "to bark". According to some accounts, the technicians actually renamed her from Kudryavka to Laika due to her loud barking. The American press dubbed her Muttnik (mutt + suffix -nik) as a pun on Sputnik, or referred to her as Curly. Her true pedigree is unknown, although it is generally accepted that she was part husky or other Nordic breed, and possibly part terrier. NASA refers to Laika as a "part-Samoyed terrier". A Russian magazine described her temperament as phlegmatic, saying that she did not quarrel with other dogs.
The Soviet Union and United States had previously sent animals only on sub-orbital flights. Three dogs were trained for the Sputnik2 flight: Albina, Mushka, and Laika. Soviet space-life scientists Vladimir Yazdovsky and Oleg Gazenko trained the dogs.
To adapt the dogs to the confines of the tiny cabin of Sputnik2, they were kept in progressively smaller cages for periods of up to twenty days. The extensive close confinement caused them to stop urinating or defecating, made them restless, and caused their general condition to deteriorate. Laxatives did not improve their condition, and the researchers found that only long periods of training proved effective. The dogs were placed in centrifuges that simulated the acceleration of a rocket launch and were placed in machines that simulated the noises of the spacecraft. This caused their pulses to double and their blood pressure to increase by . The dogs were trained to eat a special high-nutrition gel that would be their food in space.
Ten days before the launch, Vladimir Yazdovsky chose Laika to be the primary flight dog. Before the launch, Yazdovsky took Laika home to play with his children. In a book chronicling the story of Soviet space medicine, he wrote, "Laika was quiet and charming... I wanted to do something nice for her: She had so little time left to live."
Preflight preparations
Yazdovsky made the final selection of dogs and their designated roles. Laika was to be the "flight dog", a sacrifice to science on a one-way mission to space. Albina, who had already flown twice on a high-altitude test rocket, was to act as Laika's backup. The third dog, Mushka, was a "control dog": she was to stay on the ground and be used to test instrumentation and life support.
Before leaving for the Baikonur Cosmodrome, Yazdovsky and Gazenko conducted surgery on the dogs, routing the cables from the transmitters to the sensors that would measure breathing, pulse, and blood pressure.
Because the existing airstrip at Turatam near the cosmodrome was small, the dogs and crew had to be first flown aboard a Tu-104 plane to Tashkent. From there, a smaller and lighter Il-14 plane took them to Turatam. Training of the dogs continued upon arrival; one after another, they were placed in the capsules to get familiar with the feeding system.
According to a NASA document, Laika was placed in the capsule of the satellite on 31 October 1957, three days before the start of the mission. At that time of year, the temperatures at the launch site were extremely low, and a hose connected to a heater was used to keep her container warm. Two assistants were assigned to keep a constant watch on Laika before launch. Just prior to liftoff on 3 November 1957 from Baikonur Cosmodrome, Laika's fur was sponged in a weak ethanol solution and carefully groomed, while iodine was painted onto the areas where sensors would be placed to monitor her bodily functions.
One of the technicians preparing the capsule before final lift-off stated: "After placing Laika in the container and before closing the hatch, we kissed her nose and wished her bon voyage, knowing that she would not survive the flight."
Voyage
Accounts of the time of launch vary from source to source, given as 05:30:42 Moscow Time or 07:22 Moscow Time.
At peak acceleration, Laika's respiration increased to between three and four times the pre-launch rate. The sensors showed her heart rate was 103 beats/min before launch and increased to 240 beats/min during the early acceleration. After reaching orbit, Sputnik2's nose cone was jettisoned successfully; however, the "Block A" core did not separate as planned, preventing the thermal control system from operating correctly. Some of the thermal insulation tore loose, raising the cabin temperature to . After three hours of weightlessness, Laika's pulse rate had settled back to 102 beats/min, three times longer than it had taken during earlier ground tests, an indication of the stress she was under. The early telemetry indicated that Laika was agitated but eating her food. After approximately five to seven hours into the flight, no further signs of life were received from the spacecraft.
The Soviet scientists had planned to euthanise Laika with a serving of poisoned food. For many years, the Soviet Union gave conflicting statements that she had died either from asphyxia, when the batteries failed, or that she had been euthanised. Many rumours circulated about the exact manner of her death. In 1999, several Russian sources reported that Laika had died when the cabin overheated on the fourth day. In October 2002, Dimitri Malashenkov, one of the scientists behind the Sputnik2 mission, revealed that Laika had died by the fourth circuit of flight from overheating. According to a paper he presented to the World Space Congress in Houston, Texas, "It turned out that it was practically impossible to create a reliable temperature control system in such limited time constraints."
Over five months later, after 2,570 orbits, Sputnik2 (including Laika's remains) disintegrated during re-entry on 14 April 1958.
Ethics of animal testing
Due to the overshadowing issue of the Soviet–U.S. Space Race, the ethical issues raised by this experiment went largely unaddressed for some time. As newspaper clippings from 1957 show, the press was initially focused on reporting the political perspective, while Laika's health and retrieval, or lack thereof, only became an issue later.
Sputnik 2 was not designed to be retrievable, and it had always been accepted that Laika would die. The mission sparked a debate across the globe on the mistreatment of animals and animal testing in general to advance science. In the United Kingdom, the National Canine Defence League called on all dog owners to observe a minute's silence on each day Laika remained in space, while the Royal Society for the Prevention of Cruelty to Animals (RSPCA) received protests even before Radio Moscow had finished announcing the launch. Animal rights groups at the time called on members of the public to protest at Soviet embassies. Others demonstrated outside the United Nations in New York. Laboratory researchers in the US offered some support for the Soviets, at least before the news of Laika's death.
In the Soviet Union, there was less controversy. Neither the media, books in the following years, nor the public openly questioned the decision to send a dog into space. In 1998, after the collapse of the Soviet regime, Oleg Gazenko, one of the scientists responsible for sending Laika into space, expressed regret for allowing her to die:
In other Warsaw Pact countries, open criticism of the Soviet space program was difficult because of political censorship, but there were notable cases of criticism in Polish scientific circles. A Polish scientific periodical, Kto, Kiedy, Dlaczego ("Who, When, Why"), published in 1958, discussed the mission of Sputnik2. In the periodical's section dedicated to astronautics, Krzysztof Boruń described the Sputnik2 mission as "regrettable" and criticised not bringing Laika back to Earth alive as "undoubtedly a great loss for science".
Legacy
Laika is memorialised in the form of a statue and plaque at Star City, the Russian Cosmonaut training facility. Created in 1997, Laika is positioned behind the cosmonauts with her ears erect. The Monument to the Conquerors of Space in Moscow, constructed in 1964, also includes Laika. On 11 April 2008 at the military research facility where staff had been responsible for readying Laika for the flight, officials unveiled a monument of her poised on top of a space rocket. Stamps and envelopes picturing Laika were produced, as well as branded cigarettes and matches.
Future space missions carrying dogs would be designed to be recovered; the first successful recovery followed the flight of Korabl-Sputnik 2, wherein the dogs Belka and Strelka, alongside dozens of other organisms, safely returned to Earth. Nonetheless, four other dogs later died in Soviet space missions: Bars and Lisichka were killed when their R7 rocket exploded shortly after launch on 28 July 1960, while Pchyolka and Mushka died when Korabl-Sputnik 3 suffered an emergency and had to be detonated.
The allegorical title of Karl Schroeder's science fiction novelette Laika's Ghost is an allusion to what a critic wrote about the novelette: "[ Gennady Malianov ] discovers that the only people ready to take up the dream of flight to other worlds are aged remnants of the former Soviet Union".
| Biology and health sciences | Individual animals | Animals |
18831 | https://en.wikipedia.org/wiki/Mathematics | Mathematics | Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics).
Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a proof consisting of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of the theory under consideration.
Mathematics is essential in the natural sciences, engineering, medicine, finance, computer science, and the social sciences. Although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent of any scientific experimentation. Some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. Other areas are developed independently from any application (and are therefore called pure mathematics) but often later find practical applications.
Historically, the concept of a proof and its associated mathematical rigour first appeared in Greek mathematics, most notably in Euclid's Elements. Since its beginning, mathematics was primarily divided into geometry and arithmetic (the manipulation of natural numbers and fractions), until the 16th and 17th centuries, when algebra and infinitesimal calculus were introduced as new fields. Since then, the interaction between mathematical innovations and scientific discoveries has led to a correlated increase in the development of both. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method, which heralded a dramatic increase in the number of mathematical areas and their fields of application. The contemporary Mathematics Subject Classification lists more than sixty first-level areas of mathematics.
Areas of mathematics
Before the Renaissance, mathematics was divided into two main areas: arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes. Some types of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics.
During the Renaissance, two more areas appeared. Mathematical notation led to algebra which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two subfields differential calculus and integral calculus, is the study of continuous functions, which model the typically nonlinear relationships between varying quantities, as represented by variables. This division into four main areas (arithmetic, geometry, algebra, and calculus) endured until the end of the 19th century. Areas such as celestial mechanics and solid mechanics were then studied by mathematicians, but now are considered as belonging to physics. The subject of combinatorics has been studied for much of recorded history, yet did not become a separate branch of mathematics until the seventeenth century.
At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion of new areas of mathematics. The 2020 Mathematics Subject Classification contains more than sixty first-level areas. Some of these areas correspond to the older division, as is true regarding number theory (the modern name for higher arithmetic) and geometry. Several other first-level areas have "geometry" in their names or are otherwise commonly considered part of geometry. Algebra and calculus do not appear as first-level areas but are respectively split into several first-level areas. Other first-level areas emerged during the 20th century or had not previously been considered as mathematics, such as mathematical logic and foundations.
Number theory
Number theory began with the manipulation of numbers, that is, natural numbers, and was later expanded to integers and rational numbers. Number theory was once called arithmetic, but nowadays this term is mostly used for numerical calculations. Number theory dates back to ancient Babylon and probably China. Two prominent early number theorists were Euclid of ancient Greece and Diophantus of Alexandria. The modern study of number theory in its abstract form is largely attributed to Pierre de Fermat and Leonhard Euler. The field came to full fruition with the contributions of Adrien-Marie Legendre and Carl Friedrich Gauss.
Many easily stated number problems have solutions that require sophisticated methods, often from across mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, who used tools including scheme theory from algebraic geometry, category theory, and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort.
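To make Goldbach's assertion concrete, a small brute-force check is easy to write; the following Python sketch (not part of the article, with illustrative helper names) verifies the conjecture for every even number up to 1,000.

```python
# A minimal sketch that checks Goldbach's conjecture for small even numbers by brute force.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to the even number n, if one exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 1,000 should have at least one decomposition.
assert all(goldbach_pair(n) for n in range(4, 1001, 2))
```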
Number theory includes several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), diophantine equations, and transcendence theory (problem oriented).
Geometry
Geometry is one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture, but has since blossomed out into many other subfields.
A fundamental innovation was the ancient Greeks' introduction of the concept of proofs, which require that every assertion must be proved. For example, it is not sufficient to verify by measurement that, say, two lengths are equal; their equality must be proven via reasoning from previously accepted results (theorems) and a few basic statements. The basic statements are not subject to proof because they are self-evident (postulates), or are part of the definition of the subject of study (axioms). This principle, foundational for all mathematics, was first elaborated for geometry, and was systematized by Euclid around 300 BC in his book Elements.
The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the three-dimensional Euclidean space.
Euclidean geometry was developed without change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This constituted a major change of paradigm: Instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points using their coordinates, which are numbers. Algebra (and later, calculus) can thus be used to solve geometrical problems. Geometry was split into two new subfields: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically.
Analytic geometry allows the study of curves unrelated to circles and lines. Such curves can be defined as the graph of functions, the study of which led to differential geometry. They can also be defined as implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider Euclidean spaces of higher than three dimensions.
In the 19th century, mathematicians discovered non-Euclidean geometries, which do not follow the parallel postulate. By questioning that postulate's truth, this discovery has been viewed as joining Russell's paradox in revealing the foundational crisis of mathematics. This aspect of the crisis was solved by systematizing the axiomatic method, and adopting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that do not change under specific transformations of the space.
Today's subareas of geometry include:
Projective geometry, introduced in the 16th century by Girard Desargues, extends Euclidean geometry by adding points at infinity at which parallel lines intersect. This simplifies many aspects of classical geometry by unifying the treatments for intersecting and parallel lines.
Affine geometry, the study of properties relative to parallelism and independent from the concept of length.
Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions.
Manifold theory, the study of shapes that are not necessarily embedded in a larger space.
Riemannian geometry, the study of distance properties in curved spaces.
Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials.
Topology, the study of properties that are kept under continuous deformations.
Algebraic topology, the use in topology of algebraic methods, mainly homological algebra.
Discrete geometry, the study of finite configurations in geometry.
Convex geometry, the study of convex sets, which takes its importance from its applications in optimization.
Complex geometry, the geometry obtained by replacing real numbers with complex numbers.
Algebra
Algebra is the art of manipulating equations and formulas. Diophantus (3rd century) and al-Khwarizmi (9th century) were the two main precursors of algebra. Diophantus solved some equations involving unknown natural numbers by deducing new relations until he obtained the solution. Al-Khwarizmi introduced systematic methods for transforming equations, such as moving a term from one side of an equation into the other side. The term algebra is derived from the Arabic word al-jabr meaning 'the reunion of broken parts' that he used for naming one of these methods in the title of his main treatise.
Algebra became an area in its own right only with François Viète (1540–1603), who introduced the use of variables for representing unknown or unspecified numbers. Variables allow mathematicians to describe the operations that have to be done on the numbers represented using mathematical formulas.
Until the 19th century, algebra consisted mainly of the study of linear equations (presently linear algebra), and polynomial equations in a single unknown, which were called algebraic equations (a term still in use, although it may be ambiguous). During the 19th century, mathematicians began to use variables to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which generalizations of arithmetic operations are often valid. The concept of algebraic structure addresses this, consisting of a set whose elements are unspecified, of operations acting on the elements of the set, and rules that these operations must follow. The scope of algebra thus grew to include the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, as established by the influence and works of Emmy Noether.
Some types of algebraic structures have useful and often fundamental properties, in many areas of mathematics. Their study became autonomous parts of algebra, and include:
group theory
field theory
vector spaces, whose study is essentially the same as linear algebra
ring theory
commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry
homological algebra
Lie algebra and Lie group theory
Boolean algebra, which is widely used for the study of the logical structure of computers
The study of types of algebraic structures as mathematical objects is the purpose of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra for allowing the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.
Calculus and analysis
Calculus, formerly called infinitesimal calculus, was introduced independently and simultaneously by 17th-century mathematicians Newton and Leibniz. It is fundamentally the study of the relationship of variables that depend on each other. Calculus was expanded in the 18th century by Euler with the introduction of the concept of a function and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts.
Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Analysis includes many subareas shared by other areas of mathematics which include:
Multivariable calculus
Functional analysis, where variables represent varying functions
Integration, measure theory and potential theory, all strongly related with probability theory on a continuum
Ordinary differential equations
Partial differential equations
Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications
Discrete mathematics
Discrete mathematics, broadly speaking, is the study of individual, countable mathematical objects. An example is the set of all integers. Because the objects of study here are discrete, the methods of calculus and mathematical analysis do not directly apply. Algorithms, especially their implementation and computational complexity, play a major role in discrete mathematics.
The four color theorem and optimal sphere packing were two major problems of discrete mathematics solved in the second half of the 20th century. The P versus NP problem, which remains open to this day, is also important for discrete mathematics, since its solution would potentially impact a large number of computationally difficult problems.
Discrete mathematics includes:
Combinatorics, the art of enumerating mathematical objects that satisfy some given constraints. Originally, these objects were elements or subsets of a given set; this has been extended to various objects, which establishes a strong link between combinatorics and other parts of discrete mathematics. For example, discrete geometry includes counting configurations of geometric shapes.
Graph theory and hypergraphs
Coding theory, including error correcting codes and a part of cryptography
Matroid theory
Discrete geometry
Discrete probability distributions
Game theory (although continuous games are also studied, most common games, such as chess and poker, are discrete)
Discrete optimization, including combinatorial optimization, integer programming, constraint programming
Mathematical logic and set theory
The two subjects of mathematical logic and set theory have belonged to mathematics since the end of the 19th century. Before this period, sets were not considered to be mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians.
Before Cantor's study of infinite sets, mathematicians were reluctant to consider actually infinite collections, and considered infinity to be the result of endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets but by showing that this implies different sizes of infinity, per Cantor's diagonal argument. This led to the controversy over Cantor's set theory. In the same period, various areas of mathematics concluded the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour.
This became the foundational crisis of mathematics. It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning. This mathematical abstraction from reality is embodied in the modern philosophy of formalism, as founded by David Hilbert around 1910.
The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature, and use their opinionsometimes called "intuition"to guide their study and proofs. The approach allows considering "logics" (that is, sets of allowed deducing rules), theorems, proofs, etc. as mathematical objects, and to prove theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking that, in every consistent formal system that contains the natural numbers, there are theorems that are true (that is provable in a stronger system), but not provable inside the system. This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by Brouwer, who promoted intuitionistic logic, which explicitly lacks the law of excluded middle.
These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, formal verification, program analysis, proof assistants and other aspects of computer science, contributed in turn to the expansion of these logical theories.
Statistics and other decision sciences
The field of statistics is a mathematical application that is employed for the collection and processing of data samples, using procedures based on mathematical methods, especially probability theory. Statisticians generate data with random sampling or randomized experiments.
Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints. For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics overlaps with other decision sciences, such as operations research, control theory, and mathematical economics.
Computational mathematics
Computational mathematics is the study of mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretization with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
History
Etymology
The word mathematics comes from the Ancient Greek word máthēma, meaning 'that which is learnt', and the derived expression mathēmatikḗ tékhnē, meaning 'mathematical science'. It entered the English language during the Late Middle English period through French and Latin.
Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί), which at the time meant "learners" rather than "mathematicians" in the modern sense. The Pythagoreans were likely the first to constrain the use of the word to just the study of arithmetic and geometry. By the time of Aristotle (384–322 BC) this meaning was fully established.
In Latin and English, until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This change has resulted in several mistranslations: For example, Saint Augustine's warning that Christians should beware of mathematici, meaning "astrologers", is sometimes mistranslated as a condemnation of mathematicians.
The apparent plural form in English goes back to the Latin neuter plural (Cicero), based on the Greek plural ta mathēmatiká () and means roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, inherited from Greek. In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math.
Ancient
In addition to recognizing how to count physical objects, prehistoric peoples may have also known how to count abstract quantities, like time: days, seasons, or years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.
In the 6th century BC, Greek mathematics began to emerge as a distinct discipline and some Ancient Greeks such as the Pythagoreans appeared to have considered it a subject in its own right. Around 300 BC, Euclid organized mathematical knowledge by way of postulates and first principles, which evolved into the axiomatic method that is used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes (c. 287 – c. 212 BC) of Syracuse. He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC), trigonometry (Hipparchus of Nicaea, 2nd century BC), and the beginnings of algebra (Diophantus, 3rd century AD).
The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series.
Medieval and later
During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. The Greek and Arabic mathematical texts were in turn translated to Latin during the Middle Ages and made available in Europe.
During the early modern period, mathematics began to develop at an accelerating pace in Western Europe, with innovations that revolutionized mathematics, such as the introduction of variables and symbolic notation by François Viète (1540–1603), the introduction of logarithms by John Napier in 1614, which greatly simplified numerical calculations, especially for astronomy and marine navigation, the introduction of coordinates by René Descartes (1596–1650) for reducing geometry to algebra, and the development of calculus by Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716). Leonhard Euler (1707–1783), the most notable mathematician of the 18th century, unified these innovations into a single corpus with a standardized terminology, and completed them with the discovery and the proof of numerous theorems.
Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system, if powerful enough to describe arithmetic, will contain true propositions that cannot be proved.
Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews (MR) database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."
Symbolic notation and terminology
Mathematical notation is widely used in science and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way. This notation consists of symbols used for representing operations, unspecified numbers, relations and any other mathematical objects, and then assembling them into expressions and formulas. More precisely, numbers and other mathematical objects are represented by symbols called variables, which are generally Latin or Greek letters, and often include subscripts. Operations and relations are generally represented by specific symbols or glyphs, such as + (plus), × (multiplication), ∫ (integral), = (equal), and < (less than). All these symbols are generally grouped according to specific rules to form expressions and formulas. Normally, expressions and formulas do not appear alone, but are included in sentences of the current language, where expressions play the role of noun phrases and formulas play the role of clauses.
Mathematics has developed a rich terminology covering a broad range of fields that study the properties of various abstract, idealized objects and how they interact. It is based on rigorous definitions that provide a standard foundation for communication. An axiom or postulate is a mathematical statement that is taken to be true without need of proof. If a mathematical statement has yet to be proven (or disproven), it is termed a conjecture. Through a series of rigorous arguments employing deductive reasoning, a statement that is proven to be true becomes a theorem. A specialized theorem that is mainly used to prove another theorem is called a lemma. A proven instance that forms part of a more general finding is termed a corollary.
Numerous technical terms used in mathematics are neologisms, such as polynomial and homeomorphism. Other technical terms are words of the common language that are used in an accurate meaning that may differ slightly from their common meaning. For example, in mathematics, "or" means "one, the other or both", while, in common language, it is either ambiguous or means "one or the other but not both" (in mathematics, the latter is called "exclusive or"). Finally, many mathematical terms are common words that are used with a completely different meaning. This may lead to sentences that are correct and true mathematical assertions, but appear to be nonsense to people who do not have the required background. For example, "every free module is flat" and "a field is always a ring".
Relationship with sciences
Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.
There is still a philosophical debate about whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. As in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.
Pure and applied mathematics
Until the 19th century, the development of mathematics in the West was mainly motivated by the needs of technology and science, and there was no clear distinction between pure and applied mathematics. For example, the natural numbers and arithmetic were introduced for the need of counting, and geometry was motivated by surveying, architecture and astronomy. Later, Isaac Newton introduced infinitesimal calculus for explaining the movement of the planets with his law of gravitation. Moreover, most mathematicians were also scientists, and many scientists were also mathematicians. However, a notable exception occurred with the tradition of pure mathematics in Ancient Greece. The problem of integer factorization, for example, which goes back to Euclid in 300 BC, had no practical application before its use in the RSA cryptosystem, now widely used for the security of computer networks.
In the 19th century, mathematicians such as Karl Weierstrass and Richard Dedekind increasingly focused their research on internal problems, that is, pure mathematics. This led to split mathematics into pure mathematics and applied mathematics, the latter being often considered as having a lower value among mathematical purists. However, the lines between the two are frequently blurred.
The aftermath of World War II led to a surge in the development of applied mathematics in the US and elsewhere. Many of the theories developed for applications were found interesting from the point of view of pure mathematics, and many results of pure mathematics were shown to have applications outside mathematics; in turn, the study of these applications may give new insights on the "pure theory".
An example of the first case is the theory of distributions, introduced by Laurent Schwartz for validating computations done in quantum mechanics, which became immediately an important tool of (pure) mathematical analysis. An example of the second case is the decidability of the first-order theory of the real numbers, a problem of pure mathematics that was proved true by Alfred Tarski, with an algorithm that is impossible to implement because of a computational complexity that is much too high. For getting an algorithm that can be implemented and can solve systems of polynomial equations and inequalities, George Collins introduced the cylindrical algebraic decomposition that became a fundamental tool in real algebraic geometry.
In the present day, the distinction between pure and applied mathematics is more a question of personal research aim of mathematicians than a division of mathematics into broad areas. The Mathematics Subject Classification has a section for "general applied mathematics" but does not mention "pure mathematics". However, these terms are still used in names of some university departments, such as at the Faculty of Mathematics at the University of Cambridge.
Unreasonable effectiveness
The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.
A notable example is the prime factorization of natural numbers that was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses.
In the 19th century, the internal development of geometry (pure mathematics) led to definition and study of non-Euclidean geometries, spaces of dimension higher than three and manifolds. At this time, these concepts seemed totally disconnected from the physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity that uses fundamentally these concepts. In particular, spacetime of special relativity is a non-Euclidean space of dimension four, and spacetime of general relativity is a (curved) manifold of dimension four.
A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and of the Ω− baryon. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle, and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments.
Specific sciences
Physics
Mathematics and physics have influenced each other over their modern history. Modern physics uses mathematics abundantly, and is also considered to be the motivation of major mathematical developments.
Computing
Computing is closely related to mathematics in several ways. Theoretical computer science is considered to be mathematical in nature. Communication technologies apply branches of mathematics that may be very old (e.g., arithmetic), especially with respect to transmission security, in cryptography and coding theory. Discrete mathematics is useful in many areas of computer science, such as complexity theory, information theory, and graph theory. In 1998, the Kepler conjecture on sphere packing seemed to also be partially proven by computer.
Biology and chemistry
Biology uses probability extensively in fields such as ecology or neurobiology. Most discussion of probability centers on the concept of evolutionary fitness. Ecology heavily uses modeling to simulate population dynamics, study ecosystems such as the predator-prey model, measure pollution diffusion, or to assess climate change. The dynamics of a population can be modeled by coupled differential equations, such as the Lotka–Volterra equations.
Statistical hypothesis testing is run on data from clinical trials to determine whether a new treatment works. Since the start of the 20th century, chemistry has used computing to model molecules in three dimensions.
Earth sciences
Structural geology and climatology use probabilistic models to predict the risk of natural catastrophes. Similarly, meteorology, oceanography, and planetology also use mathematics due to their heavy use of models.
Social sciences
Areas of mathematics used in the social sciences include probability/statistics and differential equations. These are used in linguistics, economics, sociology, and psychology.
Often the fundamental postulate of mathematical economics is that of the rational individual actor – Homo economicus ('economic man'). In this model, the individual seeks to maximize their self-interest, and always makes optimal choices using perfect information. This atomistic view of economics allows it to relatively easily mathematize its thinking, because individual calculations are transposed into mathematical calculations. Such mathematical modeling allows one to probe economic mechanisms. Some reject or criticise the concept of Homo economicus. Economists note that real people have limited information, make poor choices, and care about fairness and altruism, not just personal gain.
Without mathematical modeling, it is hard to go beyond statistical observations or untestable speculation. Mathematical modeling allows economists to create structured frameworks to test hypotheses and analyze complex interactions. Models provide clarity and precision, enabling the translation of theoretical concepts into quantifiable predictions that can be tested against real-world data.
At the start of the 20th century, there were attempts to express historical movements in formulas. In 1922, Nikolai Kondratiev discerned the roughly 50-year-long Kondratiev cycle, which explains phases of economic growth or crisis. Towards the end of the 19th century, mathematicians had already extended their analysis into geopolitics. Peter Turchin has been developing cliodynamics since the 1990s.
Mathematization of the social sciences is not without risk. In the controversial book Fashionable Nonsense (1997), Sokal and Bricmont denounced the unfounded or abusive use of scientific terminology, particularly from mathematics or physics, in the social sciences. The study of complex systems (evolution of unemployment, business capital, demographic evolution of a population, etc.) uses mathematical knowledge. However, the choice of counting criteria, particularly for unemployment, or of models, can be subject to controversy.
Philosophy
Reality
The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. Independently of their possible philosophical opinions, modern mathematicians may be generally considered as Platonists, since they think of and talk of their objects of study as real objects.
Armand Borel summarized this view of mathematics reality as follows, and provided quotations of G. H. Hardy, Charles Hermite, Henri Poincaré and Albert Einstein that support his views.
Nevertheless, Platonism and the concurrent views on abstraction do not explain the unreasonable effectiveness of mathematics.
Proposed definitions
There is no general consensus about the definition of mathematics or its epistemological status, that is, its place inside knowledge. A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable. There is not even consensus on whether mathematics is an art or a science. Some just say, "mathematics is what mathematicians do". A common approach is to define mathematics by its object of study.
Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart. In the 19th century, when mathematicians began to address topics, such as infinite sets, which have no clear-cut relation to physical reality, a variety of new definitions were given. With the large number of new areas of mathematics that have appeared since the beginning of the 20th century, defining mathematics by its object of study has become increasingly difficult. For example, in lieu of a definition, Saunders Mac Lane in Mathematics, form and function summarizes the basics of several areas of mathematics, emphasizing their inter-connectedness, and observes:
Another approach for defining mathematics is to use its methods. For example, an area of study is often qualified as mathematics as soon as one can prove theorems (assertions whose validity relies on a proof, that is, a purely logical deduction).
Rigor
Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of inference rules, without any use of empirical evidence and intuition. Rigorous reasoning is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. Despite mathematics' concision, rigorous proofs can require hundreds of pages to express, such as the 255-page Feit–Thompson theorem. The emergence of computer-assisted proofs has allowed proof lengths to further expand. The result of this trend is a philosophy of the quasi-empiricist proof that can not be considered infallible, but has a probability attached to it.
The concept of rigor in mathematics dates back to ancient Greece, where their society encouraged logical, deductive reasoning. However, this rigorous approach would tend to discourage exploration of new approaches, such as irrational numbers and concepts of infinity. The method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. In the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. This produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs.
At the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough to avoid paradoxes (non-Euclidean geometries and the Weierstrass function) and contradictions (Russell's paradox). This was solved by the inclusion of axioms with the apodictic inference rules of mathematical theories; that is, by the re-introduction of the axiomatic method pioneered by the ancient Greeks. As a result, "rigor" is no longer a relevant concept within mathematics itself, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof, wherein it may be demonstrably refuted by other mathematicians. After a proof has been accepted for many years or even decades, it can then be considered reliable.
Nevertheless, the concept of "rigor" may remain useful for teaching to beginners what is a mathematical proof.
Training and practice
Education
Mathematics has a remarkable ability to cross cultural boundaries and time periods. As a human activity, the practice of mathematics has a social side, which includes education, careers, recognition, popularization, and so on. In education, mathematics is a core part of the curriculum and forms an important element of the STEM academic disciplines. Prominent careers for professional mathematicians include math teacher or professor, statistician, actuary, financial analyst, economist, accountant, commodity trader, or computer consultant.
Archaeological evidence shows that instruction in mathematics occurred as early as the second millennium BCE in ancient Babylonia. Comparable evidence has been unearthed for scribal mathematics training in the ancient Near East and then for the Greco-Roman world starting around 300 BCE. The oldest known mathematics textbook is the Rhind papyrus, dated from around 1650 BCE, from Egypt. Due to a scarcity of books, mathematical teachings in ancient India were communicated using memorized oral tradition since the Vedic period. In Imperial China during the Tang dynasty (618–907 CE), a mathematics curriculum was adopted for the civil service exam to join the state bureaucracy.
Following the Dark Ages, mathematics education in Europe was provided by religious schools as part of the Quadrivium. Formal instruction in pedagogy began with Jesuit schools in the 16th and 17th century. Most mathematical curricula remained at a basic and practical level until the nineteenth century, when it began to flourish in France and Germany. The oldest journal addressing instruction in mathematics was L'Enseignement Mathématique, which began publication in 1899. The Western advancements in science and technology led to the establishment of centralized education systems in many nation-states, with mathematics as a core componentinitially for its military applications. While the content of courses varies, in the present day nearly all countries teach mathematics to students for significant amounts of time.
During school, mathematical capabilities and positive expectations have a strong association with career interest in the field. Extrinsic factors such as motivational feedback from teachers, parents, and peer groups can influence the level of interest in mathematics. Some students studying math may develop an apprehension or fear about their performance in the subject. This is known as math anxiety or math phobia, and is considered the most prominent of the disorders impacting academic performance. Math anxiety can develop due to various factors such as parental and teacher attitudes, social stereotypes, and personal traits. Help to counteract the anxiety can come from changes in instructional approaches, from interactions with parents and teachers, and from tailored treatments for the individual.
Psychology (aesthetic, creativity and intuition)
The validity of a mathematical theorem relies only on the rigor of its proof, which could theoretically be done automatically by a computer program. This does not mean that there is no place for creativity in a mathematical work. On the contrary, many important mathematical results (theorems) are solutions of problems that other mathematicians failed to solve, and inventing a way to solve them may be a fundamental part of the solving process. An extreme example is Apéry's theorem: Roger Apéry provided only the ideas for a proof, and the formal proof was given only several months later by three other mathematicians.
Creativity and rigor are not the only psychological aspects of the activity of mathematicians. Some mathematicians can see their activity as a game, more specifically as solving puzzles. This aspect of mathematical activity is emphasized in recreational mathematics.
Mathematicians can find an aesthetic value to mathematics. Like beauty, it is hard to define, but it is commonly related to elegance, which involves qualities like simplicity, symmetry, completeness, and generality. G. H. Hardy in A Mathematician's Apology expressed the belief that aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to mathematical aesthetics. Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. The 1998 book Proofs from THE BOOK, inspired by Erdős, is a collection of particularly succinct and revelatory mathematical arguments. Some examples of particularly elegant results included are Euclid's proof that there are infinitely many prime numbers and the fast Fourier transform for harmonic analysis.
Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science). The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
Cultural impact
Artistic expression
| Mathematics | null | null |
18837 | https://en.wikipedia.org/wiki/Median | Median | The median of a set of numbers is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as the "middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe the center of the income distribution because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in robust statistics.
Finite set of numbers
The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest.
If the data set has an odd number of observations, the middle one is selected (after arranging in ascending order). For example, the following list of seven numbers,
1, 3, 3, 6, 7, 8, 9
has the median of 6, which is the fourth value.
If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values. For example, this data set of 8 numbers
1, 2, 3, 4, 5, 6, 8, 9
has a median value of 4.5, that is, (4 + 5)/2. (In more technical terms, this interprets the median as the fully trimmed mid-range).
In general, with this convention, the median can be defined as follows: for a data set of n elements, ordered from smallest to greatest as x_(1) ≤ x_(2) ≤ … ≤ x_(n),
median(x) = x_((n + 1)/2) if n is odd,
median(x) = (x_(n/2) + x_(n/2 + 1)) / 2 if n is even.
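As an illustration of this convention, here is a minimal Python sketch (the function name is ours, not from the article) that reproduces the two worked examples above.

```python
def median(values):
    """Median of a finite list, using the usual even/odd convention."""
    data = sorted(values)
    n = len(data)
    mid = n // 2
    if n % 2 == 1:                               # odd: the single middle value
        return data[mid]
    return (data[mid - 1] + data[mid]) / 2       # even: mean of the two middle values

assert median([1, 3, 3, 6, 7, 8, 9]) == 6
assert median([1, 2, 3, 4, 5, 6, 8, 9]) == 4.5
```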
Definition and notation
Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If each set contains more than half the population, then some of the population is exactly equal to the unique median.
The median is well-defined for any ordered (one-dimensional) data and is independent of any distance metric. The median can thus be applied to school classes which are ranked but not numerical (e.g. working out a median grade when student test scores are graded from F to A), although the result might be halfway between classes if there is an even number of classes. (For odd number classes, one specific class is determined as the median.)
A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid.
There is no widely accepted standard notation for the median, but some authors represent the median of a variable x as med(x), as x̃, as μ_1/2, or as M. In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced.
The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile.
Uses
The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement or transcription errors.
For example, consider the multiset
1, 2, 2, 2, 3, 14.
The median is 2 in this case, as is the mode, and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see below.
As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time a median can still be calculated.
Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation.
For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when—and only when— data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions. Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~50% greater than the variance of the mean.
Probability distributions
For any real-valued probability distribution with cumulative distribution function F, a median is defined as any real number m that satisfies the inequalities ∫_(−∞, m] dF(x) ≥ 1/2 and ∫_[m, ∞) dF(x) ≥ 1/2
(cf. the drawing in the definition of expected value for arbitrary real-valued random variables). An equivalent phrasing uses a random variable X distributed according to F: P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2.
Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function f), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies P(X ≤ m) = ∫_(−∞, m] f(x) dx = 1/2 and P(X ≥ m) = ∫_[m, ∞) f(x) dx = 1/2.
Any probability distribution on the real number set has at least one median, but in pathological cases there may be more than one median: if F is constant 1/2 on an interval (so that f = 0 there), then any value of that interval is a median.
Medians of particular distributions
The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution:
The median of a symmetric unimodal distribution coincides with the mode.
The median of a symmetric distribution which possesses a mean μ also takes the value μ.
The median of a normal distribution with mean μ and variance σ^2 is μ. In fact, for a normal distribution, mean = median = mode.
The median of a uniform distribution in the interval [a, b] is (a + b) / 2, which is also the mean.
The median of a Cauchy distribution with location parameter x0 and scale parameter y is x0, the location parameter.
The median of a power law distribution x^(−a), with exponent a > 1, is 2^(1/(a − 1)) x_min, where x_min is the minimum value for which the power law holds
The median of an exponential distribution with rate parameter λ is the natural logarithm of 2 divided by the rate parameter: λ^(−1) ln 2.
The median of a Weibull distribution with shape parameter k and scale parameter λ is λ (ln 2)^(1/k).
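These closed-form medians can be sanity-checked by simulation; the sketch below (parameter values are arbitrary, and the agreement is only approximate because of sampling noise) compares empirical medians of large samples with the stated formulas for the exponential and Weibull cases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Exponential with rate lambda: median should be ln(2) / lambda.
lam = 1.5
exp_sample = rng.exponential(scale=1 / lam, size=n)
print(np.median(exp_sample), np.log(2) / lam)

# Weibull with shape k and scale lambda: median should be lambda * (ln 2)**(1/k).
k, scale = 2.0, 3.0
weib_sample = scale * rng.weibull(k, size=n)
print(np.median(weib_sample), scale * np.log(2) ** (1 / k))
```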
Properties
Optimality property
The mean absolute error of a real variable c with respect to the random variable X is E(|X − c|).
Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, if m is a sample median, then it minimizes the arithmetic mean of the absolute deviations. Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique.
More generally, a median is defined as a minimizer of c ↦ E(|X − c| − |X|),
as discussed below in the section on multivariate medians (specifically, the spatial median).
This optimization-based definition of the median is useful in statistical data-analysis, for example, in k-medians clustering.
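A small numerical illustration of this optimality property, using the multiset from the Uses section and a brute-force search over candidate values (the grid resolution is an arbitrary choice):

```python
import numpy as np

x = np.array([1, 2, 2, 2, 3, 14], dtype=float)

# Mean absolute error of a candidate c with respect to the sample.
mae = lambda c: np.mean(np.abs(x - c))

candidates = np.linspace(x.min(), x.max(), 10_001)
best = candidates[np.argmin([mae(c) for c in candidates])]
print(best, np.median(x))  # both are (close to) 2
```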
Inequality relating means and medians
If the distribution has finite variance, then the distance between the median and the mean is bounded by one standard deviation.
This bound was proved by Book and Sher in 1979 for discrete samples, and more generally by Page and Murty in 1982. In a comment on a subsequent proof by O'Cinneide, Mallows in 1991 presented a compact proof that uses Jensen's inequality twice, as follows. Using |·| for the absolute value, we have
|μ − m| = |E(X − m)| ≤ E(|X − m|) ≤ E(|X − μ|) ≤ sqrt(E((X − μ)^2)) = σ.
The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function a ↦ E(|X − a|).
Mallows's proof can be generalized to obtain a multivariate version of the inequality simply by replacing the absolute value with a norm: ‖μ − m‖ ≤ sqrt(E(‖X − μ‖^2)),
where m is a spatial median, that is, a minimizer of the function a ↦ E(‖X − a‖). The spatial median is unique when the data-set's dimension is two or more.
An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters. This formula also follows directly from Cantelli's inequality.
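The one-standard-deviation bound can also be checked empirically on a skewed distribution; the sketch below uses a lognormal sample (an arbitrary choice) and confirms that the mean and median differ by less than one standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

mean, med, sd = sample.mean(), np.median(sample), sample.std()
print(abs(mean - med) <= sd)   # True: |mean - median| is within one standard deviation
```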
Unimodal distributions
For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean:
|μ − m| ≤ (3/5)^(1/2) σ ≈ 0.7746 σ.
A similar relation holds between the median and the mode:
|m − mode| ≤ 3^(1/2) σ ≈ 1.732 σ.
Mean, median, and skew
A typical heuristic is that positively skewed distributions have mean > median. This is true for all members of the Pearson distribution family. However, this is not always true. For example, the Weibull distribution family has members with positive mean, but mean < median. Violations of the rule are particularly common for discrete distributions. For example, any Poisson distribution has positive skew, but its mean < median for certain values of its parameter; see the cited reference for a proof sketch.
When the distribution has a monotonically decreasing probability density, then the median is less than the mean, as shown in the figure.
Jensen's inequality for medians
Jensen's inequality states that for any random variable X with a finite expectation E[X] and for any convex function f, f(E[X]) ≤ E[f(X)].
This inequality generalizes to the median as well. We say a function f is a C function if, for any t, the preimage f^(−1)((−∞, t]) = {x : f(x) ≤ t}
is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If f is a C function, then f(Med[X]) ≤ Med[f(X)].
If the medians are not unique, the statement holds for the corresponding suprema.
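A minimal numerical sanity check of the median version, using the convex (hence C) function $f(x) = x^2$ on a sample of odd size so that the sample median is unique; this is only an illustration, not a proof.

import numpy as np

# For a C function f (here the convex f(x) = x**2), f(median(X)) <= median(f(X)).
rng = np.random.default_rng(1)
x = rng.exponential(size=1001) - 1.0       # a skewed sample with both positive and negative values

f = lambda t: t ** 2
print(f(np.median(x)) <= np.median(f(x)))  # expected: True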
Medians for samples
Efficient computation of the sample median
Even though comparison-sorting $n$ items requires $\Omega(n \log n)$ operations, selection algorithms can compute the $k$th-smallest of $n$ items with only $O(n)$ operations. This includes the median, which is the $\lceil n/2 \rceil$th order statistic (or, for an even number of samples, the arithmetic mean of the two middle order statistics).
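A minimal sketch of a selection-based sample median using NumPy's introselect-based partition, which avoids a full sort; the helper name is illustrative.

import numpy as np

def fast_median(a):
    """Median via selection (np.partition) rather than a full sort."""
    a = np.asarray(a)
    n = a.size
    mid = n // 2
    if n % 2:                                   # odd n: the single middle order statistic
        return np.partition(a, mid)[mid]
    part = np.partition(a, [mid - 1, mid])      # even n: mean of the two middle order statistics
    return 0.5 * (part[mid - 1] + part[mid])

x = np.random.default_rng(2).normal(size=10_000)
print(fast_median(x), np.median(x))             # the two values should agree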
Selection algorithms still have the downside of requiring $\Omega(n)$ memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median-of-three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median-of-three rule applied with limited recursion: if $A$ is the sample laid out as an array of length $n$, and
$$\mathrm{med3}(A) = \mathrm{median}\big(A[1],\, A[\lceil n/2\rceil],\, A[n]\big),$$
then
$$\mathrm{ninther}(A) = \mathrm{med3}\big(\mathrm{med3}(A[1 \ldots \tfrac{1}{3}n]),\ \mathrm{med3}(A[\tfrac{1}{3}n \ldots \tfrac{2}{3}n]),\ \mathrm{med3}(A[\tfrac{2}{3}n \ldots n])\big).$$
The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.
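The following sketch implements the median-of-three, Tukey's ninther and a simple base-$b$ remedian as described above; the remedian's finishing step is simplified here (the original algorithm uses a weighted median of the leftover buffer entries), and all names are illustrative.

import numpy as np

def med3(a, b, c):
    """Median of three values."""
    return sorted((a, b, c))[1]

def ninther(A):
    """Tukey's ninther: median of three medians-of-three taken from the thirds of A."""
    n = len(A)
    t = n // 3
    thirds = (A[:t], A[t:2 * t], A[2 * t:])
    return med3(*(med3(x[0], x[len(x) // 2], x[-1]) for x in thirds))

def remedian(stream, b=11):
    """Single-pass remedian with base b: one small buffer per level, sub-linear memory."""
    buffers = []                                   # buffers[k] holds medians of b**k items
    for value in stream:
        level = 0
        while True:
            if len(buffers) <= level:
                buffers.append([])
            buffers[level].append(value)
            if len(buffers[level]) < b:
                break
            value = np.median(buffers[level])      # a full buffer collapses into one median
            buffers[level] = []
            level += 1
    # Crude finish: median of the leftover buffer medians (the original uses a weighted median).
    leftovers = [np.median(buf) for buf in buffers if buf]
    return float(np.median(leftovers))

x = list(np.random.default_rng(3).normal(size=11 ** 3))
print(ninther(x), remedian(x), np.median(x))       # all three should be close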
Sampling distribution
The distributions of both the sample mean and the sample median were determined by Laplace. The distribution of the sample median from a population with a density function $f(x)$ is asymptotically normal with mean $m$ and variance
$$\frac{1}{4\,n\,f(m)^2},$$
where $m$ is the median of $f(x)$ and $n$ is the sample size.
A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles.
For normal samples, the density is $f(m) = 1/\sqrt{2\pi\sigma^2}$, thus for large samples the variance of the median equals $(\pi/2)\cdot(\sigma^2/n)$. | Mathematics | Statistics and probability | null |
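This asymptotic variance can be checked by simulation; a minimal Monte Carlo sketch for normal samples, where the prediction is $(\pi/2)\,\sigma^2/n$:

import numpy as np

# Monte Carlo check: Var(sample median) ~ (pi/2) * sigma**2 / n for normal samples.
rng = np.random.default_rng(4)
n, reps, sigma = 1001, 5_000, 1.0

medians = np.median(rng.normal(scale=sigma, size=(reps, n)), axis=1)
print(medians.var(), (np.pi / 2) * sigma ** 2 / n)   # empirical vs asymptotic variance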
18838 | https://en.wikipedia.org/wiki/Mammal | Mammal | A mammal is a vertebrate animal of the class Mammalia. Mammals are characterised by the presence of milk-producing mammary glands for feeding their young, a broad neocortex region of the brain, fur or hair, and three middle ear bones. These characteristics distinguish them from reptiles and birds, from which their ancestors diverged in the Carboniferous Period over 300 million years ago. Around 6,400 extant species of mammals have been described and divided into 27 orders. The study of mammals is called mammalogy.
The largest orders of mammals, by number of species, are the rodents, bats, and eulipotyphlans (including hedgehogs, moles and shrews). The next three are the primates (including humans, monkeys and lemurs), the even-toed ungulates (including pigs, camels, and whales), and the Carnivora (including cats, dogs, and seals).
Mammals are the only living members of Synapsida; this clade, together with Sauropsida (reptiles and birds), constitutes the larger Amniota clade. Early synapsids are referred to as "pelycosaurs." The more advanced therapsids became dominant during the Guadalupian. Mammals originated from cynodonts, an advanced group of therapsids, during the Late Triassic to Early Jurassic. Mammals achieved their modern diversity in the Paleogene and Neogene periods of the Cenozoic era, after the extinction of non-avian dinosaurs, and have been the dominant terrestrial animal group from 66 million years ago to the present.
The basic mammalian body type is quadrupedal, with most mammals using four limbs for terrestrial locomotion; but in some, the limbs are adapted for life at sea, in the air, in trees or underground. The bipeds have adapted to move using only the two lower limbs, while the rear limbs of cetaceans and the sea cows are mere internal vestiges. Mammals range in size from the bumblebee bat to the blue whale—possibly the largest animal to have ever lived. Maximum lifespan varies from two years for the shrew to 211 years for the bowhead whale. All modern mammals give birth to live young, except the five species of monotremes, which lay eggs. The most species-rich group is the viviparous placental mammals, so named for the temporary organ (placenta) used by offspring to draw nutrition from the mother during gestation.
Most mammals are intelligent, with some possessing large brains, self-awareness, and tool use. Mammals can communicate and vocalise in several ways, including the production of ultrasound, scent marking, alarm signals, singing, echolocation; and, in the case of humans, complex language. Mammals can organise themselves into fission–fusion societies, harems, and hierarchies—but can also be solitary and territorial. Most mammals are polygynous, but some can be monogamous or polyandrous.
Domestication of many types of mammals by humans played a major role in the Neolithic Revolution, and resulted in farming replacing hunting and gathering as the primary source of food for humans. This led to a major restructuring of human societies from nomadic to sedentary, with more co-operation among larger and larger groups, and ultimately the development of the first civilisations. Domesticated mammals provided, and continue to provide, power for transport and agriculture, as well as food (meat and dairy products), fur, and leather. Mammals are also hunted and raced for sport, kept as pets and working animals of various types, and are used as model organisms in science. Mammals have been depicted in art since Paleolithic times, and appear in literature, film, mythology, and religion. Decline in numbers and extinction of many mammals is primarily driven by human poaching and habitat destruction, primarily deforestation.
Classification
Mammal classification has been through several revisions since Carl Linnaeus initially defined the class, and at present, no classification system is universally accepted. McKenna & Bell (1997) and Wilson & Reeder (2005) provide useful recent compendiums. Simpson (1945) provides systematics of mammal origins and relationships that had been taught universally until the end of the 20th century.
However, since 1945, a large amount of new and more detailed information has gradually been found: The paleontological record has been recalibrated, and the intervening years have seen much debate and progress concerning the theoretical underpinnings of systematisation itself, partly through the new concept of cladistics. Though fieldwork and lab work progressively outdated Simpson's classification, it remains the closest thing to an official classification of mammals, despite its known issues.
Most mammals, including the six most species-rich orders, belong to the placental group. The three largest orders in numbers of species are Rodentia: mice, rats, porcupines, beavers, capybaras, and other gnawing mammals; Chiroptera: bats; and Eulipotyphla: shrews, moles, and solenodons. The next three biggest orders, depending on the biological classification scheme used, are the primates: apes, monkeys, and lemurs; the Cetartiodactyla: whales and even-toed ungulates; and the Carnivora which includes cats, dogs, weasels, bears, seals, and allies. According to Mammal Species of the World, 5,416 species were identified in 2006. These were grouped into 1,229 genera, 153 families and 29 orders. In 2008, the International Union for Conservation of Nature (IUCN) completed a five-year Global Mammal Assessment for its IUCN Red List, which counted 5,488 species. According to research published in the Journal of Mammalogy in 2018, the number of recognised mammal species is 6,495, including 96 recently extinct.
Definitions
The word "mammal" is modern, from the scientific name Mammalia coined by Carl Linnaeus in 1758, derived from the Latin mamma ("teat, pap"). In an influential 1988 paper, Timothy Rowe defined Mammalia phylogenetically as the crown group of mammals, the clade consisting of the most recent common ancestor of living monotremes (echidnas and platypuses) and therians (marsupials and placentals) and all descendants of that ancestor. Since this ancestor lived in the Jurassic period, Rowe's definition excludes all animals from the earlier Triassic, despite the fact that Triassic fossils in the Haramiyida have been referred to the Mammalia since the mid-19th century. If Mammalia is considered as the crown group, its origin can be roughly dated as the first known appearance of animals more closely related to some extant mammals than to others. Ambondro is more closely related to monotremes than to therian mammals while Amphilestes and Amphitherium are more closely related to the therians; as fossils of all three genera are dated about in the Middle Jurassic, this is a reasonable estimate for the appearance of the crown group.
T. S. Kemp has provided a more traditional definition: "Synapsids that possess a dentary–squamosal jaw articulation and occlusion between upper and lower molars with a transverse component to the movement" or, equivalently in Kemp's view, the clade originating with the last common ancestor of Sinoconodon and living mammals. The earliest-known synapsid satisfying Kemp's definitions is Tikitherium, dated , so the appearance of mammals in this broader sense can be given this Late Triassic date. However, this animal may have actually evolved during the Neogene.
Molecular classification of placentals
As of the early 21st century, molecular studies based on DNA analysis have suggested new relationships among mammal families. Most of these findings have been independently validated by retrotransposon presence/absence data. Classification systems based on molecular studies reveal three major groups or lineages of placentals—Afrotheria, Xenarthra and Boreoeutheria—which diverged in the Cretaceous. The relationships between these three lineages is contentious, and all three possible hypotheses have been proposed with respect to which group is basal. These hypotheses are Atlantogenata (basal Boreoeutheria), Epitheria (basal Xenarthra) and Exafroplacentalia (basal Afrotheria). Boreoeutheria in turn contains two major lineages—Euarchontoglires and Laurasiatheria.
Estimates for the divergence times between these three placental groups range from 105 to 120 million years ago, depending on the type of DNA used (such as nuclear or mitochondrial) and varying interpretations of paleogeographic data.
Mammal phylogeny according to Álvarez-Carretero et al., 2022:
Evolution
Origins
Synapsida, a clade that contains mammals and their extinct relatives, originated during the Pennsylvanian subperiod (~323 million to ~300 million years ago), when they split from the reptile lineage. Crown group mammals evolved from earlier mammaliaforms during the Early Jurassic. The cladogram takes Mammalia to be the crown group.
Evolution from older amniotes
The first fully terrestrial vertebrates were amniotes. Like their amphibious early tetrapod predecessors, they had lungs and limbs. Amniotic eggs, however, have internal membranes that allow the developing embryo to breathe but keep water in. Hence, amniotes can lay eggs on dry land, while amphibians generally need to lay their eggs in water.
The first amniotes apparently arose in the Pennsylvanian subperiod of the Carboniferous. They descended from earlier reptiliomorph amphibious tetrapods, which lived on land that was already inhabited by insects and other invertebrates as well as ferns, mosses and other plants. Within a few million years, two important amniote lineages became distinct: the synapsids, which would later include the common ancestor of the mammals; and the sauropsids, which now include turtles, lizards, snakes, crocodilians and dinosaurs (including birds). Synapsids have a single hole (temporal fenestra) low on each side of the skull. Primitive synapsids included the largest and fiercest animals of the early Permian such as Dimetrodon. Nonmammalian synapsids were traditionally—and incorrectly—called "mammal-like reptiles" or pelycosaurs; we now know they were neither reptiles nor part of reptile lineage.
Therapsids, a group of synapsids, evolved in the Middle Permian, about 265 million years ago, and became the dominant land vertebrates. They differ from basal eupelycosaurs in several features of the skull and jaws, including larger skulls and incisors that are of equal size in therapsids but not in eupelycosaurs. The therapsid lineage leading to mammals went through a series of stages, beginning with animals that were very similar to their early synapsid ancestors and ending with probainognathian cynodonts, some of which could easily be mistaken for mammals. Those stages were characterised by:
The gradual development of a bony secondary palate.
Abrupt acquisition of endothermy among Mammaliamorpha, thus predating the origin of mammals by 30–50 million years.
Progression towards an erect limb posture, which would increase the animals' stamina by avoiding Carrier's constraint. But this process was slow and erratic: for example, all herbivorous nonmammaliaform therapsids retained sprawling limbs (some late forms may have had semierect hind limbs); Permian carnivorous therapsids had sprawling forelimbs, and some late Permian ones also had semisprawling hindlimbs. In fact, modern monotremes still have semisprawling limbs.
The dentary gradually became the main bone of the lower jaw which, by the Triassic, progressed towards the fully mammalian jaw (the lower jaw consisting only of the dentary) and middle ear (which is constructed from bones that were previously part of the jaw joint in reptiles).
First mammals
The Permian–Triassic extinction event about 252 million years ago, which was a prolonged event due to the accumulation of several extinction pulses, ended the dominance of carnivorous therapsids. In the early Triassic, most medium to large land carnivore niches were taken over by archosaurs which, over an extended period (35 million years), came to include the crocodylomorphs, the pterosaurs and the dinosaurs; however, large cynodonts like Trucidocynodon and traversodontids still occupied large sized carnivorous and herbivorous niches respectively. By the Jurassic, the dinosaurs had come to dominate the large terrestrial herbivore niches as well.
The first mammals (in Kemp's sense) appeared in the Late Triassic epoch (about 225 million years ago), 40 million years after the first therapsids. They expanded out of their nocturnal insectivore niche from the mid-Jurassic onwards; the Jurassic Castorocauda, for example, was a close relative of true mammals that had adaptations for swimming, digging and catching fish. Most, if not all, are thought to have remained nocturnal (the nocturnal bottleneck), accounting for much of the typical mammalian traits. The majority of the mammal species that existed in the Mesozoic Era were multituberculates, eutriconodonts and spalacotheriids. The earliest-known fossil of the Metatheria ("changed beasts") is Sinodelphys, found in 125-million-year-old Early Cretaceous shale in China's northeastern Liaoning Province. The fossil is nearly complete and includes tufts of fur and imprints of soft tissues.
The oldest-known fossil among the Eutheria ("true beasts") is the small shrewlike Juramaia sinensis, or "Jurassic mother from China", dated to 160 million years ago in the late Jurassic. A later eutherian relative, Eomaia, dated to 125 million years ago in the early Cretaceous, possessed some features in common with the marsupials but not with the placentals, evidence that these features were present in the last common ancestor of the two groups but were later lost in the placental lineage. In particular, the epipubic bones extend forwards from the pelvis. These are not found in any modern placental, but they are found in marsupials, monotremes, other nontherian mammals and Ukhaatherium, an early Cretaceous animal in the eutherian order Asioryctitheria. This also applies to the multituberculates. They are apparently an ancestral feature, which subsequently disappeared in the placental lineage. These epipubic bones seem to function by stiffening the muscles during locomotion, reducing the amount of space being presented, which placentals require to contain their fetus during gestation periods. A narrow pelvic outlet indicates that the young were very small at birth and therefore pregnancy was short, as in modern marsupials. This suggests that the placenta was a later development.
One of the earliest-known monotremes was Teinolophos, which lived about 120 million years ago in Australia. Monotremes have some features which may be inherited from the original amniotes such as the same orifice to urinate, defecate and reproduce (cloaca)—as reptiles and birds also do— and they lay eggs which are leathery and uncalcified.
Earliest appearances of features
Hadrocodium, whose fossils date from approximately 195 million years ago, in the early Jurassic, provides the first clear evidence of a jaw joint formed solely by the squamosal and dentary bones; there is no space in the jaw for the articular, a bone involved in the jaws of all early synapsids.
The earliest clear evidence of hair or fur is in fossils of Castorocauda and Megaconus, from 164 million years ago in the mid-Jurassic. In the 1950s, it was suggested that the foramina (passages) in the maxillae and premaxillae (bones in the front of the upper jaw) of cynodonts were channels which supplied blood vessels and nerves to vibrissae (whiskers) and so were evidence of hair or fur; it was soon pointed out, however, that foramina do not necessarily show that an animal had vibrissae, as the modern lizard Tupinambis has foramina that are almost identical to those found in the nonmammalian cynodont Thrinaxodon. Popular sources, nevertheless, continue to attribute whiskers to Thrinaxodon. Studies on Permian coprolites suggest that non-mammalian synapsids of the epoch already had fur, setting the evolution of hairs possibly as far back as dicynodonts.
When endothermy first appeared in the evolution of mammals is uncertain, though it is generally agreed to have first evolved in non-mammalian therapsids. Modern monotremes have lower body temperatures and more variable metabolic rates than marsupials and placentals, but there is evidence that some of their ancestors, perhaps including ancestors of the therians, may have had body temperatures like those of modern therians. Likewise, some modern therians like afrotheres and xenarthrans have secondarily developed lower body temperatures.
The evolution of erect limbs in mammals is incomplete—living and fossil monotremes have sprawling limbs. The parasagittal (nonsprawling) limb posture appeared sometime in the late Jurassic or early Cretaceous; it is found in the eutherian Eomaia and the metatherian Sinodelphys, both dated to 125 million years ago. Epipubic bones, a feature that strongly influenced the reproduction of most mammal clades, are first found in Tritylodontidae, suggesting that it is a synapomorphy between them and Mammaliaformes. They are omnipresent in non-placental Mammaliaformes, though Megazostrodon and Erythrotherium appear to have lacked them.
It has been suggested that the original function of lactation (milk production) was to keep eggs moist. Much of the argument is based on monotremes, the egg-laying mammals. In human females, mammary glands become fully developed during puberty, regardless of pregnancy.
Rise of the mammals
Therians took over the medium- to large-sized ecological niches in the Cenozoic, after the Cretaceous–Paleogene extinction event approximately 66 million years ago emptied ecological space once filled by non-avian dinosaurs and other groups of reptiles, as well as various other mammal groups, and underwent an exponential increase in body size (megafauna). The increase in mammalian diversity was not, however, solely because of expansion into large-bodied niches. Mammals diversified very quickly, displaying an exponential rise in diversity. For example, the earliest-known bat dates from about 50 million years ago, only 16 million years after the extinction of the non-avian dinosaurs.
Molecular phylogenetic studies initially suggested that most placental orders diverged about 100 to 85 million years ago and that modern families appeared in the period from the late Eocene through the Miocene. However, no placental fossils have been found from before the end of the Cretaceous. The earliest undisputed fossils of placentals come from the early Paleocene, after the extinction of the non-avian dinosaurs. (Scientists identified an early Paleocene animal named Protungulatum donnae as one of the first placental mammals, but it has since been reclassified as a non-placental eutherian.) Recalibrations of genetic and morphological diversity rates have suggested a Late Cretaceous origin for placentals, and a Paleocene origin for most modern clades.
The earliest-known ancestor of primates is Archicebus achilles from around 55 million years ago. This tiny primate weighed 20–30 grams (0.7–1.1 ounce) and could fit within a human palm.
Anatomy
Distinguishing features
Living mammal species can be identified by the presence of sweat glands, including those that are specialised to produce milk to nourish their young. In classifying fossils, however, other features must be used, since soft tissue glands and many other features are not visible in fossils.
Many traits shared by all living mammals appeared among the earliest members of the group:
Jaw joint – The dentary (the lower jaw bone, which carries the teeth) and the squamosal (a small cranial bone) meet to form the joint. In most gnathostomes, including early therapsids, the joint consists of the articular (a small bone at the back of the lower jaw) and quadrate (a small bone at the back of the upper jaw).
Middle ear – In crown-group mammals, sound is carried from the eardrum by a chain of three bones, the malleus, the incus and the stapes. Ancestrally, the malleus and the incus are derived from the articular and the quadrate bones that constituted the jaw joint of early therapsids.
Tooth replacement – Teeth can be replaced once (diphyodonty) or (as in toothed whales and murid rodents) not at all (monophyodonty). Elephants, manatees, and kangaroos continually grow new teeth throughout their life (polyphyodonty).
Prismatic enamel – The enamel coating on the surface of a tooth consists of prisms, solid, rod-like structures extending from the dentin to the tooth's surface.
Occipital condyles – Two knobs at the base of the skull fit into the topmost neck vertebra; most other tetrapods, in contrast, have only one such knob.
For the most part, these characteristics were not present in the Triassic ancestors of the mammals. Nearly all mammaliaforms possess an epipubic bone, the exception being modern placentals.
Sexual dimorphism
On average, male mammals are larger than females, with males being at least 10% larger than females in over 45% of investigated species. Most mammalian orders also exhibit male-biased sexual dimorphism, although some orders do not show any bias or are significantly female-biased (Lagomorpha). Sexual size dimorphism increases with body size across mammals (Rensch's rule), suggesting that there are parallel selection pressures on both male and female size. Male-biased dimorphism relates to sexual selection on males through male–male competition for females, as there is a positive correlation between the degree of sexual selection, as indicated by mating systems, and the degree of male-biased size dimorphism. The degree of sexual selection is also positively correlated with male and female size across mammals. Further, parallel selection pressure on female mass is identified in that age at weaning is significantly higher in more polygynous species, even when correcting for body mass. Also, the reproductive rate is lower for larger females, indicating that fecundity selection selects for smaller females in mammals. Although these patterns hold across mammals as a whole, there is considerable variation across orders.
Biological systems
The majority of mammals have seven cervical vertebrae (bones in the neck). The exceptions are the manatee and the two-toed sloth, which have six, and the three-toed sloth which has nine. All mammalian brains possess a neocortex, a brain region unique to mammals. Placental brains have a corpus callosum, unlike monotremes and marsupials.
Circulatory systems
The mammalian heart has four chambers: two upper atria, the receiving chambers, and two lower ventricles, the discharging chambers. The heart has four valves, which separate its chambers and ensure blood flows in the correct direction through the heart (preventing backflow). After gas exchange in the pulmonary capillaries (blood vessels in the lungs), oxygen-rich blood returns to the left atrium via one of the four pulmonary veins. Blood flows nearly continuously back into the atrium, which acts as the receiving chamber, and from here through an opening into the left ventricle. Most blood flows passively into the heart while both the atria and ventricles are relaxed, but toward the end of the ventricular relaxation period, the left atrium will contract, pumping blood into the ventricle. Like other muscles, the heart requires the nutrients and oxygen found in blood, and is supplied via the coronary arteries.
Respiratory systems
The lungs of mammals are spongy and honeycombed. Breathing is mainly achieved with the diaphragm, which divides the thorax from the abdominal cavity, forming a dome convex to the thorax. Contraction of the diaphragm flattens the dome, increasing the volume of the lung cavity. Air enters through the oral and nasal cavities, and travels through the larynx, trachea and bronchi, and expands the alveoli. Relaxing the diaphragm has the opposite effect, decreasing the volume of the lung cavity, causing air to be pushed out of the lungs. During exercise, the abdominal wall contracts, increasing pressure on the diaphragm, which forces air out quicker and more forcefully. The rib cage is able to expand and contract the chest cavity through the action of other respiratory muscles. Consequently, air is sucked into or expelled out of the lungs, always moving down its pressure gradient. This type of lung is known as a bellows lung due to its resemblance to blacksmith bellows.
Integumentary systems
The integumentary system (skin) is made up of three layers: the outermost epidermis, the dermis and the hypodermis. The epidermis is typically 10 to 30 cells thick; its main function is to provide a waterproof layer. Its outermost cells are constantly lost; its bottommost cells are constantly dividing and pushing upward. The middle layer, the dermis, is 15 to 40 times thicker than the epidermis. The dermis is made up of many components, such as bony structures and blood vessels. The hypodermis is made up of adipose tissue, which stores lipids and provides cushioning and insulation. The thickness of this layer varies widely from species to species; marine mammals require a thick hypodermis (blubber) for insulation, and right whales have the thickest blubber at . Although other animals have features such as whiskers, feathers, setae, or cilia that superficially resemble it, no animals other than mammals have hair. It is a definitive characteristic of the class, though some mammals have very little.
Digestive systems
Herbivores have developed a diverse range of physical structures to facilitate the consumption of plant material. To break up intact plant tissues, mammals have developed teeth structures that reflect their feeding preferences. For instance, frugivores (animals that feed primarily on fruit) and herbivores that feed on soft foliage have low-crowned teeth specialised for grinding foliage and seeds. Grazing animals that tend to eat hard, silica-rich grasses, have high-crowned teeth, which are capable of grinding tough plant tissues and do not wear down as quickly as low-crowned teeth. Most carnivorous mammals have carnassial teeth (of varying length depending on diet), long canines and similar tooth replacement patterns.
The stomach of even-toed ungulates (Artiodactyla) is divided into four sections: the rumen, the reticulum, the omasum and the abomasum (only ruminants have a rumen). After the plant material is consumed, it is mixed with saliva in the rumen and reticulum and separates into solid and liquid material. The solids lump together to form a bolus (or cud), and is regurgitated. When the bolus enters the mouth, the fluid is squeezed out with the tongue and swallowed again. Ingested food passes to the rumen and reticulum where cellulolytic microbes (bacteria, protozoa and fungi) produce cellulase, which is needed to break down the cellulose in plants. Perissodactyls, in contrast to the ruminants, store digested food that has left the stomach in an enlarged cecum, where it is fermented by bacteria. Carnivora have a simple stomach adapted to digest primarily meat, as compared to the elaborate digestive systems of herbivorous animals, which are necessary to break down tough, complex plant fibres. The cecum is either absent or short and simple, and the large intestine is not sacculated or much wider than the small intestine.
Excretory and genitourinary systems
The mammalian excretory system involves many components. Like most other land animals, mammals are ureotelic, and convert ammonia into urea, which is done by the liver as part of the urea cycle. Bilirubin, a waste product derived from blood cells, is passed through bile and urine with the help of enzymes excreted by the liver. The passing of bilirubin via bile through the intestinal tract gives mammalian feces a distinctive brown coloration. Distinctive features of the mammalian kidney include the presence of the renal pelvis and renal pyramids, and of a clearly distinguishable cortex and medulla, which is due to the presence of elongated loops of Henle. Only the mammalian kidney has a bean shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds, cetaceans and bears. Most adult placentals have no remaining trace of the cloaca. In the embryo, the embryonic cloaca divides into a posterior region that becomes part of the anus, and an anterior region that has different fates depending on the sex of the individual: in females, it develops into the vestibule or urogenital sinus that receives the urethra and vagina, while in males it forms the entirety of the penile urethra. However, the afrosoricids and some shrews retain a cloaca as adults. In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. Monotremes, which translates from Greek into "single hole", have a true cloaca. Urine flows from the ureters into the cloaca in monotremes and into the bladder in placentals.
Sound production
As in all other tetrapods, mammals have a larynx that can quickly open and close to produce sounds, and a supralaryngeal vocal tract which filters this sound. The lungs and surrounding musculature provide the air stream and pressure required to phonate. The larynx controls the pitch and volume of sound, but the strength the lungs exert to exhale also contributes to volume. More primitive mammals, such as the echidna, can only hiss, as sound is achieved solely through exhaling through a partially closed larynx. Other mammals phonate using vocal folds. The movement or tenseness of the vocal folds can result in many sounds such as purring and screaming. Mammals can change the position of the larynx, allowing them to breathe through the nose while swallowing through the mouth, and to form both oral and nasal sounds; nasal sounds, such as a dog whine, are generally soft sounds, and oral sounds, such as a dog bark, are generally loud.
Some mammals have a large larynx and thus a low-pitched voice, namely the hammer-headed bat (Hypsignathus monstrosus) where the larynx can take up the entirety of the thoracic cavity while pushing the lungs, heart, and trachea into the abdomen. Large vocal pads can also lower the pitch, as in the low-pitched roars of big cats. The production of infrasound is possible in some mammals such as the African elephant (Loxodonta spp.) and baleen whales. Small mammals with small larynxes have the ability to produce ultrasound, which can be detected by modifications to the middle ear and cochlea. Ultrasound is inaudible to birds and reptiles, which might have been important during the Mesozoic, when birds and reptiles were the dominant predators. This private channel is used by some rodents in, for example, mother-to-pup communication, and by bats when echolocating. Toothed whales also use echolocation, but, as opposed to the vocal membrane that extends upward from the vocal folds, they have a melon to manipulate sounds. Some mammals, namely the primates, have air sacs attached to the larynx, which may function to lower the resonances or increase the volume of sound.
The vocal production system is controlled by the cranial nerve nuclei in the brain, and supplied by the recurrent laryngeal nerve and the superior laryngeal nerve, branches of the vagus nerve. The vocal tract is supplied by the hypoglossal nerve and facial nerves. Electrical stimulation of the periaqueductal grey (PAG) region of the mammalian midbrain elicits vocalisations. The ability to learn new vocalisations is only exemplified in humans, seals, cetaceans, elephants and possibly bats; in humans, this is the result of a direct connection between the motor cortex, which controls movement, and the motor neurons in the spinal cord.
Fur
The primary function of the fur of mammals is thermoregulation. Others include protection, sensory purposes, waterproofing, and camouflage. Different types of fur serve different purposes:
Definitive – which may be shed after reaching a certain length
Vibrissae – sensory hairs, most commonly whiskers
Pelage – guard hairs, under-fur, and awn hair
Spines – stiff guard hair used for defence (such as in porcupines)
Bristles – long hairs usually used in visual signals (such as a lion's mane)
Velli – often called "down fur" which insulates newborn mammals
Wool – long, soft and often curly
Thermoregulation
Hair length is not a factor in thermoregulation: for example, some tropical mammals such as sloths have the same fur length as some arctic mammals but with less insulation; and, conversely, other tropical mammals with short hair have the same insulating value as arctic mammals. The denseness of fur can increase an animal's insulation value, and arctic mammals especially have dense fur; for example, the musk ox has guard hairs measuring as well as a dense underfur, which forms an airtight coat, allowing them to survive in temperatures of . Some desert mammals, such as camels, use dense fur to prevent solar heat from reaching their skin, allowing the animal to stay cool; a camel's fur may reach in the summer, but the skin stays at . Aquatic mammals, conversely, trap air in their fur to conserve heat by keeping the skin dry.
Coloration
Mammalian coats are coloured for a variety of reasons, the major selective pressures including camouflage, sexual selection, communication, and thermoregulation. Coloration in both the hair and skin of mammals is mainly determined by the type and amount of melanin; eumelanins for brown and black colours and pheomelanin for a range of yellowish to reddish colours, giving mammals an earth tone. Some mammals have more vibrant colours; certain monkeys such as mandrills and vervet monkeys, and opossums such as the Mexican mouse opossums and Derby's woolly opossums, have blue skin due to light diffraction in collagen fibres. Many sloths appear green because their fur hosts green algae; this may be a symbiotic relation that affords camouflage to the sloths.
Camouflage is a powerful influence in a large number of mammals, as it helps to conceal individuals from predators or prey. In arctic and subarctic mammals such as the arctic fox (Alopex lagopus), collared lemming (Dicrostonyx groenlandicus), stoat (Mustela erminea), and snowshoe hare (Lepus americanus), seasonal color change between brown in summer and white in winter is driven largely by camouflage. Some arboreal mammals, notably primates and marsupials, have shades of violet, green, or blue skin on parts of their bodies, indicating some distinct advantage in their largely arboreal habitat due to convergent evolution.
Aposematism, warning off possible predators, is the most likely explanation of the black-and-white pelage of many mammals which are able to defend themselves, such as in the foul-smelling skunk and the powerful and aggressive honey badger. Coat color is sometimes sexually dimorphic, as in many primate species. Differences in female and male coat color may indicate nutrition and hormone levels, important in mate selection. Coat color may influence the ability to retain heat, depending on how much light is reflected. Mammals with a darker coloured coat can absorb more heat from solar radiation, and stay warmer, and some smaller mammals, such as voles, have darker fur in the winter. The white, pigmentless fur of arctic mammals, such as the polar bear, may reflect more solar radiation directly onto the skin. The dazzling black-and-white striping of zebras appear to provide some protection from biting flies.
Reproductive system
Mammals reproduce by internal fertilisation and are solely gonochoric (an animal is born with either male or female genitalia, as opposed to hermaphrodites where there is no such schism). Male mammals inseminate females during copulation and ejaculate semen through a penis, which may be contained in a prepuce when not erect. Male placentals also urinate through a penis, and some placentals also have a penis bone (baculum). Marsupials typically have forked penises, while the echidna penis generally has four heads with only two functioning. Depending on the species, an erection may be fuelled by blood flow into vascular, spongy tissue or by muscular action. The testicles of most mammals descend into the scrotum, which is typically posterior to the penis but is often anterior in marsupials. Female mammals generally have a vulva (clitoris and labia) on the outside, while the internal system contains paired oviducts, one or two uteri, one or two cervices and a vagina. Marsupials have two lateral vaginas and a medial vagina. The "vagina" of monotremes is better understood as a "urogenital sinus". The uterine systems of placentals vary between a duplex, where there are two uteri and cervices which open into the vagina; a bipartite, where two uterine horns have a single cervix that connects to the vagina; a bicornuate, where two uterine horns are connected distally but separate medially, creating a Y-shape; and a simplex, which has a single uterus.
The ancestral condition for mammal reproduction is the birthing of relatively undeveloped young, either through direct vivipary or a short period as soft-shelled eggs. This is likely due to the fact that the torso could not expand due to the presence of epipubic bones. The oldest demonstration of this reproductive style is with Kayentatherium, which produced undeveloped perinates, but at much higher litter sizes than any modern mammal, 38 specimens. Most modern mammals are viviparous, giving birth to live young. However, the five species of monotreme, the platypus and the four species of echidna, lay eggs. The monotremes have a sex-determination system different from that of most other mammals. In particular, the sex chromosomes of a platypus are more like those of a chicken than those of a therian mammal.
Viviparous mammals are in the subclass Theria; those living today are in the marsupial and placental infraclasses. Marsupials have a short gestation period, typically shorter than its estrous cycle and generally giving birth to a number of undeveloped newborns that then undergo further development; in many species, this takes place within a pouch-like sac, the marsupium, located in the front of the mother's abdomen. This is the plesiomorphic condition among viviparous mammals; the presence of epipubic bones in all non-placentals prevents the expansion of the torso needed for full pregnancy. Even non-placental eutherians probably reproduced this way. The placentals give birth to relatively complete and developed young, usually after long gestation periods. They get their name from the placenta, which connects the developing fetus to the uterine wall to allow nutrient uptake. In placentals, the epipubic is either completely lost or converted into the baculum; allowing the torso to be able to expand and thus birth developed offspring.
The mammary glands of mammals are specialised to produce milk, the primary source of nutrition for newborns. The monotremes branched early from other mammals and do not have the teats seen in most mammals, but they do have mammary glands. The young lick the milk from a mammary patch on the mother's belly. Compared to placental mammals, the milk of marsupials changes greatly in both production rate and in nutrient composition, due to the underdeveloped young. In addition, the mammary glands have more autonomy allowing them to supply separate milks to young at different development stages. Lactose is the main sugar in placental milk while monotreme and marsupial milk is dominated by oligosaccharides. Weaning is the process in which a mammal becomes less dependent on their mother's milk and more on solid food.
Endothermy
Nearly all mammals are endothermic ("warm-blooded"). Most mammals also have hair to help keep them warm. Like birds, mammals can forage or hunt in weather and climates too cold for ectothermic ("cold-blooded") reptiles and insects. Endothermy requires plenty of food energy, so mammals eat more food per unit of body weight than most reptiles. Small insectivorous mammals eat prodigious amounts for their size. A rare exception, the naked mole-rat produces little metabolic heat, so it is considered an operational poikilotherm. Birds are also endothermic, so endothermy is not unique to mammals.
Species lifespan
Among mammals, species maximum lifespan varies significantly (for example the shrew has a lifespan of two years, whereas the oldest bowhead whale is recorded to be 211 years). Although the underlying basis for these lifespan differences is still uncertain, numerous studies indicate that the ability to repair DNA damage is an important determinant of mammalian lifespan. In a 1974 study by Hart and Setlow, it was found that DNA excision repair capability increased systematically with species lifespan among seven mammalian species. Species lifespan was observed to be robustly correlated with the capacity to recognise DNA double-strand breaks as well as the level of the DNA repair protein Ku80. In a study of the cells from sixteen mammalian species, genes employed in DNA repair were found to be up-regulated in the longer-lived species. The cellular level of the DNA repair enzyme poly ADP ribose polymerase was found to correlate with species lifespan in a study of 13 mammalian species. Three additional studies of a variety of mammalian species also reported a correlation between species lifespan and DNA repair capability.
Locomotion
Terrestrial
Most vertebrates—the amphibians, the reptiles and some mammals such as humans and bears—are plantigrade, walking on the whole of the underside of the foot. Many mammals, such as cats and dogs, are digitigrade, walking on their toes, the greater stride length allowing more speed. Some animals such as horses are unguligrade, walking on the tips of their toes. This even further increases their stride length and thus their speed. A few mammals, namely the great apes, are also known to walk on their knuckles, at least for their front legs. Giant anteaters and platypuses are also knuckle-walkers. Some mammals are bipeds, using only two limbs for locomotion, which can be seen in, for example, humans and the great apes. Bipedal species have a larger field of vision than quadrupeds, conserve more energy and have the ability to manipulate objects with their hands, which aids in foraging. Instead of walking, some bipeds hop, such as kangaroos and kangaroo rats.
Animals will use different gaits for different speeds, terrain and situations. For example, horses show four natural gaits: the slowest is the walk, followed by three faster gaits which, from slowest to fastest, are the trot, the canter and the gallop. Animals may also have unusual gaits that are used occasionally, such as for moving sideways or backwards. For example, the main human gaits are bipedal walking and running, but they employ many other gaits occasionally, including a four-legged crawl in tight spaces. Mammals show a vast range of gaits, the order that they place and lift their appendages in locomotion. Gaits can be grouped into categories according to their patterns of support sequence. For quadrupeds, there are three main categories: walking gaits, running gaits and leaping gaits. Walking is the most common gait, where some feet are on the ground at any given time, and is found in almost all legged animals. Running is considered to occur when at some points in the stride all feet are off the ground in a moment of suspension.
Arboreal
Arboreal animals frequently have elongated limbs that help them cross gaps, reach fruit or other resources, test the firmness of support ahead and, in some cases, brachiate (swing between trees). Many arboreal species, such as tree porcupines, silky anteaters, spider monkeys, and possums, use prehensile tails to grasp branches. In the spider monkey, the tip of the tail has either a bare patch or adhesive pad, which provides increased friction. Claws can be used to interact with rough substrates and reorient the direction of forces the animal applies. This is what allows squirrels to climb tree trunks that are so large as to be essentially flat from the perspective of such a small animal. However, claws can interfere with an animal's ability to grasp very small branches, as they may wrap too far around and prick the animal's own paw. Frictional gripping is used by primates, relying upon hairless fingertips. Squeezing the branch between the fingertips generates frictional force that holds the animal's hand to the branch. However, this type of grip depends upon the angle of the frictional force, thus upon the diameter of the branch, with larger branches resulting in reduced gripping ability. To control descent, especially down large diameter branches, some arboreal animals such as squirrels have evolved highly mobile ankle joints that permit rotating the foot into a 'reversed' posture. This allows the claws to hook into the rough surface of the bark, opposing the force of gravity. Small size provides many advantages to arboreal species, such as increasing the relative size of branches to the animal, a lower center of mass, increased stability, lower mass (allowing movement on smaller branches) and the ability to move through more cluttered habitat. Size relating to weight affects gliding animals such as the sugar glider. Some species of primate, bat and all species of sloth achieve passive stability by hanging beneath the branch. Both pitching and tipping become irrelevant, as the only method of failure would be losing their grip.
Aerial
Bats are the only mammals that can truly fly. They fly through the air at a constant speed by moving their wings up and down (usually with some fore-aft movement as well). Because the animal is in motion, there is some airflow relative to its body which, combined with the velocity of the wings, generates a faster airflow moving over the wing. This generates a lift force vector pointing forwards and upwards, and a drag force vector pointing rearwards and upwards. The upwards components of these counteract gravity, keeping the body in the air, while the forward component provides thrust to counteract both the drag from the wing and from the body as a whole.
The wings of bats are much thinner and consist of more bones than those of birds, allowing bats to manoeuvre more accurately and fly with more lift and less drag. By folding the wings inwards towards their body on the upstroke, they use 35% less energy during flight than birds. The membranes are delicate, ripping easily; however, the tissue of the bat's membrane is able to regrow, such that small tears can heal quickly. The surface of their wings is equipped with touch-sensitive receptors on small bumps called Merkel cells, also found on human fingertips. These sensitive areas are different in bats, as each bump has a tiny hair in the center, making it even more sensitive and allowing the bat to detect and collect information about the air flowing over its wings, and to fly more efficiently by changing the shape of its wings in response.
Fossorial and subterranean
A fossorial (from Latin fossor, meaning "digger") is an animal adapted to digging which lives primarily, but not solely, underground. Some examples are badgers, and naked mole-rats. Many rodent species are also considered fossorial because they live in burrows for most but not all of the day. Species that live exclusively underground are subterranean, and those with limited adaptations to a fossorial lifestyle sub-fossorial. Some organisms are fossorial to aid in temperature regulation while others use the underground habitat for protection from predators or for food storage.
Fossorial mammals have a fusiform body, thickest at the shoulders and tapering off at the tail and nose. Unable to see in the dark burrows, most have degenerated eyes, but degeneration varies between species; pocket gophers, for example, are only semi-fossorial and have very small yet functional eyes; in the fully fossorial marsupial mole, the eyes are degenerated and useless; Talpa moles have vestigial eyes; and the Cape golden mole has a layer of skin covering the eyes. External ear flaps are also very small or absent. Truly fossorial mammals have short, stout legs, as strength is more important than speed to a burrowing mammal, but semi-fossorial mammals have cursorial legs. The front paws are broad and have strong claws to help in loosening dirt while excavating burrows, and the back paws have webbing, as well as claws, which aids in throwing loosened dirt backwards. Most have large incisors to prevent dirt from flying into their mouth.
Many fossorial mammals such as shrews, hedgehogs, and moles were classified under the now obsolete order Insectivora.
Aquatic
Fully aquatic mammals, the cetaceans and sirenians, have lost their legs and have a tail fin to propel themselves through the water. Flipper movement is continuous. Whales swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin to prevent themselves from turning upside-down in the water. The flukes of sirenians are raised up and down in long strokes to move the animal forward, and can be twisted to turn. The forelimbs are paddle-like flippers which aid in turning and slowing.
Semi-aquatic mammals, like pinnipeds, have two pairs of flippers on the front and back, the fore-flippers and hind-flippers. The elbows and ankles are enclosed within the body. Pinnipeds have several adaptations for reducing drag. In addition to their streamlined bodies, they have smooth networks of muscle bundles in their skin that may increase laminar flow and make it easier for them to slip through water. They also lack arrector pili, so their fur can be streamlined as they swim. They rely on their fore-flippers for locomotion in a wing-like manner similar to penguins and sea turtles. Fore-flipper movement is not continuous, and the animal glides between each stroke. Compared to terrestrial carnivorans, the fore-limbs are reduced in length, which gives the locomotor muscles at the shoulder and elbow joints greater mechanical advantage; the hind-flippers serve as stabilizers. Other semi-aquatic mammals include beavers, hippopotamuses, otters and platypuses. Hippos are very large semi-aquatic mammals, and their barrel-shaped bodies have graviportal skeletal structures, adapted to carrying their enormous weight, and their specific gravity allows them to sink and move along the bottom of a river.
Behavior
Communication and vocalisation
Many mammals communicate by vocalising. Vocal communication serves many purposes, including in mating rituals, as warning calls, to indicate food sources, and for social purposes. Males often call during mating rituals to ward off other males and to attract females, as in the roaring of lions and red deer. The songs of the humpback whale may be signals to females; they have different dialects in different regions of the ocean. Social vocalisations include the territorial calls of gibbons, and the use of frequency in greater spear-nosed bats to distinguish between groups. The vervet monkey gives a distinct alarm call for each of at least four different predators, and the reactions of other monkeys vary according to the call. For example, if an alarm call signals a python, the monkeys climb into the trees, whereas the eagle alarm causes monkeys to seek a hiding place on the ground. Prairie dogs similarly have complex calls that signal the type, size, and speed of an approaching predator. Elephants communicate socially with a variety of sounds including snorting, screaming, trumpeting, roaring and rumbling. Some of the rumbling calls are infrasonic, below the hearing range of humans, and can be heard by other elephants up to away at still times near sunrise and sunset.
Mammals signal by a variety of means. Many give visual anti-predator signals, as when deer and gazelle stot, honestly indicating their fit condition and their ability to escape, or when white-tailed deer and other prey mammals flag with conspicuous tail markings when alarmed, informing the predator that it has been detected. Many mammals make use of scent-marking, sometimes possibly to help defend territory, but probably with a range of functions both within and between species. Microbats and toothed whales including oceanic dolphins vocalise both socially and in echolocation.
Feeding
To maintain a high constant body temperature is energy expensive—mammals therefore need a nutritious and plentiful diet. While the earliest mammals were probably predators, different species have since adapted to meet their dietary requirements in a variety of ways. Some eat other animals—this is a carnivorous diet (and includes insectivorous diets). Other mammals, called herbivores, eat plants, which contain complex carbohydrates such as cellulose. An herbivorous diet includes subtypes such as granivory (seed eating), folivory (leaf eating), frugivory (fruit eating), nectarivory (nectar eating), gummivory (gum eating) and mycophagy (fungus eating). The digestive tract of an herbivore is host to bacteria that ferment these complex substances, and make them available for digestion, which are either housed in the multichambered stomach or in a large cecum. Some mammals are coprophagous, consuming feces to absorb the nutrients not digested when the food was first ingested. An omnivore eats both prey and plants. Carnivorous mammals have a simple digestive tract because the proteins, lipids and minerals found in meat require little in the way of specialised digestion. Exceptions to this include baleen whales who also house gut flora in a multi-chambered stomach, like terrestrial herbivores.
The size of an animal is also a factor in determining diet type (Allen's rule). Since small mammals have a high ratio of heat-losing surface area to heat-generating volume, they tend to have high energy requirements and a high metabolic rate. Mammals that weigh less than about are mostly insectivorous because they cannot tolerate the slow, complex digestive process of an herbivore. Larger animals, on the other hand, generate more heat and less of this heat is lost. They can therefore tolerate either a slower collection process (carnivores that feed on larger vertebrates) or a slower digestive process (herbivores). Furthermore, mammals that weigh more than usually cannot collect enough insects during their waking hours to sustain themselves. The only large insectivorous mammals are those that feed on huge colonies of insects (ants or termites).
Some mammals are omnivores and display varying degrees of carnivory and herbivory, generally leaning in favour of one more than the other. Since plants and meat are digested differently, there is a preference for one over the other, as in bears where some species may be mostly carnivorous and others mostly herbivorous. They are grouped into three categories: mesocarnivory (50–70% meat), hypercarnivory (70% and greater of meat), and hypocarnivory (50% or less of meat). The dentition of hypocarnivores consists of dull, triangular carnassial teeth meant for grinding food. Hypercarnivores, however, have conical teeth and sharp carnassials meant for slashing, and in some cases strong jaws for bone-crushing, as in the case of hyenas, allowing them to consume bones; some extinct groups, notably the Machairodontinae, had sabre-shaped canines.
Some physiological carnivores consume plant matter and some physiological herbivores consume meat. From a behavioural aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. Physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. Thus, such animals are still able to be classified as carnivores and herbivores when they are just obtaining nutrients from materials originating from sources that do not seemingly complement their classification. For example, it is well documented that some ungulates such as giraffes, camels, and cattle, will gnaw on bones to consume particular minerals and nutrients. Also, cats, which are generally regarded as obligate carnivores, occasionally eat grass to regurgitate indigestible material (such as hairballs), aid with haemoglobin production, and as a laxative.
Many mammals, in the absence of sufficient food requirements in an environment, suppress their metabolism and conserve energy in a process known as hibernation. In the period preceding hibernation, larger mammals, such as bears, become polyphagic to increase fat stores, whereas smaller mammals prefer to collect and stash food. The slowing of the metabolism is accompanied by a decreased heart and respiratory rate, as well as a drop in internal temperatures, which can be around ambient temperature in some cases. For example, the internal temperatures of hibernating Arctic ground squirrels can drop to ; however, the head and neck always stay above . A few mammals in hot environments aestivate in times of drought or extreme heat, for example the fat-tailed dwarf lemur (Cheirogaleus medius).
Drinking
Intelligence
In intelligent mammals, such as primates, the cerebrum is larger relative to the rest of the brain. Intelligence itself is not easy to define, but indications of intelligence include the ability to learn, matched with behavioural flexibility. Rats, for example, are considered to be highly intelligent, as they can learn and perform new tasks, an ability that may be important when they first colonise a fresh habitat. In some mammals, food gathering appears to be related to intelligence: a deer feeding on plants has a brain smaller than a cat, which must think to outwit its prey.
Tool use by animals may indicate different levels of learning and cognition. The sea otter uses rocks as essential and regular parts of its foraging behaviour (smashing abalone from rocks or breaking open shells), with some populations spending 21% of their time making tools. Other tool use, such as chimpanzees using twigs to "fish" for termites, may be developed by watching others use tools and may even be a true example of animal teaching. Tools may even be used in solving puzzles in which the animal appears to experience a "Eureka moment". Other mammals that do not use tools, such as dogs, can also experience a Eureka moment.
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalisation quotient that can be used as another indication of animal intelligence. Sperm whales have the largest brain mass of any animal on earth, averaging and in mature males.
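As a rough illustration of this idea, the sketch below computes an encephalisation quotient as observed brain mass divided by the brain mass predicted from an assumed allometric power law; the constant and exponent used here are illustrative placeholders in the spirit of classic brain-body fits, not values taken from this article.

```python
# Hypothetical sketch: encephalisation quotient (EQ) from an assumed allometric fit.
# expected_brain = c * body_mass**k is the allometric expectation; c and k below are
# illustrative assumptions (a 2/3 exponent in the spirit of classic brain-body fits).

def encephalisation_quotient(brain_g: float, body_g: float,
                             c: float = 0.12, k: float = 2 / 3) -> float:
    expected_brain_g = c * body_g ** k      # brain mass predicted from body mass
    return brain_g / expected_brain_g       # > 1 means a larger brain than predicted

# Usage: compare a hypothetical 1,350 g brain in a 65 kg body against the assumed fit.
print(round(encephalisation_quotient(brain_g=1350.0, body_g=65_000.0), 1))
```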
Self-awareness appears to be a sign of abstract thinking. Self-awareness, although not well-defined, is believed to be a precursor to more advanced processes such as metacognitive reasoning. The traditional method for measuring this is the mirror test, which determines if an animal possesses the ability of self-recognition. Mammals that have passed the mirror test include Asian elephants (some pass, some do not); chimpanzees; bonobos; orangutans; humans, from 18 months (mirror stage); common bottlenose dolphins; orcas; and false killer whales.
Social structure
Eusociality is the highest level of social organisation. These societies have an overlap of adult generations, the division of reproductive labour and cooperative caring of young. Usually insects, such as bees, ants and termites, have eusocial behaviour, but it is demonstrated in two rodent species: the naked mole-rat and the Damaraland mole-rat.
Presociality is when animals exhibit more than just sexual interactions with members of the same species, but fall short of qualifying as eusocial. That is, presocial animals can display communal living, cooperative care of young, or primitive division of reproductive labour, but they do not display all of the three essential traits of eusocial animals. Humans and some species of Callitrichidae (marmosets and tamarins) are unique among primates in their degree of cooperative care of young. Harry Harlow set up an experiment with rhesus monkeys, presocial primates, in 1958; the results from this study showed that social encounters are necessary in order for the young monkeys to develop both mentally and sexually.
A fission–fusion society is a society that changes frequently in its size and composition, making up a permanent social group called the "parent group". Permanent social networks consist of all individual members of a community and often vary to track changes in their environment. In a fission–fusion society, the main parent group can fracture (fission) into smaller stable subgroups or individuals to adapt to environmental or social circumstances. For example, a number of males may break off from the main group in order to hunt or forage for food during the day, but at night they may return to join (fusion) the primary group to share food and partake in other activities. Many mammals exhibit this, such as primates (for example orangutans and spider monkeys), elephants, spotted hyenas, lions, and dolphins.
Solitary animals defend a territory and avoid social interactions with the members of its species, except during breeding season. This is to avoid resource competition, as two individuals of the same species would occupy the same niche, and to prevent depletion of food. A solitary animal, while foraging, can also be less conspicuous to predators or prey.
In a hierarchy, individuals are either dominant or submissive. A despotic hierarchy is where one individual is dominant while the others are submissive, as in wolves and lemurs, and a pecking order is a linear ranking of individuals where there is a top individual and a bottom individual. Pecking orders may also be ranked by sex, where the lowest individual of a sex has a higher ranking than the top individual of the other sex, as in hyenas. Dominant individuals, or alphas, have a high chance of reproductive success, especially in harems where one or a few males (resident males) have exclusive breeding rights to females in a group. Non-resident males can also be accepted in harems, but some species, such as the common vampire bat (Desmodus rotundus), may be more strict.
Some mammals are perfectly monogamous, meaning that they mate for life and take no other partners (even after the original mate's death), as with wolves, Eurasian beavers, and otters. There are three types of polygamy: one or several dominant males have breeding rights over multiple females (polygyny), a female mates with multiple males (polyandry), or multiple males have exclusive relations with multiple females (polygynandry). Polygynous mating is much more common and, excluding leks, is estimated to occur in up to 90% of mammals. Lek mating occurs when males congregate around females and try to attract them with various courtship displays and vocalisations, as in harbour seals.
All higher mammals (excluding monotremes) share two major adaptations for care of the young: live birth and lactation. These imply a group-wide choice of a degree of parental care. They may build nests and dig burrows to raise their young in, or feed and guard them often for a prolonged period of time. Many mammals are K-selected, and invest more time and energy into their young than do r-selected animals. When two animals mate, they both share an interest in the success of the offspring, though often to different extremes. Mammalian females exhibit some degree of maternal aggression, another example of parental care, which may be targeted against other females of the species or the young of other females; however, some mammals may "aunt" the infants of other females, and care for them. Mammalian males may play a role in child rearing, as with tenrecs; however, this varies from species to species, even within the same genus. For example, the males of the southern pig-tailed macaque (Macaca nemestrina) do not participate in child care, whereas the males of the Japanese macaque (M. fuscata) do.
Humans and other mammals
In human culture
Non-human mammals play a wide variety of roles in human culture. They are the most popular of pets, with tens of millions of dogs, cats and other animals including rabbits and mice kept by families around the world. Mammals such as mammoths, horses and deer are among the earliest subjects of art, being found in Upper Paleolithic cave paintings such as at Lascaux. Major artists such as Albrecht Dürer, George Stubbs and Edwin Landseer are known for their portraits of mammals. Many species of mammals have been hunted for sport and for food; deer and wild boar are especially popular as game animals. Mammals such as horses and dogs are widely raced for sport, often combined with betting on the outcome. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. Mammals further play a wide variety of roles in literature, film, mythology, and religion.
Uses and importance
The domestication of mammals was instrumental in the Neolithic development of agriculture and of civilisation, causing farmers to replace hunter-gatherers around the world. This transition from hunting and gathering to herding flocks and growing crops was a major step in human history. The new agricultural economies, based on domesticated mammals, caused "radical restructuring of human societies, worldwide alterations in biodiversity, and significant changes in the Earth's landforms and its atmosphere... momentous outcomes".
Domestic mammals form a large part of the livestock raised for meat across the world. They include (2009) around 1.4 billion cattle, 1 billion sheep, 1 billion domestic pigs, and (1985) over 700 million rabbits. Working domestic animals including cattle and horses have been used for work and transport from the origins of agriculture, their numbers declining with the arrival of mechanised transport and agricultural machinery. In 2004 they still provided some 80% of the power for the mainly small farms in the third world, and some 20% of the world's transport, again mainly in rural areas. In mountainous regions unsuitable for wheeled vehicles, pack animals continue to transport goods. Mammal skins provide leather for shoes, clothing and upholstery. Wool from mammals including sheep, goats and alpacas has been used for centuries for clothing.
Mammals serve a major role in science as experimental animals, both in fundamental biological research, such as in genetics, and in the development of new medicines, which must be tested exhaustively to demonstrate their safety. Millions of mammals, especially mice and rats, are used in experiments each year. A knockout mouse is a genetically modified mouse with an inactivated gene, replaced or disrupted with an artificial piece of DNA. They enable the study of sequenced genes whose functions are unknown. A small percentage of the mammals are non-human primates, used in research for their similarity to humans.
Despite the benefits domesticated mammals had for human development, humans have an increasingly detrimental effect on wild mammals across the world. It has been estimated that wild mammals have declined to only 4% of total mammalian biomass, with humans and their livestock now making up the remaining 96%. In fact, terrestrial wild mammals make up only 2% of all mammals.
Hybrids
Hybrids are offspring resulting from the breeding of two genetically distinct individuals, which usually will result in a high degree of heterozygosity, though hybrid and heterozygous are not synonymous. The deliberate or accidental hybridising of two or more species of closely related animals through captive breeding is a human activity which has been in existence for millennia and has grown for economic purposes. Hybrids between different subspecies within a species (such as between the Bengal tiger and Siberian tiger) are known as intra-specific hybrids. Hybrids between different species within the same genus (such as between lions and tigers) are known as interspecific hybrids or crosses. Hybrids between different genera (such as between sheep and goats) are known as intergeneric hybrids. Natural hybrids will occur in hybrid zones, where two populations of species within the same genera or species living in the same or adjacent areas will interbreed with each other. Some hybrids have been recognised as species, such as the red wolf (though this is controversial).
Artificial selection, the deliberate selective breeding of domestic animals, is being used to breed back recently extinct animals in an attempt to achieve an animal breed with a phenotype that resembles that extinct wildtype ancestor. A breeding-back (intraspecific) hybrid may be very similar to the extinct wildtype in appearance, ecological niche and to some extent genetics, but the initial gene pool of that wild type is lost forever with its extinction. As a result, bred-back breeds are at best vague look-alikes of extinct wildtypes, as Heck cattle are of the aurochs.
Purebred wild species evolved to a specific ecology can be threatened with extinction through the process of genetic pollution: uncontrolled hybridisation, introgression and genetic swamping, which lead to homogenisation or out-competition by the heterotic hybrid species. When new populations are imported or selectively bred by people, or when habitat modification brings previously isolated species into contact, extinction in some species, especially rare varieties, is possible. Interbreeding can swamp the rarer gene pool and create hybrids, depleting the purebred gene pool. For example, the endangered wild water buffalo is most threatened with extinction by genetic pollution from the domestic water buffalo. Such extinctions are not always apparent from a morphological standpoint. Some degree of gene flow is a normal evolutionary process; nevertheless, hybridisation threatens the existence of rare species.
Threats
The loss of species from ecological communities, defaunation, is primarily driven by human activity. This has resulted in empty forests, ecological communities depleted of large vertebrates. In the Quaternary extinction event, the mass die-off of megafaunal variety coincided with the appearance of humans, suggesting a human influence. One hypothesis is that humans hunted large mammals, such as the woolly mammoth, into extinction. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services by IPBES states that the total biomass of wild mammals has declined by 82 per cent since the beginning of human civilisation. Wild animals make up just 4% of mammalian biomass on earth, while humans and their domesticated animals make up 96%.
Various species are predicted to become extinct in the near future, among them the rhinoceros, giraffes, and species of primates and pangolins. According to the WWF's 2020 Living Planet Report, vertebrate wildlife populations have declined by 68% since 1970 as a result of human activities, particularly overconsumption, population growth and intensive farming, which is evidence that humans have triggered a sixth mass extinction event. Hunting alone threatens hundreds of mammalian species around the world. Scientists claim that the growing demand for meat is contributing to biodiversity loss as this is a significant driver of deforestation and habitat destruction; species-rich habitats, such as significant portions of the Amazon rainforest, are being converted to agricultural land for meat production. Another influence is over-hunting and poaching, which can reduce the overall population of game animals, especially those located near villages, as in the case of peccaries. The effects of poaching can especially be seen in the ivory trade with African elephants. Marine mammals are at risk from entanglement from fishing gear, notably cetaceans, with discard mortalities ranging from 65,000 to 86,000 individuals annually.
Attention is being given to endangered species globally, notably through the Convention on Biological Diversity, otherwise known as the Rio Accord, which includes 189 signatory countries that are focused on identifying endangered species and habitats. Another notable conservation organisation is the IUCN, which has a membership of over 1,200 governmental and non-governmental organisations.
Recent extinctions can be directly attributed to human influences. The IUCN characterises 'recent' extinctions as those that have occurred past the cut-off point of 1500, and around 80 mammal species went extinct between 1500 and 2015. Some species, such as the Père David's deer, are extinct in the wild, and survive solely in captive populations. Other species, such as the Florida panther, are ecologically extinct, surviving in such low numbers that they essentially have no impact on the ecosystem. Other populations are only locally extinct (extirpated), still existing elsewhere, but reduced in distribution, as with the extinction of grey whales in the Atlantic.
| Biology and health sciences | Biology | null |
18845 | https://en.wikipedia.org/wiki/Mouse | Mouse | A mouse (plural: mice) is a small rodent. Characteristically, mice are known to have a pointed snout, small rounded ears, a body-length scaly tail, and a high breeding rate. The best known mouse species is the common house mouse (Mus musculus). Mice are also popular as pets. In some places, certain kinds of field mice are locally common. They are known to invade homes for food and shelter.
Mice are typically distinguished from rats by their size. Generally, when a muroid rodent is discovered, its common name includes the term mouse if it is smaller, or rat if it is larger. The common terms rat and mouse are not taxonomically specific. Typical mice are classified in the genus Mus, but the term mouse is not confined to members of Mus and can also apply to species from other genera such as the deer mouse (Peromyscus).
Domestic mice sold as pets often differ substantially in size from the common house mouse. This is attributable to breeding and different conditions in the wild. The best-known strain of mouse is the white lab mouse. It has more uniform traits that are appropriate to its use in research.
Cats, wild dogs, foxes, birds of prey, snakes and certain kinds of arthropods have been known to prey upon mice. Despite this, mice populations remain plentiful. Due to its remarkable adaptability to almost any environment, the mouse is one of the most successful mammalian genera living on Earth today.
In certain contexts, mice can be considered vermin. Vermin are a major source of crop damage, as they are known to cause structural damage and spread disease. Mice spread disease through their feces and are often carriers of parasites. In North America, breathing dust that has come in contact with mouse excrement has been linked to hantavirus, which may lead to hantavirus pulmonary syndrome (HPS).
Primarily nocturnal animals, mice compensate for their poor eyesight with a keen sense of hearing. They depend on their sense of smell to locate food and avoid predators.
In the wild, mice are known to build intricate burrows. These burrows have long entrances and are equipped with escape tunnels. In at least one species, the architectural design of a burrow is a genetic trait.
Types of animals known as mice
The most common mice are murines, in the same clade as common rats. They are murids, along with gerbils and other close relatives.
order Dasyuromorphia
marsupial mice, smaller species of Dasyuridae
order Rodentia
suborder Castorimorpha
family Heteromyidae
Kangaroo mouse, genus Microdipodops
Pocket mouse, tribe Perognathinae
Spiny pocket mouse, genus Heteromys
suborder Anomaluromorpha
family Anomaluridae
flying mouse
suborder Myomorpha
family Cricetidae
Brush mouse, Peromyscus boylii
Florida mouse
Golden mouse
American harvest mouse, genus Reithrodontomys
Voles, often referred to as "field mice" or "meadow mice"
family Muridae
typical mice, the genus Mus
Field mice, genus Apodemus
Wood mouse, Apodemus sylvaticus
Yellow-necked mouse, Apodemus flavicollis
Large Mindoro forest mouse
Big-eared hopping mouse
Luzon montane forest mouse
Forrest's mouse
Pebble-mound mouse
Bolam's mouse
Eurasian harvest mouse, genus Micromys
Emotions
Researchers at the Max Planck Institute of Neurobiology have confirmed that mice have a range of facial expressions. They used machine vision to spot familiar human emotions like pleasure, disgust, nausea, pain, and fear.
Diet
In nature, mice are largely herbivores, consuming any kind of fruit or grain from plants. However, mice adapt well to urban areas and are known for eating almost all types of food scraps. In captivity, mice are commonly fed commercial pelleted mouse diet. These diets are nutritionally complete, but they still need a large variety of vegetables.
Despite popular belief, most mice do not have a special appetite for cheese. They will only eat cheese for lack of better options.
Human use
As experimental animals
Mice are common experimental animals in laboratory research of biology and psychology fields primarily because they are mammals, and also because they share a high degree of homology with humans. They are the most commonly used mammalian model organism, more common than rats. The mouse genome has been sequenced, and virtually all mouse genes have human homologs. The mouse has approximately 2.7 billion base pairs and 20 pairs of chromosomes.
They can also be manipulated in ways that are illegal with humans, although animal rights activists often object. A knockout mouse is a genetically modified mouse that has had one or more of its genes made inoperable through a gene knockout. Experimental mouse model systems include mouse models of colorectal and intestinal cancer, mouse models of Down syndrome and mouse models of breast cancer metastasis.
Reasons for common selection of mice are that they are small and inexpensive, have a widely varied diet, are easily maintained, and can reproduce quickly. Several generations of mice can be observed in a relatively short time. Mice are generally very docile if raised from birth and given sufficient human contact. However, certain strains have been known to be quite temperamental.
As pets
Many people buy mice as companion pets. They can be playful, loving and can grow used to being handled. Like pet rats, pet mice should not be left unsupervised outside as they have many natural predators, including (but not limited to) birds, snakes, lizards, cats, and dogs. Male mice tend to have a stronger odor than the females. However, mice are careful groomers and as pets they never need bathing. Well looked-after mice can make ideal pets. Some common mouse care products are:
Cage – Usually a hamster or gerbil cage, but a variety of special mouse cages are now available. Most should have a secure door.
Food – Special pelleted and seed-based food is available. Mice can generally eat most rodent food (for rats, mice, hamsters, gerbils, etc.)
Bedding – Usually made of hardwood pulp, such as aspen, sometimes from shredded, uninked paper or recycled virgin wood pulp. Corn husk bedding is avoided because it promotes Aspergillus fungus, can grow mold once it gets wet, and is rough on their feet.
As feed
Mice are a staple in the diet of many small carnivores. In various countries mice are used as feed for pets such as snakes, lizards, frogs, tarantulas, and birds of prey, and many pet stores carry mice for this purpose. Such mice are sold in various sizes and with various amounts of fur. Mice without fur are easier for the animal to consume; however, mice with fur may be more convincing as animal feed.
As food
Humans have eaten mice since prehistoric times. In Victorian Britain, fried mice were still given to children as a folk remedy for bed-wetting; while Jared Diamond reports creamed mice being used in England as a dietary supplement during Second World War rationing. Mice are a delicacy throughout eastern Zambia and northern Malawi, where they are a seasonal source of protein. Field rat is a popular food in Vietnam and neighboring countries. In many countries, however, mouse is no longer a food item.
Prescribed cures in Ancient Egypt included mice as medicine: when infants were ill, their mothers ate mice, in the belief that this would help heal the sick child.
| Biology and health sciences | Rodents | null |
18857 | https://en.wikipedia.org/wiki/Mustelidae | Mustelidae | The Mustelidae (from Latin mustela, "weasel") are a diverse family of carnivoran mammals, including weasels, badgers, otters, polecats, martens, grisons, and wolverines. Otherwise known as mustelids, they form the largest family in the suborder Caniformia of the order Carnivora with about 66 to 70 species in nine subfamilies.
Variety
Mustelids vary greatly in size and behaviour. The smaller variants of the least weasel can be under in length, while the giant otter of Amazonian South America can measure up to and sea otters can exceed in weight. Wolverines can crush bones as thick as the femur of a moose to get at the marrow, and have been seen attempting to drive bears away from their kills. The sea otter uses rocks to break open shellfish to eat. Martens are largely arboreal, while European badgers dig extensive tunnel networks, called setts. Only one mustelid has been domesticated: the ferret. Tayra are also kept as pets (although they require a Dangerous Wild Animals licence in the UK), or as working animals for hunting or vermin control. Others have been important in the fur trade—the mink is often raised for its fur.
Being one of the most species-rich families in the order Carnivora, the family Mustelidae also is one of the oldest. Mustelid-like forms first appeared about 40 million years ago (Mya), roughly coinciding with the appearance of rodents. The common ancestor of modern mustelids appeared about 18 Mya.
Characteristics
Within a large range of variation, the mustelids exhibit some common characteristics. They are typically small animals with elongated bodies, short legs, short skulls, short, round ears, and thick fur. Mustelids' long, slender body structure is adapted to three main lifestyles: terrestrial, arboreal, and aquatic/semi-aquatic. They exhibit digitigrade or plantigrade locomotion, with five toes on each foot, enabling them to move in different ways (i.e. digging, climbing, swimming). Most mustelids are solitary, nocturnal animals, and are active year-round. Their dense fur, often serving as natural camouflage, undergoes seasonal changes to help them adjust to varying environmental conditions.
With the exception of the sea otter they have anal scent glands that produce a strong-smelling secretion the animals use for sexual signalling and marking territory.
Mustelids exhibit sexual dimorphism, with males being larger than females, but degree varies between species as well as geographically within species. Male mustelids have a bifurcated penis and baculum. Most mustelid reproduction involves embryonic diapause. The embryo does not immediately implant in the uterus, but remains dormant for some time. No development takes place as long as the embryo remains unattached to the uterine lining. As a result, the normal gestation period is extended, sometimes up to a year. This allows the young to be born under favourable environmental conditions. Reproduction has a large energy cost, so it is to a female's benefit to have available food and mild weather. The young are more likely to survive if birth occurs after previous offspring have been weaned.
Mustelids are predominantly carnivorous, although some eat vegetable matter at times. While not all mustelids share an identical dentition, they all possess teeth adapted for eating flesh, including the presence of shearing carnassials. One characteristic trait is a meat-shearing upper-back molar that is rotated 90°, towards the inside of the mouth. With variation between species, the most common dental formula is .
Ecology
The fisher, tayra, and martens are partially arboreal, while badgers are fossorial. A number of mustelids have aquatic lifestyles, ranging from semiaquatic minks and river otters to the fully aquatic sea otter, which is one of the few nonprimate mammals known to use tools while foraging. It uses "anvil" stones to crack open the shellfish that form a significant part of its diet. It is a "keystone species", keeping its prey populations in balance so some do not outcompete the others and destroy the kelp in which they live.
The black-footed ferret is entirely dependent on another keystone species, the prairie dog. A family of four ferrets eats 250 prairie dogs in a year; this requires a stable population of prairie dogs from an area of some .
Animals of similar appearance
Skunks were previously included as a subfamily of the mustelids, but DNA research placed them in their own separate family (Mephitidae). Mongooses bear a striking resemblance to many mustelids, but belong to a distinctly different suborder—the Feliformia (all those carnivores sharing more recent origins with the cats) and not the Caniformia (those sharing more recent origins with the dogs). Because mongooses and mustelids occupy similar ecological niches, convergent evolution has led to similarity in form and behavior.
Human uses
Several mustelids, including the mink, the sable (a type of marten), and the stoat (ermine), possess furs that are considered beautiful and valuable, so have been hunted since prehistoric times. From the early Middle Ages, the trade in furs was of great economic importance for northern and eastern European nations with large native populations of fur-bearing mustelids, and was a major economic impetus behind Russian expansion into Siberia and French and English expansion in North America. In recent centuries fur farming, notably of mink, has also become widespread and provides the majority of the fur brought to market.
One species, the sea mink (Neogale macrodon) of New England and Canada, was driven to extinction by fur trappers. Its appearance and habits are almost unknown today because no complete specimens can be found and no systematic contemporary studies were conducted.
The sea otter, which has the densest fur of any animal, narrowly escaped the fate of the sea mink. The discovery of large populations in the North Pacific was the major economic driving force behind Russian expansion into Kamchatka, the Aleutian Islands, and Alaska, as well as a cause for conflict with Japan and foreign hunters in the Kuril Islands. Together with widespread hunting in California and British Columbia, the species was brought to the brink of extinction until an international moratorium came into effect in 1911.
Today, some mustelids are threatened for other reasons. Sea otters are vulnerable to oil spills and the indirect effects of overfishing; the black-footed ferret, a relative of the European polecat, suffers from the loss of American prairie; and wolverine populations are slowly declining because of habitat destruction and persecution. The rare European mink (Mustela lutreola) is one of the most endangered mustelid species.
The ferret, a domesticated European polecat, is a fairly common pet.
Evolution and systematics
Mustelidae is a family within Musteloidea, a superfamily of mammals that is united by shared skull and teeth characteristics. Mustelids are believed to have separated from their next closest related family, Procyonidae, around 29 million years ago. The oldest known mustelid from North America is Corumictis wolsani from the early and late Oligocene (early and late Arikareean, Ar1–Ar3) of Oregon. Middle Oligocene Mustelictis from Europe might be a mustelid, as well. Other early fossils of the mustelids were dated at the end of the Oligocene to the beginning of the Miocene. Which of these forms are Mustelidae ancestors and which should be considered the first mustelids is unclear.
The fossil record indicates that mustelids appeared in the late Oligocene period (33 Mya) in Eurasia and migrated to every continent except Antarctica and Australia (all the continents that were connected during or since the early Miocene). They reached the Americas via the Bering land bridge.
The 68 recent mustelids (66 extant species) are classified into eight subfamilies in 22 genera:
Subfamily Taxidiinae
Genus Taxidea
American badger, T. taxus
Subfamily Mellivorinae
Genus Mellivora
Honey badger, M. capensis
Subfamily Melinae
Genus Arctonyx
Northern hog badger, A. albogularis
Greater hog badger, A. collaris
Sumatran hog badger, A. hoevenii
Genus Meles
Japanese badger, M. anakuma
Asian badger, M. leucurus
European badger, M. meles
Caucasian badger, M. canescens
Subfamily Helictidinae
Genus Melogale
Vietnam ferret-badger, M. cucphuongensis
Bornean ferret-badger, M. everetti
Chinese ferret-badger, M. moschata
Javan ferret-badger, M. orientalis
Burmese ferret-badger, M. personata
Formosan ferret-badger, M. subaurantiaca
Subfamily Guloninae
Genus Eira
Tayra, E. barbara
Genus Gulo
Wolverine, G. gulo
Genus Martes
American marten, M. americana
Pacific marten, M. caurina
Yellow-throated marten, M. flavigula
Beech marten, M. foina
Nilgiri marten, M. gwatkinsii
European pine marten, M. martes
Japanese marten, M. melampus
Sable, M. zibellina
Genus Pekania
Fisher, P. pennanti
Subfamily Ictonychinae
Genus Galictis
Lesser grison, G. cuja
Greater grison, G. vittata
Genus Ictonyx
Saharan striped polecat, I. libycus
Striped polecat, I. striatus
Genus Lyncodon
Patagonian weasel, L. patagonicus
Genus Poecilogale
African striped weasel, P. albinucha
Genus Vormela
Marbled polecat, V. peregusna
Subfamily Lutrinae (otters)
Genus Aonyx
African clawless otter, A. capensis
Asian small-clawed otter, A. cinerea
Congo clawless otter, A. congicus
Genus Enhydra
Sea otter, E. lutris
Genus Lontra
North American river otter, L. canadensis
Marine otter, L. felina
Neotropical otter, L. longicaudis
Southern river otter, L. provocax
Genus Lutra
Eurasian otter, L. lutra
Hairy-nosed otter, L. sumatrana
Japanese otter, L. nippon
Genus Hydrictis
Spotted-necked otter, H. maculicollis
Genus Lutrogale
Smooth-coated otter, L. perspicillata
Genus Pteronura
Giant otter, P. brasiliensis
Subfamily Mustelinae (weasels, ferrets, and mink)
Genus Mustela
Mountain weasel, M. altaica
Stoat (Beringian ermine), M. erminea
Steppe polecat, M. eversmannii
Domestic ferret, M. furo
Haida ermine, M. haidarum
Japanese weasel, M. itatsi
Yellow-bellied weasel, M. kathiah
European mink, M. lutreola
Indonesian mountain weasel, M. lutreolina
Black-footed ferret, M. nigripes
Least weasel, M. nivalis
Malayan weasel, M. nudipes
European polecat, M. putorius
American ermine, M. richardsonii
Siberian weasel, M. sibirica
Back-striped weasel, M. strigidorsa
Genus Neogale
Amazon weasel, N. africana
Colombian weasel, N. felipei
Long-tailed weasel, N. frenata
American mink, N. vison
Sea mink, N. macrodon
Fossil mustelids
Extinct genera of the family Mustelidae include:
Brachypsalis
Chamitataxus
Corumictis
Cyrnaonyx
Ekorus
Enhydriodon
Eomellivora
Hoplictis
Megalictis
Oligobunis
Plesictis
Sthenictis
Teruelictis
Trochictis
Phylogeny
Multigene phylogenies constructed by Koepfli et al. (2008) and Law et al. (2018) found that Mustelidae comprises eight living subfamilies. The early mustelids appear to have undergone two rapid bursts of diversification in Eurasia, with the resulting species spreading to other continents only later.
Mustelid species diversity is often attributed to an adaptive radiation coinciding with the mid-Miocene climate transition. Contrary to expectations, Law et al. (2018) found no evidence for rapid bursts of lineage diversification at the origin of the Mustelidae, and further analyses of lineage diversification rates using molecular and fossil-based methods did not find associations between rates of lineage diversification and mid-Miocene climate transition as previously hypothesized.
| Biology and health sciences | Carnivora | null |
18881 | https://en.wikipedia.org/wiki/Mathematical%20induction | Mathematical induction | Mathematical induction is a method for proving that a statement P(n) is true for every natural number n, that is, that the infinitely many cases P(0), P(1), P(2), P(3), ... all hold. This is done by first proving a simple case, then also showing that if we assume the claim is true for a given case, then the next case is also true. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder.
A proof by induction consists of two cases. The first, the base case, proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n. The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N.
The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.
Despite its name, mathematical induction differs fundamentally from inductive reasoning as used in philosophy, in which the examination of many cases results in a probable conclusion. The mathematical method examines infinitely many cases to prove a general statement, but it does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values. The result is a rigorous proof of the statement, not an assertion of its probability.
History
In 370 BC, Plato's Parmenides may have contained traces of an early example of an implicit inductive proof, however, the earliest implicit proof by mathematical induction was written by al-Karaji around 1000 AD, who applied it to arithmetic sequences to prove the binomial theorem and properties of Pascal's triangle. Whilst the original work was lost, it was later referenced by Al-Samawal al-Maghribi in his treatise al-Bahir fi'l-jabr (The Brilliant in Algebra) in around 1150 AD.
Katz says in his history of mathematics
In India, early implicit proofs by mathematical induction appear in Bhaskara's "cyclic method".
None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed) was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first n odd integers is n².
The earliest rigorous use of induction was by Gersonides (1288–1344). The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent.
The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became well known. The modern formal treatment of the principle came only in the 19th century, with George Boole, Augustus De Morgan, Charles Sanders Peirce, Giuseppe Peano, and Richard Dedekind.
Description
The simplest and most common form of mathematical induction infers that a statement involving a natural number n (that is, an integer n ≥ 0 or n ≥ 1) holds for all values of n. The proof consists of two steps:
The base case (or initial case): prove that the statement holds for n = 0, or n = 1.
The induction step (or inductive step, or step case): prove that for every n, if the statement holds for n, then it holds for n + 1. In other words, assume that the statement holds for some arbitrary natural number n, and prove that the statement holds for n + 1.
The hypothesis in the induction step, that the statement holds for a particular n, is called the induction hypothesis or inductive hypothesis. To prove the induction step, one assumes the induction hypothesis for n and then uses this assumption to prove that the statement holds for n + 1.
Authors who prefer to define natural numbers to begin at 0 use that value in the base case; those who define natural numbers to begin at 1 use that value.
Examples
Sum of consecutive natural numbers
Mathematical induction can be used to prove the following statement for all natural numbers n: 0 + 1 + 2 + ⋯ + n = n(n + 1)/2.
This states a general formula for the sum of the natural numbers less than or equal to a given number; in fact an infinite sequence of statements: 0 = 0(0 + 1)/2, 0 + 1 = 1(1 + 1)/2, 0 + 1 + 2 = 2(2 + 1)/2, etc.
Proposition. For every n ≥ 0, 0 + 1 + 2 + ⋯ + n = n(n + 1)/2.
Proof. Let P(n) be the statement 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. We give a proof by induction on n.
Base case: Show that the statement holds for the smallest natural number n = 0.
P(0) is clearly true: 0 = 0(0 + 1)/2.
Induction step: Show that for every k ≥ 0, if P(k) holds, then P(k + 1) also holds.
Assume the induction hypothesis that for a particular k, the single case n = k holds, meaning P(k) is true: 0 + 1 + ⋯ + k = k(k + 1)/2.
It follows that: (0 + 1 + 2 + ⋯ + k) + (k + 1) = k(k + 1)/2 + (k + 1).
Algebraically, the right hand side simplifies as: k(k + 1)/2 + (k + 1) = (k(k + 1) + 2(k + 1))/2 = (k + 1)(k + 2)/2 = (k + 1)((k + 1) + 1)/2.
Equating the extreme left hand and right hand sides, we deduce that: 0 + 1 + 2 + ⋯ + k + (k + 1) = (k + 1)((k + 1) + 1)/2. That is, the statement P(k + 1) also holds true, establishing the induction step.
Conclusion: Since both the base case and the induction step have been proved as true, by mathematical induction the statement P(n) holds for every natural number n. Q.E.D.
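The closed form proved above is easy to spot-check numerically. The short sketch below is only an illustration, not part of the proof: it compares the explicit sum with n(n + 1)/2 for small n.

```python
# Spot-check of the identity 0 + 1 + ... + n == n*(n + 1)//2 for small n.
# This checks only finitely many cases; the induction proof above is what
# establishes the statement for every natural number n.
for n in range(1000):
    assert sum(range(n + 1)) == n * (n + 1) // 2
print("identity holds for n = 0..999")
```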
A trigonometric inequality
Induction is often used to prove inequalities. As an example, we prove that |sin nx| ≤ n|sin x| for any real number x and natural number n.
At first glance, it may appear that a more general version, |sin nx| ≤ n|sin x| for any real numbers n, x, could be proven without induction; but the case n = 1/2, x = π shows it may be false for non-integer values of n. This suggests we examine the statement specifically for natural values of n, and induction is the readiest tool.
Proposition. For any x ∈ ℝ and n ∈ ℕ, |sin nx| ≤ n|sin x|.
Proof. Fix an arbitrary real number x, and let P(n) be the statement |sin nx| ≤ n|sin x|. We induce on n.
Base case: The calculation |sin 0x| = 0 ≤ 0 = 0|sin x| verifies P(0).
Induction step: We show the implication P(k) ⟹ P(k + 1) for any natural number k. Assume the induction hypothesis: for a given value n = k ≥ 0, the single case P(k) is true. Using the angle addition formula and the triangle inequality, we deduce: |sin(k + 1)x| = |sin kx cos x + sin x cos kx| ≤ |sin kx||cos x| + |sin x||cos kx| ≤ |sin kx| + |sin x| ≤ k|sin x| + |sin x| = (k + 1)|sin x|.
The inequality between the extreme left-hand and right-hand quantities shows that P(k + 1) is true, which completes the induction step.
Conclusion: The proposition P(n) holds for all natural numbers n. Q.E.D.
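The inequality can likewise be sampled numerically. The sketch below checks |sin(nx)| ≤ n|sin x| on a grid of x values and small n; this is a finite check only, not a substitute for the induction argument.

```python
import math

# Finite numerical check of |sin(n*x)| <= n*|sin(x)| for sample x and small n.
# A tiny tolerance absorbs floating-point rounding error.
for n in range(50):
    for i in range(-200, 201):
        x = i * 0.05
        assert abs(math.sin(n * x)) <= n * abs(math.sin(x)) + 1e-9
print("inequality holds on all sampled points")
```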
Variants
In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven.
All variants of induction are special cases of transfinite induction; see below.
Base case other than 0 or 1
If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following:
Showing that the statement holds when n = b.
Showing that if the statement holds for an arbitrary number n ≥ b, then the same statement also holds for n + 1.
This can be used, for example, to show that for .
In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5. This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n), then proving it with these two rules is equivalent with proving P(n + b) for all natural numbers n with an induction base case 0.
Example: forming dollar amounts by coins
Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows:
Base case: Showing that S(k) holds for k = 12 is simple: take three 4-dollar coins.
Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too. Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true.
Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete.
In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars with any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m.
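The induction step above is constructive, so it can be turned into a small program: start from the base case 12 = 4 + 4 + 4 and repeatedly apply the replacement rules from the proof to build an explicit combination for any amount k ≥ 12. This is a sketch following the argument above, not code from the article.

```python
def coins(k: int) -> list[int]:
    """Return a list of 4- and 5-dollar coins summing to k, for any k >= 12,
    by mirroring the induction step of the proof above."""
    if k < 12:
        raise ValueError("only amounts of 12 dollars or more are covered")
    if k == 12:
        return [4, 4, 4]                      # base case
    combo = coins(k - 1)                      # induction hypothesis for k - 1
    if 4 in combo:
        combo.remove(4)                       # replace one 4-dollar coin...
        combo.append(5)                       # ...by a 5-dollar coin
    else:
        for _ in range(3):                    # only 5s: swap three 5-dollar coins...
            combo.remove(5)
        combo.extend([4, 4, 4, 4])            # ...for four 4-dollar coins
    return combo

for k in range(12, 200):
    assert sum(coins(k)) == k                 # every amount from 12 upward is reachable
print("verified amounts 12..199")
```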
Induction on more than one counter
It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an induction step for n, and in each of those proves a base case and an induction step for m. See, for example, the proof of commutativity accompanying addition of natural numbers. More complicated arguments involving three or more counters are also possible.
Infinite descent
The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n.
The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.
Limited mathematical induction
If one wishes to prove that a property P holds for all natural numbers less than or equal to n, proving that P satisfies the following conditions suffices:
P holds for 0,
For any natural number m less than n, if P holds for m, then P holds for m + 1.
Prefix induction
The most common form of proof by mathematical induction requires proving in the induction step that
for all k: P(k) implies P(k + 1),
whereupon the induction principle "automates" n applications of this step in getting from P(0) to P(n). This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor.
A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the induction step:
for all k: P(k) implies P(2k) and P(2k + 1),
or equivalently
for all k > 0: P(⌊k/2⌋) implies P(k).
The induction principle then "automates" log₂ n applications of this inference in getting from P(0) to P(n). In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number, as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation.
If traditional predecessor induction is interpreted computationally as an n-step loop, then prefix induction would correspond to a log-n-step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction.
Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement.
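To see why the prefix form corresponds to a logarithmic number of steps, consider a recursion that reduces k to ⌊k/2⌋, its binary "prefix". The sketch below counts binary digits that way, using about log₂ k recursive calls; it is an illustrative example chosen here, not one taken from the article.

```python
def bit_length(k: int) -> int:
    """Number of binary digits of k, computed by recursing on the 'prefix' k // 2.
    Correctness for every k follows in prefix-induction style: the claim for k is
    derived from the claim for k // 2, so only about log2(k) steps are needed."""
    if k == 0:
        return 0                      # base case
    return 1 + bit_length(k // 2)     # step: from the prefix k // 2 to k

assert all(bit_length(k) == k.bit_length() for k in range(10_000))
print("matches int.bit_length for k = 0..9999")
```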
One can take the idea a step further: one must prove
for all k > 1: P(⌊√k⌋) implies P(k),
whereupon the induction principle "automates" log log n applications of this inference in getting from P(0) to P(n). This form of induction has been used, analogously, to study log-time parallel computation.
Complete (strong) induction
Another variant, called complete induction, course of values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the induction step easier to prove by using a stronger hypothesis: one proves the statement P(m + 1) under the assumption that P(n) holds for all natural numbers n less than m + 1; by contrast, the basic form only assumes P(m). The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the induction step.
In fact, it can be shown that the two methods are actually equivalent, as explained below. In this form of complete induction, one still has to prove the base case, P(0), and it may even be necessary to prove extra-base cases such as P(1) before the general argument applies, as in the example below of the Fibonacci number F(n).
Although the form just described requires one to prove the base case, this is unnecessary if one can prove P(m) (assuming P(n) for all lower n) for all m ≥ 0. This is a special case of transfinite induction as described below, although it is no longer equivalent to ordinary induction. In this form the base case is subsumed by the case m = 0, where P(0) is proved with no other P(n) assumed; this case may need to be handled separately, but sometimes the same argument applies for m = 0 and m > 0, making the proof simpler and more elegant.
In this method, however, it is vital to ensure that the proof of P(m) does not implicitly assume that m > 0, e.g. by saying "choose an arbitrary n < m", or by assuming that a set of m elements has an element.
Equivalence with ordinary induction
Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. Suppose there is a proof of P(n) by complete induction. Then, this proof can be transformed into an ordinary induction proof by assuming a stronger inductive hypothesis. Let Q(n) be the statement "P(m) holds for all m such that 0 ≤ m ≤ n"; this becomes the inductive hypothesis for ordinary induction. We can then show Q(0) and Q(n + 1) for n ≥ 0 assuming only Q(n), and show that Q(n) implies P(n).
If, on the other hand, P(n) had been proven by ordinary induction, the proof would already effectively be one by complete induction: P(0) is proved in the base case, using no assumptions, and P(n + 1) is proved in the induction step, in which one may assume all earlier cases but need only use the case P(n).
Example: Fibonacci numbers
Complete induction is most useful when several instances of the inductive hypothesis are required for each induction step. For example, complete induction can be used to show that
F(n) = (φⁿ − ψⁿ)/(φ − ψ),
where F(n) is the n-th Fibonacci number, and φ = (1 + √5)/2 (the golden ratio) and ψ = (1 − √5)/2 are the roots of the polynomial x² − x − 1. By using the fact that F(n + 2) = F(n + 1) + F(n) for each n ∈ ℕ, the identity above can be verified by direct calculation for F(n + 2) if one assumes that it already holds for both F(n + 1) and F(n). To complete the proof, the identity must be verified in the two base cases: n = 0 and n = 1.
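The identity can be checked numerically against the recursive definition of the Fibonacci numbers. In the sketch below, φ and ψ are the two roots of x² = x + 1, and the comparison uses a small floating-point tolerance; it is an illustration, not part of the proof.

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio, a root of x^2 = x + 1
psi = (1 - math.sqrt(5)) / 2          # the other root

def fib(n: int) -> int:
    a, b = 0, 1                       # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

# Compare the closed form (phi**n - psi**n) / (phi - psi) with the recurrence.
for n in range(40):
    closed = (phi ** n - psi ** n) / (phi - psi)
    assert abs(closed - fib(n)) < 1e-6
print("the closed form matches F(0)..F(39)")
```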
Example: prime factorization
Another proof by complete induction uses the hypothesis that the statement holds for all smaller n more thoroughly. Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic. For proving the induction step, the induction hypothesis is that for a given m > 1 the statement holds for all smaller n > 1. If m is prime then it is certainly a product of primes, and if not, then by definition it is a product: m = n₁n₂, where neither of the factors is equal to 1; hence neither is equal to m, and so both are greater than 1 and smaller than m. The induction hypothesis now applies to n₁ and n₂, so each one is a product of primes. Thus m is a product of products of primes, and hence by extension a product of primes itself.
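The existence argument is directly constructive: split a composite number into two strictly smaller factors and factor those, exactly as the complete-induction step does. The sketch below illustrates that argument; it is not an efficient factoring algorithm and is not taken from the article.

```python
def prime_factors(m: int) -> list[int]:
    """Return a list of primes whose product is m (for m > 1), mirroring the
    complete-induction argument: a prime is its own factorisation, and a
    composite m = a * b with 1 < a, b < m is handled by the hypothesis on a and b."""
    for a in range(2, int(m ** 0.5) + 1):
        if m % a == 0:
            b = m // a
            return prime_factors(a) + prime_factors(b)   # both strictly smaller than m
    return [m]                                           # no proper divisor: m is prime

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]
print(prime_factors(360))
```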
Example: dollar amounts revisited
We shall look to prove the same example as above, this time with strong induction. The statement remains the same: S(k): "k dollars can be formed by a combination of 4- and 5-dollar coins", for all k ≥ 12.
However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case.
Proof.
Base case: Show that S(k) holds for k = 12, 13, 14 and 15.
Since 12 = 4 + 4 + 4, 13 = 4 + 4 + 5, 14 = 4 + 5 + 5 and 15 = 5 + 5 + 5, the base case holds.
Induction step: Given some k ≥ 16, assume S(m) holds for all m with 12 ≤ m < k. Prove that S(k) holds.
Choosing m = k − 4, and observing that 12 ≤ k − 4 < k, shows that S(k − 4) holds, by the inductive hypothesis. That is, the sum k − 4 can be formed by some combination of 4- and 5-dollar coins. Then, simply adding a 4-dollar coin to that combination yields the sum k. That is, S(k) holds. Q.E.D.
Forward-backward induction
Sometimes, it is more convenient to deduce backwards, proving the statement for n − 1, given its validity for n. However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the
inequality of arithmetic and geometric means for all powers of 2, and then used backwards induction to show it for all natural numbers.
Example of error in the induction step
The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:
Base case: in a set of only one horse, there is only one color.
Induction step: assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, ..., n, n + 1. Consider the sets {1, 2, 3, ..., n} and {2, 3, 4, ..., n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.
The base case n = 1 is trivial, and the induction step is correct in all cases n > 1. However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for the sets {1} and {2}.
Formalization
In second-order logic, one can write down the "axiom of induction" as follows:
∀P [ P(0) ∧ ∀k (P(k) → P(k + 1)) → ∀n P(n) ],
where P(·) is a variable for predicates involving one natural number and k and n are variables for natural numbers.
In words, the base case P(0) and the induction step (namely, that the induction hypothesis P(k) implies P(k + 1)) together imply that P(n) holds for any natural number n. The axiom of induction asserts the validity of inferring that P(n) holds for any natural number n from the base case and the induction step.
The first quantifier in the axiom ranges over predicates rather than over individual numbers. This is a second-order quantifier, which means that this axiom is stated in second-order logic. Axiomatizing arithmetic induction in first-order logic requires an axiom schema containing a separate axiom for each possible predicate. The article Peano axioms contains further discussion of this issue.
The axiom of structural induction for the natural numbers was first formulated by Peano, who used it to specify the natural numbers together with the following four other axioms:
0 is a natural number.
The successor S(n) of every natural number n yields a natural number.
The successor function is injective.
0 is not in the range of S.
In first-order ZFC set theory, quantification over predicates is not allowed, but one can still express induction by quantification over sets:
∀A [ 0 ∈ A ∧ ∀k (k ∈ A → (k + 1) ∈ A) → ℕ ⊆ A ].
A may be read as a set representing a proposition, and containing natural numbers, for which the proposition holds. This is not an axiom, but a theorem, given that natural numbers are defined in the language of ZFC set theory by axioms, analogous to Peano's. See construction of the natural numbers using the axiom of infinity and axiom schema of specification.
Transfinite induction
One variation of the principle of complete induction can be generalized for statements about elements of any well-founded set, that is, a set with an irreflexive relation < that contains no infinite descending chains. Every set representing an ordinal number is well-founded; the set of natural numbers is one of them.
Applied to a well-founded set, transfinite induction can be formulated as a single step. To prove that a statement $P(\alpha)$ holds for each ordinal number $\alpha$:
Show, for each ordinal number $\alpha$, that if $P(\beta)$ holds for all $\beta < \alpha$, then $P(\alpha)$ also holds.
This form of induction, when applied to a set of ordinal numbers (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields.
Proofs by transfinite induction typically distinguish three cases:
when $\alpha$ is a minimal element, i.e. there is no element smaller than $\alpha$;
when $\alpha$ has a direct predecessor, i.e. the set of elements which are smaller than $\alpha$ has a largest element;
when $\alpha$ has no direct predecessor, i.e. $\alpha$ is a so-called limit ordinal.
Strictly speaking, it is not necessary in transfinite induction to prove a base case, because it is a vacuous special case of the proposition that if $P$ is true of all $\beta < \alpha$, then $P$ is true of $\alpha$. It is vacuously true precisely because there are no values of $\beta < \alpha$ that could serve as counterexamples. So the special cases are special cases of the general case.
Relationship to the well-ordering principle
The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. It is strictly stronger than the well-ordering principle in the context of the other Peano axioms. Suppose the following:
The trichotomy axiom: For any natural numbers $n$ and $m$, $n$ is less than or equal to $m$ if and only if $m$ is not less than $n$.
For any natural number $n$, $n + 1$ is greater than $n$.
For any natural number $n$, no natural number is between $n$ and $n + 1$.
No natural number is less than zero.
It can then be proved that induction, given the above-listed axioms, implies the well-ordering principle. The following proof uses complete induction and the first and fourth axioms.
Proof. Suppose there exists a non-empty set, $S$, of natural numbers that has no least element. Let $P(n)$ be the assertion that $n$ is not in $S$. Then $P(0)$ is true, for if it were false then 0 would be the least element of $S$. Furthermore, let $n$ be a natural number, and suppose $P(m)$ is true for all natural numbers $m$ less than $n + 1$. Then if $P(n + 1)$ is false, $n + 1$ is in $S$, thus being a minimal element in $S$, a contradiction. Thus $P(n + 1)$ is true. Therefore, by the complete induction principle, $P(n)$ holds for all natural numbers $n$; so $S$ is empty, a contradiction. Q.E.D.
On the other hand, the set $\{(0, n) : n \in \mathbb{N}\} \cup \{(1, n) : n \in \mathbb{N}\}$ is well-ordered by the lexicographic order.
Moreover, except for the induction axiom, it satisfies all Peano axioms, where Peano's constant 0 is interpreted as the pair (0, 0), and Peano's successor function is defined on pairs by $\mathrm{succ}(x, n) = (x, n + 1)$ for all $x \in \{0, 1\}$ and $n \in \mathbb{N}$.
As an example for the violation of the induction axiom, define the predicate $P(x, n)$ as $(x, n) = (0, 0)$ or $(x, n) = \mathrm{succ}(y, m)$ for some $y \in \{0, 1\}$ and $m \in \mathbb{N}$. Then the base case $P(0, 0)$ is trivially true, and so is the induction step: if $P(x, n)$, then $P(\mathrm{succ}(x, n))$. However, $P$ is not true for all pairs in the set, since $P(1, 0)$ is false.
Peano's axioms with the induction principle uniquely model the natural numbers. Replacing the induction principle with the well-ordering principle allows for more exotic models that fulfill all the axioms.
It is mistakenly printed in several books and sources that the well-ordering principle is equivalent to the induction axiom. In the context of the other Peano axioms, this is not the case, but in the context of other axioms, they are equivalent; specifically, the well-ordering principle implies the induction axiom in the context of the first two above listed axioms and
Every natural number is either 0 or $n + 1$ for some natural number $n$.
A common mistake in many erroneous proofs is to assume that $n - 1$ is a unique and well-defined natural number, a property which is not implied by the other Peano axioms.
| Mathematics | Discrete mathematics | null |
18890 | https://en.wikipedia.org/wiki/Microsoft%20Windows | Microsoft Windows | Windows is a product line of proprietary graphical operating systems developed and marketed by Microsoft. It is grouped into families and subfamilies that cater to particular sectors of the computing industry – Windows (unqualified) for a consumer or corporate workstation, Windows Server for a server and Windows IoT for an embedded system. Windows is sold as either a consumer retail product or licensed to third-party hardware manufacturers who sell products bundled with Windows.
The first version of Windows, Windows 1.0, was released on November 20, 1985, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces (GUIs). The name "Windows" is a reference to the windowing system in GUIs. The 1990 release of Windows 3.0 catapulted its market success and led to various other product families, including the now-defunct Windows 9x, Windows Mobile, Windows Phone, and Windows CE/Embedded Compact. Windows is the most popular desktop operating system in the world, with a 70% market share according to StatCounter; however, when mobile operating systems are included, it is not the most used overall, a position held by Android.
The most recent version of Windows is Windows 11 for consumer PCs and tablets, Windows 11 Enterprise for corporations, and Windows Server 2025 for servers. Still supported are some editions of Windows 10, Windows Server 2016 or later (and exceptionally with paid support down to Windows Server 2008).
Product line
The only active top-level family is Windows NT. The first version, Windows NT 3.1, was intended for server computing and corporate workstations. It grew into a product line of its own and now consists of four sub-families that tend to be released almost simultaneously and share the same kernel.
Windows (unqualified): For a consumer or corporate workstation or tablet. The latest version is Windows 11. Its main competitors are macOS by Apple and Linux for personal computers and iPadOS and Android for tablets.
Of note: "Windows" refers to both the overall product line and this sub-family of it.
Windows Server: For a server computer. The latest version is Windows Server 2025. Unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux.
Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers (especially on many computers at once), recovery, or troubleshooting purposes. The latest version is Windows PE 10.
Windows IoT (previously Windows Embedded): For IoT and embedded computers. The latest version is Windows 11 IoT Enterprise. Like Windows Server, the main competitor of this family is Linux.
These top-level Windows families are no longer actively developed:
Windows 9x: Intended exclusively for the consumer market. The first version was Windows 95, which was followed by Windows 98. The last version was Windows Me (which was infamously known as one of the worst operating systems of all time, with PC World labeling it as "Mistake Edition" and placing it 4th in their list of Worst Tech Products in 2006). All versions of the Windows 9x family have a monolithic kernel that uses MS-DOS as a foundation alongside the kernel first used with Windows 95. This line is now defunct, with Microsoft catering to the consumer market with Windows NT starting with Windows XP.
Windows Mobile: The predecessor to Windows Phone, a mobile phone and PDA operating system. The first version was called Pocket PC 2000. The third version, Windows Mobile 2003, was the first version to adopt the Windows Mobile trademark. The last version was Windows Mobile 6.5.
Windows Phone: Sold only to smartphone manufacturers. The first version was Windows Phone 7, followed by Windows Phone 8 and Windows Phone 8.1. It was succeeded by Windows 10 Mobile, which is also defunct.
Windows Embedded Compact: Most commonly known by its former name, Windows CE, it is a hybrid kernel operating system optimized for low power and memory systems, with OEMs able to modify the UI to suit their needs. The final version was Windows Embedded Compact 2013, and it is succeeded by Windows IoT.
Version history
The term Windows collectively describes any or all of several generations of Microsoft operating system products. These products are generally categorized as follows:
Early versions
The history of Windows dates back to 1981 when Microsoft started work on a program called "Interface Manager". The name "Windows" comes from the fact that the system was one of the first to use graphical boxes to represent programs; in the industry, at the time, these were called "windows" and the underlying software was called "windowing software." It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. Windows 1.0 was to compete with Apple's operating system, but achieved little popularity. Windows 1.0 is not a complete operating system; rather, it extends MS-DOS. The shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Calendar, Cardfile, Clipboard Viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal and Write. Windows 1.0 does not allow overlapping windows. Instead, all windows are tiled. Only modal dialog boxes may appear over other windows. Microsoft sold Windows development libraries with the C development environment, which included numerous sample Windows applications.
Windows 2.0 was released in December 1987, and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows. The result of this change led to Apple Computer filing a suit against Microsoft alleging infringement on Apple's copyrights (eventually settled in court in Microsoft's favor in 1993). Windows 2.0 also introduced more sophisticated keyboard shortcuts and could make use of expanded memory.
Windows 2.1 was released in two different versions: Windows/286 and Windows/386. Windows/386 uses the virtual 8086 mode of the Intel 80386 to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory. Windows/286, in spite of its name, runs on both Intel 8086 and Intel 80286 processors. It runs in real mode but can make use of the high memory area.
In addition to full Windows packages, there were runtime-only versions that shipped with early Windows software from third parties and made it possible to run their Windows software on MS-DOS and without the full Windows feature set.
The early versions of Windows are often thought of as graphical shells, mostly because they ran on top of MS-DOS and used it for file system services. However, even the earliest Windows versions already assumed many typical operating system functions; notably, having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound). Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking. Windows implemented an elaborate, segment-based, software virtual memory scheme, which allows it to run applications larger than available memory: code segments and resources are swapped in and thrown away when memory became scarce; data segments moved in memory when a given application had relinquished processor control.
Windows 3.x
Windows 3.0, released in 1990, improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs) that allow Windows to share arbitrary devices between multi-tasked DOS applications. Windows 3.0 applications can run in protected mode, which gives them access to several megabytes of memory without the obligation to participate in the software virtual memory scheme. They run inside the same address space, where the segmented memory provides a degree of protection. Windows 3.0 also featured improvements to the user interface. Microsoft rewrote critical operations from C into assembly. Windows 3.0 was the first version of Windows to achieve broad commercial success, selling 2 million copies in the first six months.
Windows 3.1, made generally available on March 1, 1992, featured a facelift. In October 1992, Windows for Workgroups, a special version with integrated peer-to-peer networking features, was released. It was sold along with Windows 3.1. Support for Windows 3.1 ended on December 31, 2001.
Windows 3.2, released in 1994, is an updated version of the Chinese version of Windows 3.1. The update was limited to this language version, as it fixed only issues related to the complex writing system of the Chinese language. Windows 3.2 was generally sold by computer manufacturers with a ten-disk version of MS-DOS that also had Simplified Chinese characters in basic output and some translated utilities.
Windows 9x
The next major consumer-oriented release of Windows, Windows 95, was released on August 24, 1995. While still remaining MS-DOS-based, Windows 95 introduced support for native 32-bit applications, plug and play hardware, preemptive multitasking, long file names of up to 255 characters, and provided increased stability over its predecessors. Windows 95 also introduced a redesigned, object oriented user interface, replacing the previous Program Manager with the Start menu, taskbar, and Windows Explorer shell. Windows 95 was a major commercial success for Microsoft; Ina Fried of CNET remarked that "by the time Windows 95 was finally ushered off the market in 2001, it had become a fixture on computer desktops around the world." Microsoft published four OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack. The first OSR of Windows 95 was also the first version of Windows to be bundled with Microsoft's web browser, Internet Explorer. Mainstream support for Windows 95 ended on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001.
Windows 95 was followed up with the release of Windows 98 on June 25, 1998, which introduced the Windows Driver Model, support for USB composite devices, support for ACPI, hibernation, and support for multi-monitor configurations. Windows 98 also included integration with Internet Explorer 4 through Active Desktop and other aspects of the Windows Desktop Update (a series of enhancements to the Explorer shell which was also made available for Windows 95). In May 1999, Microsoft released Windows 98 Second Edition, an updated version of Windows 98. Windows 98 SE added Internet Explorer 5.0 and Windows Media Player 6.2 amongst other upgrades. Mainstream support for Windows 98 ended on June 30, 2002, and extended support for Windows 98 ended on July 11, 2006.
On September 14, 2000, Microsoft released Windows Me (Millennium Edition), the last DOS-based version of Windows. Windows Me incorporated visual interface enhancements from its Windows NT-based counterpart Windows 2000, had faster boot times than previous versions (which however, required the removal of the ability to access a real mode DOS environment, removing compatibility with some older programs), expanded multimedia functionality (including Windows Media Player 7, Windows Movie Maker, and the Windows Image Acquisition framework for retrieving images from scanners and digital cameras), additional system utilities such as System File Protection and System Restore, and updated home networking tools. However, Windows Me was faced with criticism for its speed and instability, along with hardware compatibility issues and its removal of real mode DOS support. PC World considered Windows Me to be one of the worst operating systems Microsoft had ever released, and the fourth worst tech product of all time.
Windows NT
Version history
Early versions (Windows NT 3.1/3.5/3.51/4.0/2000)
In November 1988, a new development team within Microsoft (which included former Digital Equipment Corporation developers Dave Cutler and Mark Lucovsky) began work on a revamped version of IBM and Microsoft's OS/2 operating system known as "NT OS/2". NT OS/2 was intended to be a secure, multi-user operating system with POSIX compatibility and a modular, portable kernel with preemptive multitasking and support for multiple processor architectures. However, following the successful release of Windows 3.0, the NT development team decided to rework the project to use an extended 32-bit port of the Windows API known as Win32 instead of those of OS/2. Win32 maintained a similar structure to the Windows APIs (allowing existing Windows applications to easily be ported to the platform), but also supported the capabilities of the existing NT kernel. Following its approval by Microsoft's staff, development continued on what was now Windows NT, the first 32-bit version of Windows. However, IBM objected to the changes, and ultimately continued OS/2 development on its own.
Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel.
The first release of the resulting operating system, Windows NT 3.1 (named to associate it with Windows 3.1) was released in July 1993, with versions for desktop workstations and servers. Windows NT 3.5 was released in September 1994, focusing on performance improvements and support for Novell's NetWare, and was followed up by Windows NT 3.51 in May 1995, which included additional improvements and support for the PowerPC architecture. Windows NT 4.0 was released in June 1996, introducing the redesigned interface of Windows 95 to the NT series. On February 17, 2000, Microsoft released Windows 2000, a successor to NT 4.0. The Windows NT name was dropped at this point in order to put a greater focus on the Windows brand.
Windows XP
The next major version of Windows NT, Windows XP, was released to manufacturing (RTM) on August 24, 2001, and to the general public on October 25, 2001. The introduction of Windows XP aimed to unify the consumer-oriented Windows 9x series with the architecture introduced by Windows NT, a change which Microsoft promised would provide better performance over its DOS-based predecessors. Windows XP would also introduce a redesigned user interface (including an updated Start menu and a "task-oriented" Windows Explorer), streamlined multimedia and networking features, Internet Explorer 6, integration with Microsoft's .NET Passport services, a "compatibility mode" to help provide backwards compatibility with software designed for previous versions of Windows, and Remote Assistance functionality.
At retail, Windows XP was marketed in two main editions: the "Home" edition was targeted towards consumers, while the "Professional" edition was targeted towards business environments and power users, and included additional security and networking features. Home and Professional were later accompanied by the "Media Center" edition (designed for home theater PCs, with an emphasis on support for DVD playback, TV tuner cards, DVR functionality, and remote controls), and the "Tablet PC" edition (designed for mobile devices meeting its specifications for a tablet computer, with support for stylus pen input and additional pen-enabled applications). Mainstream support for Windows XP ended on April 14, 2009. Extended support ended on April 8, 2014.
After Windows 2000, Microsoft also changed its release schedules for server operating systems; the server counterpart of Windows XP, Windows Server 2003, was released in April 2003. It was followed in December 2005, by Windows Server 2003 R2.
Windows Vista
After a lengthy development process, Windows Vista was released on November 30, 2006, for volume licensing and January 30, 2007, for consumers. It contained a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It was available in a number of different editions, and has been subject to some criticism, such as reduced performance, longer boot times, criticism of the new User Account Control (UAC), and a stricter license agreement. Vista's server counterpart, Windows Server 2008, was released in early 2008.
Windows 7
On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released to manufacturing (RTM) and released to the public three months later on October 22, 2009. Unlike its predecessor, Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista was already compatible. Windows 7 has multi-touch support, a redesigned Windows shell with an updated taskbar with revealable jump lists that contain shortcuts to files frequently used with specific applications and shortcuts to tasks within the application, a home networking system called HomeGroup, and performance improvements.
Windows 8 and 8.1
Windows 8, the successor to Windows 7, was released generally on October 26, 2012. A number of significant changes were made on Windows 8, including the introduction of a user interface based around Microsoft's Metro design language with optimizations for touch-based devices such as tablets and all-in-one PCs. These changes include the Start screen, which uses large tiles that are more convenient for touch interactions and allow for the display of continually updated information, and a new class of apps which are designed primarily for use on touch-based devices. The new Windows version required a minimum resolution of 1024×768 pixels, effectively making it unfit for netbooks with 800×600-pixel screens.
Other changes include increased integration with cloud services and other online platforms (such as social networks and Microsoft's own OneDrive (formerly SkyDrive) and Xbox Live services), the Windows Store service for software distribution, and a new variant known as Windows RT for use on devices that utilize the ARM architecture, and a new keyboard shortcut for screenshots. An update to Windows 8, called Windows 8.1, was released on October 17, 2013, and includes features such as new live tile sizes, deeper OneDrive integration, and many other revisions. Windows 8 and Windows 8.1 have been subject to some criticism, such as the removal of the Start menu.
Windows 10
On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 was made available as an upgrade for qualified devices running Windows 7 with SP1, Windows 8.1, and Windows Phone 8.1, through the Get Windows 10 application (for Windows 7 and Windows 8.1) or Windows Update (Windows 7).
In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300-gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, in about 8500 commits and 1760 Windows builds per day.
In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated their lifecycle policy pages for Windows 10, revealing that support for their last release of Windows 10 will end on October 14, 2025. On April 27, 2023, Microsoft announced that version 22H2 would be the last of Windows 10.
Windows 11
On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. It was released on October 5, 2021. Windows 11 is a free upgrade for Windows 10 users who meet the system requirements.
Windows 365
In July 2021, Microsoft announced that it would start selling subscriptions to virtualized Windows desktops as part of a new Windows 365 service in the following month. The service allows for cross-platform usage, aiming to make the operating system available to both Apple and Android users. It is a separate service and offers several variations including Windows 365 Frontline, Windows 365 Boot, and the Windows 365 app. The subscription service is accessible through any operating system with a web browser. The service is an attempt at capitalizing on the growing trend, fostered during the COVID-19 pandemic, for businesses to adopt a hybrid remote work environment, in which "employees split their time between the office and home". Because the service is accessible through web browsers, Microsoft is able to bypass the need to publish the service through Google Play or the Apple App Store.
Microsoft announced Windows 365 availability to business and enterprise customers on August 2, 2021.
Multilingual support
Multilingual support has been built into Windows since Windows 3.0. The language for both the keyboard and the interface can be changed through the Region and Language Control Panel. Components for all supported input languages, such as Input Method Editors, are automatically installed during Windows installation (in Windows XP and earlier, files for East Asian languages, such as Chinese, and files for right-to-left scripts, such as Arabic, may need to be installed separately, also from the said Control Panel). Third-party IMEs may also be installed if a user feels that the provided one is insufficient for their needs. Since Windows 2000, English editions of Windows NT have East Asian IMEs (such as Microsoft Pinyin IME and Microsoft Japanese IME) bundled, but files for East Asian languages may be manually installed on Control Panel.
Interface languages for the operating system are free for download, but some languages are limited to certain editions of Windows. Language Interface Packs (LIPs) are redistributable and may be downloaded from Microsoft's Download Center and installed for any edition of Windows (XP or later); they translate most, but not all, of the Windows interface, and require a certain base language (the language which Windows originally shipped with). This is used for most languages in emerging markets. Full Language Packs, which translate the complete operating system, are only available for specific editions of Windows (Ultimate and Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1 and RT except Single Language). They do not require a specific base language and are commonly used for more popular languages such as French or Chinese. These languages cannot be downloaded through the Download Center, but are available as optional updates through the Windows Update service (except Windows 8).
The interface language of installed applications is not affected by changes in the Windows interface language. The availability of languages depends on the application developers themselves.
Windows 8 and Windows Server 2012 introduce a new Language Control Panel where both the interface and input languages can be simultaneously changed, and language packs, regardless of type, can be downloaded from a central location. The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a counterpart settings page for this. Changing the interface language also changes the language of preinstalled Windows Store apps (such as Mail, Maps and News) and certain other Microsoft-developed apps (such as Remote Desktop). The above limitations for language packs are however still in effect, except that full language packs can be installed for any edition except Single Language, which caters to emerging markets.
Platform support
Windows NT included support for several platforms before the x86-based personal computer became dominant in the professional world. Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). Windows 2000 dropped support for all platforms, except the third generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still ran on IA-32 up to Windows 10 (the server line of the Windows NT family still ran on IA-32 up to Windows Server 2008).
With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition (Version 2003), released in 2003, is the last Windows client operating system to support Itanium. Windows Server line continues to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support Itanium architecture.
On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 editions to support x86-64 (or simply x64), the 64-bit version of x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. As of 2024, x64 is still supported.
An edition of Windows 8 known as Windows RT was specifically created for computers with ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated. Windows 10 Fall Creators Update (version 1709) and later versions include support for ARM-based PCs.
Windows CE
Windows CE (officially known as Windows Embedded Compact), is an edition of Windows that runs on minimalistic computers, like satellite navigation systems and some mobile phones. Windows Embedded Compact is based on its own dedicated kernel, dubbed Windows CE kernel. Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, while Windows CE provides the technical foundation to do so.
Windows CE was used in the Dreamcast along with Sega's own proprietary OS for the console. Windows CE was the core from which Windows Mobile was derived. Its successor, Windows Phone 7, was based on components from both Windows CE 6.0 R3 and Windows CE 7.0. Windows Phone 8 however, is based on the same NT-kernel as Windows 8.
Windows Embedded Compact is not to be confused with Windows XP Embedded or Windows NT 4.0 Embedded, modular editions of Windows based on Windows NT kernel.
Xbox OS
Xbox OS is an unofficial name given to the version of Windows that runs on Xbox consoles. From Xbox One onwards it is an implementation with an emphasis on virtualization (using Hyper-V) as it is three operating systems running at once, consisting of the core operating system, a second implemented for games and a more Windows-like environment for applications.
Microsoft updates Xbox One's OS every month, and these updates can be downloaded from the Xbox Live service to the Xbox and subsequently installed, or by using offline recovery images downloaded via a PC. It was originally based on NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One".
Xbox One and Xbox Series operating systems also allow limited (due to licensing restrictions and testing resources) backward compatibility with previous generation hardware, and the Xbox 360's system is backwards compatible with the original Xbox.
Version control system
Up to and including every version before Windows 2000, Microsoft used an in-house version control system named Source Library Manager (SLM). Shortly after Windows 2000 was released, Microsoft switched to a fork of Perforce named Source Depot. This system was used until 2017, when it could no longer keep up with the size of Windows. Microsoft had begun to integrate Git into Team Foundation Server in 2013, but Windows (and Office) continued to rely on Source Depot. The Windows code was divided among 65 different repositories with a kind of virtualization layer to produce a unified view of all of the code.
In 2017 Microsoft announced that it would start using Git, an open source version control system created by Linus Torvalds, and in May 2017 they reported that the migration into a new Git repository was complete.
VFSForGit
Because of its large, decades-long history, however, the Windows codebase is not especially well suited to the decentralized nature of Linux development that Git was originally created to manage. Each Git repository contains a complete history of all the files, which proved unworkable for Windows developers because cloning the whole repository takes several hours. Microsoft has been working on a new project called the Virtual File System for Git (VFSForGit) to address these challenges.
In 2021 the VFS for Git was superseded by Scalar.
Timeline of releases
Usage share and device sales
Use of Windows 10 has exceeded Windows 7 globally since early 2018.
For desktop and laptop computers, according to Net Applications and StatCounter (which track the use of operating systems in devices that are active on the Web), Windows was the most used operating-system family in August 2021, with around 91% usage share according to Net Applications and around 76% usage share according to StatCounter.
Including personal computers of all kinds (e.g., desktops, laptops, mobile devices, and game consoles), Windows OSes accounted for 32.67% of usage share in August 2021, compared to Android (highest, at 46.03%), iOS's 13.76%, iPadOS's 2.81%, and macOS's 2.51%, according to Net Applications and 30.73% of usage share in August 2021, compared to Android (highest, at 42.56%), iOS/iPadOS's 16.53%, and macOS's 6.51%, according to StatCounter.
Those statistics do not include servers (including cloud computing, where Linux has significantly more market share than Windows) as Net Applications and StatCounter use web browsing as a proxy for all use.
Security
Early versions of Windows were designed at a time when malware and networking were less common, and had few built-in security features; they did not provide access privileges to allow a user to prevent other users from accessing their files, and they did not provide memory protection to prevent one process from reading or writing another process's address space or to prevent a process from modifying code or data used by privileged-mode code.
While the Windows 9x series offered the option of having profiles for multiple users with separate profiles and home folders, it had no concept of access privileges, allowing any user to edit others' files. In addition, while it ran separate 32-bit applications in separate address spaces, protecting an application's code and data from being read or written by another application, it did not protect the first megabyte of memory from userland applications for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and by writing into this area of memory an application can crash or freeze the operating system. This was a source of instability as faulty applications could accidentally write into this region, potentially corrupting important operating system memory, which usually resulted in some form of system error and halt.
Windows NT was far more secure, implementing access privileges and full memory protection, and meeting the DoD's C2 security rating, yet these advantages were nullified by the fact that, prior to Windows Vista, the default user account created during the setup process was an administrator account; the user, and any program the user launched, had full access to the machine. Though Windows XP did offer an option of turning administrator accounts into limited accounts, the majority of home users did not do so, partially due to the number of programs which required administrator rights to function properly. As a result, most home users still ran as administrator all the time. These architectural flaws, combined with Windows's very high popularity, made Windows a frequent target of computer worm and virus writers.
Furthermore, although Windows NT and its successors are designed for security (including on a network) and multi-user PCs, they were not initially designed with Internet security in mind as much, since, when it was first developed in the early 1990s, Internet use was less prevalent.
In a 2002 strategy memo entitled "Trustworthy computing" sent to every Microsoft employee, Bill Gates declared that security should become Microsoft's highest priority.
Windows Vista introduced a privilege elevation system called User Account Control. When logging in as a standard user, a logon session is created and a token containing only the most basic privileges is assigned. In this way, the new logon session is incapable of making changes that would affect the entire system. When logging in as a user in the Administrators group, two separate tokens are assigned. The first token contains all privileges typically awarded to an administrator, and the second is a restricted token similar to what a standard user would receive. User applications, including the Windows shell, are then started with the restricted token, resulting in a reduced privilege environment even under an Administrator account. When an application requests higher privileges or "Run as administrator" is clicked, UAC will prompt for confirmation and, if consent is given (including administrator credentials if the account requesting the elevation is not a member of the administrators group), start the process using the unrestricted token.
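The split-token behavior can be observed from user code. The following Python sketch is illustrative only (it calls two documented shell32 functions via ctypes; the script structure itself is an assumption, not something described in this article): under the restricted token, IsUserAnAdmin reports False, and ShellExecuteW with the "runas" verb asks UAC to start an elevated copy of the process.

import ctypes
import sys

def is_elevated() -> bool:
    """Return True if the current process holds the unrestricted (admin) token."""
    return bool(ctypes.windll.shell32.IsUserAnAdmin())

if __name__ == "__main__" and sys.platform == "win32":
    if is_elevated():
        print("Running with the unrestricted administrator token.")
    else:
        # Ask UAC to re-launch this script with the unrestricted token;
        # the user sees the consent prompt described above.
        ctypes.windll.shell32.ShellExecuteW(
            None, "runas", sys.executable, " ".join(sys.argv), None, 1)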
Leaked documents from 2013 to 2016 codenamed Vault 7 detail the capabilities of the CIA to perform electronic surveillance and cyber warfare, such as the ability to compromise operating systems such as Windows.
In August 2019, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Windows versions via the program's Remote Desktop Protocol and allows for the possibility of remote code execution, may include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe) that affects all Windows versions from Windows XP to the then most recent Windows 10 versions; a patch to correct the flaw is available.
Microsoft releases security patches through its Windows Update service approximately once a month (usually the second Tuesday of the month), although critical updates are made available at shorter intervals when necessary. Versions subsequent to Windows 2000 SP3 and Windows XP implemented automatic download and installation of updates, substantially increasing the number of users installing security updates.
Windows integrates the Windows Defender antivirus, which is seen as one of the best available. Windows also implements Secure Boot, Control Flow Guard, ransomware protection, BitLocker disk encryption, a firewall, and Windows SmartScreen.
In July 2024, Microsoft signalled an intention to limit kernel access and improve overall security, following a highly publicised CrowdStrike update that caused 8.5 million Windows PCs to crash. Part of that initiative is to rewrite parts of Windows in Rust, a memory-safe language.
File permissions
All Windows versions from Windows NT 3 have been based on a file system permission system referred to as AGDLP (Accounts, Global, Domain Local, Permissions) in which file permissions are applied to the file/folder in the form of a 'local group' which then has other 'global groups' as members. These global groups then hold other groups or users depending on different Windows versions used. This system varies from other vendor products such as Linux and NetWare due to the 'static' allocation of permission being applied directly to the file or folder. However using this process of AGLP/AGDLP/AGUDLP allows a small number of static permissions to be applied and allows for easy changes to the account groups without reapplying the file permissions on the files and folders.
Vulnerabilities
Sticky keys and filter keys
The Sticky Keys and Filter Keys accessibility features are a well-known vulnerability of Windows: because their helper programs can be launched from the lock screen, replacing either executable with a renamed copy of cmd allows someone to run arbitrary commands before logging in, including making themselves an administrator.
WinRE
Main article: Windows Preinstallation Environment
Windows RE, which is built on Windows PE, is another significant vulnerability because it allows people to edit just about any program and execute many commands.
Alternative implementations
Owing to the operating system's popularity, a number of applications have been released that aim to provide compatibility with Windows applications, either as a compatibility layer for another operating system, or as a standalone system that can run software written for Windows out of the box. These include:
Wine – a free and open-source implementation of the Windows API, allowing one to run many Windows applications on x86-based platforms, including UNIX, Linux and macOS. Wine developers refer to it as a "compatibility layer" and use Windows-style APIs to emulate Windows environment.
CrossOver – a Wine package with licensed fonts. Its developers are regular contributors to Wine.
Proton – A fork of Wine by Valve to run Windows games on Linux and other Unix-like OS.
ReactOS – an open-source OS intended to run the same software as Windows, originally designed to simulate Windows NT 4.0, later aiming at Windows 7 compatibility. It has been in the development stage since 1996.
Freedows OS – an open-source attempt at creating a Windows clone for x86 platforms, intended to be released under the GNU General Public License. Started in 1996 by Reece K. Sellin, the project was never completed, getting only to the stage of design discussions which featured a number of novel concepts until it was suspended in 2002.
| Technology | Operating systems | null |
18899 | https://en.wikipedia.org/wiki/Mendelevium | Mendelevium | Mendelevium is a synthetic chemical element; it has symbol Md (formerly Mv) and atomic number 101. A metallic radioactive transuranium element in the actinide series, it is the first element by atomic number that currently cannot be produced in macroscopic quantities by neutron bombardment of lighter elements. It is the third-to-last actinide and the ninth transuranic element and the first transfermium. It can only be produced in particle accelerators by bombarding lighter elements with charged particles. Seventeen isotopes are known; the most stable is 258Md with half-life 51.59 days; however, the shorter-lived 256Md (half-life 77.7 minutes) is most commonly used in chemistry because it can be produced on a larger scale.
Mendelevium was discovered by bombarding einsteinium with alpha particles in 1955, the method still used to produce it today. It is named after Dmitri Mendeleev, the father of the periodic table. Using available microgram quantities of einsteinium-253, over a million mendelevium atoms may be made each hour. The chemistry of mendelevium is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. All known isotopes of mendelevium have short half-lives; there are currently no uses for it outside basic scientific research, and only small amounts are produced.
Discovery
Mendelevium was the ninth transuranic element to be synthesized. It was first synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey, and team leader Stanley G. Thompson in early 1955 at the University of California, Berkeley. The team produced 256Md (half-life of 77.7 minutes) when they bombarded an 253Es target consisting of only a billion (10⁹) einsteinium atoms with alpha particles (helium nuclei) in the Berkeley Radiation Laboratory's 60-inch cyclotron, thus increasing the target's atomic number by two. 256Md thus became the first isotope of any element to be synthesized one atom at a time. In total, seventeen mendelevium atoms were produced. This discovery was part of a program, begun in 1952, that irradiated plutonium with neutrons to transmute it into heavier actinides. This method was necessary as the previous method used to synthesize transuranic elements, neutron capture, could not work because of a lack of known beta decaying isotopes of fermium that would produce isotopes of the next element, mendelevium, and also due to the very short half-life to spontaneous fission of 258Fm that thus constituted a hard limit to the success of the neutron capture process.
To predict if the production of mendelevium would be possible, the team made use of a rough calculation. The number of atoms that would be produced would be approximately equal to the product of the number of atoms of target material, the target's cross section, the ion beam intensity, and the time of bombardment; this last factor was related to the half-life of the product when bombarding for a time on the order of its half-life. This gave one atom per experiment. Thus under optimum conditions, the preparation of only one atom of element 101 per experiment could be expected. This calculation demonstrated that it was feasible to go ahead with the experiment. The target material, einsteinium-253, could be produced readily from irradiating plutonium: one year of irradiation would give a billion atoms, and its three-week half-life meant that the element 101 experiments could be conducted in one week after the produced einsteinium was separated and purified to make the target. However, it was necessary to upgrade the cyclotron to obtain the needed intensity of 10¹⁴ alpha particles per second; Seaborg applied for the necessary funds.
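The rough calculation can be restated numerically. The Python sketch below is illustrative only: the cross section and the bombardment time are assumed order-of-magnitude values (they are not figures given in this article), chosen to show how the product of the four factors lands near one atom per experiment.

# N_product ≈ N_target × σ × (beam intensity / beam area) × time
n_target = 1e9            # einsteinium-253 atoms in the target
sigma = 1e-28             # assumed reaction cross section in cm^2 (~0.1 millibarn)
beam = 1e14               # alpha particles per second striking the target
area = 0.05               # irradiated area in cm^2
time = 77.7 * 60          # bombardment time in seconds, about one product half-life

flux = beam / area                          # particles per cm^2 per second
n_product = n_target * sigma * flux * time
print(f"expected mendelevium atoms per experiment ≈ {n_product:.1f}")  # ≈ 0.9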
While Seaborg applied for funding, Harvey worked on the einsteinium target, while Thomson and Choppin focused on methods for chemical isolation. Choppin suggested using α-hydroxyisobutyric acid to separate the mendelevium atoms from those of the lighter actinides. The actual synthesis was done by a recoil technique, introduced by Albert Ghiorso. In this technique, the einsteinium was placed on the opposite side of the target from the beam, so that the recoiling mendelevium atoms would get enough momentum to leave the target and be caught on a catcher foil made of gold. This recoil target was made by an electroplating technique, developed by Alfred Chetham-Strode. This technique gave a very high yield, which was absolutely necessary when working with such a rare and valuable product as the einsteinium target material. The recoil target consisted of 10⁹ atoms of 253Es which were deposited electrolytically on a thin gold foil. It was bombarded by 41 MeV alpha particles in the Berkeley cyclotron with a very high beam density of 6×10¹³ particles per second over an area of 0.05 cm². The target was cooled by water or liquid helium, and the foil could be replaced.
Initial experiments were carried out in September 1954. No alpha decay was seen from mendelevium atoms; thus, Ghiorso suggested that the mendelevium had all decayed by electron capture to fermium and that the experiment should be repeated to search instead for spontaneous fission events. The repetition of the experiment happened in February 1955.
On the day of discovery, 19 February, alpha irradiation of the einsteinium target occurred in three three-hour sessions. The cyclotron was in the University of California campus, while the Radiation Laboratory was on the next hill. To deal with this situation, a complex procedure was used: Ghiorso took the catcher foils (there were three targets and three foils) from the cyclotron to Harvey, who would use aqua regia to dissolve it and pass it through an anion-exchange resin column to separate out the transuranium elements from the gold and other products. The resultant drops entered a test tube, which Choppin and Ghiorso took in a car to get to the Radiation Laboratory as soon as possible. There Thompson and Choppin used a cation-exchange resin column and the α-hydroxyisobutyric acid. The solution drops were collected on platinum disks and dried under heat lamps. The three disks were expected to contain respectively the fermium, no new elements, and the mendelevium. Finally, they were placed in their own counters, which were connected to recorders such that spontaneous fission events would be recorded as huge deflections in a graph showing the number and time of the decays. There thus was no direct detection, but by observation of spontaneous fission events arising from its electron-capture daughter 256Fm. The first one was identified with a "hooray" followed by a "double hooray" and a "triple hooray". The fourth one eventually officially proved the chemical identification of the 101st element, mendelevium. In total, five decays were reported up until 4 a.m. Seaborg was notified and the team left to sleep. Additional analysis and further experimentation showed the produced mendelevium isotope to have mass 256 and to decay by electron capture to fermium-256 with a half-life of 157.6 minutes.
Being the first of the second hundred of the chemical elements, it was decided that the element would be named "mendelevium" after the Russian chemist Dmitri Mendeleev, father of the periodic table. Because this discovery came during the Cold War, Seaborg had to request permission of the government of the United States to propose that the element be named for a Russian, but it was granted. The name "mendelevium" was accepted by the International Union of Pure and Applied Chemistry (IUPAC) in 1955 with symbol "Mv", which was changed to "Md" in the next IUPAC General Assembly (Paris, 1957).
Characteristics
Physical
In the periodic table, mendelevium is located to the right of the actinide fermium, to the left of the actinide nobelium, and below the lanthanide thulium. Mendelevium metal has not yet been prepared in bulk quantities, and bulk preparation is currently impossible. Nevertheless, a number of predictions and some preliminary experimental results have been done regarding its properties.
The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fⁿs² configurations, whereas the latter have fⁿ⁻¹d¹s² configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f¹²6d¹7s² configuration over the [Rn]5f¹³7s² configuration for mendelevium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number. Thermochromatographic studies with trace quantities of mendelevium by Zvara and Hübener from 1976 to 1982 confirmed this prediction. In 1990, Haire and Gibson estimated mendelevium metal to have an enthalpy of sublimation between 134 and 142 kJ/mol. Divalent mendelevium metal should have a metallic radius of around . Like the other divalent late actinides (except the once again trivalent lawrencium), metallic mendelevium should assume a face-centered cubic crystal structure. Mendelevium's melting point has been estimated at 800 °C, the same value as that predicted for the neighboring element nobelium. Its density is predicted to be around .
Chemical
The chemistry of mendelevium is mostly known only in solution, in which it can take on the +3 or +2 oxidation states. The +1 state has also been reported, but has not yet been confirmed.
Before mendelevium's discovery, Seaborg and Katz predicted that it should be predominantly trivalent in aqueous solution and hence should behave similarly to other tripositive lanthanides and actinides. After the synthesis of mendelevium in 1955, these predictions were confirmed, first in the observation at its discovery that it eluted just after fermium in the trivalent actinide elution sequence from a cation-exchange column of resin, and later the 1967 observation that mendelevium could form insoluble hydroxides and fluorides that coprecipitated with trivalent lanthanide salts. Cation-exchange and solvent extraction studies led to the conclusion that mendelevium was a trivalent actinide with an ionic radius somewhat smaller than that of the previous actinide, fermium. Mendelevium can form coordination complexes with 1,2-cyclohexanedinitrilotetraacetic acid (DCTA).
In reducing conditions, mendelevium(III) can be easily reduced to mendelevium(II), which is stable in aqueous solution. The standard reduction potential of the E°(Md³⁺→Md²⁺) couple was variously estimated in 1967 as −0.10 V or −0.20 V: later 2013 experiments established the value as . In comparison, E°(Md³⁺→Md⁰) should be around −1.74 V, and E°(Md²⁺→Md⁰) should be around −2.5 V. Mendelevium(II)'s elution behavior has been compared with that of strontium(II) and europium(II).
In 1973, mendelevium(I) was reported to have been produced by Russian scientists, who obtained it by reducing higher oxidation states of mendelevium with samarium(II). It was found to be stable in neutral water–ethanol solution and be homologous to caesium(I). However, later experiments found no evidence for mendelevium(I) and found that mendelevium behaved like divalent elements when reduced, not like the monovalent alkali metals. Nevertheless, the Russian team conducted further studies on the thermodynamics of cocrystallizing mendelevium with alkali metal chlorides, and concluded that mendelevium(I) had formed and could form mixed crystals with divalent elements, thus cocrystallizing with them. The status of the +1 oxidation state is still tentative.
The electrode potential E°(Md⁴⁺→Md³⁺) was predicted in 1975 to be +5.4 V; 1967 experiments with the strong oxidizing agent sodium bismuthate were unable to oxidize mendelevium(III) to mendelevium(IV).
Atomic
A mendelevium atom has 101 electrons. They are expected to be arranged in the configuration [Rn]5f¹³7s² (ground state term symbol ²F₇/₂), although experimental verification of this electron configuration had not yet been made as of 2006. The fifteen electrons in the 5f and 7s subshells are valence electrons. In forming compounds, three valence electrons may be lost, leaving behind a [Rn]5f¹² core: this conforms to the trend set by the other actinides with their [Rn]5fⁿ electron configurations in the tripositive state. The first ionization potential of mendelevium was measured to be at most (6.58 ± 0.07) eV in 1974, based on the assumption that the 7s electrons would ionize before the 5f ones; this value has since not yet been refined further due to mendelevium's scarcity and high radioactivity. The ionic radius of hexacoordinate Md³⁺ had been preliminarily estimated in 1978 to be around 91.2 pm; 1988 calculations based on the logarithmic trend between distribution coefficients and ionic radius produced a value of 89.6 pm, as well as an enthalpy of hydration of . Md²⁺ should have an ionic radius of 115 pm and hydration enthalpy −1413 kJ/mol; Md⁺ should have ionic radius 117 pm.
Isotopes
Seventeen isotopes of mendelevium are known, with mass numbers from 244 to 260; all are radioactive. Additionally, 14 nuclear isomers are known. Of these, the longest-lived isotope is 258Md with a half-life of 51.59 days, and the longest-lived isomer is 258mMd with a half-life of 57.0 minutes. Nevertheless, the shorter-lived 256Md (half-life 1.295 hours) is more often used in chemical experimentation because it can be produced in larger quantities from alpha particle irradiation of einsteinium. After 258Md, the next most stable mendelevium isotopes are 260Md with a half-life of 27.8 days, 257Md with a half-life of 5.52 hours, 259Md with a half-life of 1.60 hours, and 256Md with a half-life of 1.295 hours. All of the remaining mendelevium isotopes have half-lives that are less than an hour, and the majority of these have half-lives that are less than 5 minutes.
The half-lives of mendelevium isotopes mostly increase smoothly from 244Md onwards, reaching a maximum at 258Md. Experiments and predictions suggest that the half-lives will then decrease, apart from 260Md with a half-life of 27.8 days, as spontaneous fission becomes the dominant decay mode due to the mutual repulsion of the protons posing a limit to the island of relative stability of long-lived nuclei in the actinide series. In addition, mendelevium is the element with the highest atomic number that has a known isotope with a half-life longer than one day.
Mendelevium-256, the chemically most important isotope of mendelevium, decays through electron capture 90% of the time and alpha decay 10% of the time. It is most easily detected through the spontaneous fission of its electron capture daughter fermium-256, but in the presence of other nuclides that undergo spontaneous fission, alpha decays at the characteristic energies for mendelevium-256 (7.205 and 7.139 MeV) can provide more useful identification.
Production and isolation
The lightest isotopes (244Md to 247Md) are mostly produced through bombardment of bismuth targets with argon ions, while slightly heavier ones (248Md to 253Md) are produced by bombarding plutonium and americium targets with ions of carbon and nitrogen. The most important and most stable isotopes are in the range from 254Md to 258Md and are produced through bombardment of einsteinium with alpha particles: einsteinium-253, −254, and −255 can all be used. 259Md is produced as a daughter of 259No, and 260Md can be produced in a transfer reaction between einsteinium-254 and oxygen-18. Typically, the most commonly used isotope 256Md is produced by bombarding either einsteinium-253 or −254 with alpha particles: einsteinium-254 is preferred when available because it has a longer half-life and therefore can be used as a target for longer. Using available microgram quantities of einsteinium, femtogram quantities of mendelevium-256 may be produced.
The recoil momentum of the produced mendelevium-256 atoms is used to bring them physically far away from the einsteinium target from which they are produced, bringing them onto a thin foil of metal (usually beryllium, aluminium, platinum, or gold) just behind the target in a vacuum. This eliminates the need for immediate chemical separation, which is both costly and prevents reusing of the expensive einsteinium target. The mendelevium atoms are then trapped in a gas atmosphere (frequently helium), and a gas jet from a small opening in the reaction chamber carries the mendelevium along. Using a long capillary tube, and including potassium chloride aerosols in the helium gas, the mendelevium atoms can be transported over tens of meters to be chemically analyzed and have their quantity determined. The mendelevium can then be separated from the foil material and other fission products by applying acid to the foil and then coprecipitating the mendelevium with lanthanum fluoride, then using a cation-exchange resin column with a 10% ethanol solution saturated with hydrochloric acid, acting as an eluant. However, if the foil is made of gold and thin enough, it is enough to simply dissolve the gold in aqua regia before separating the trivalent actinides from the gold using anion-exchange chromatography, the eluant being 6 M hydrochloric acid.
Mendelevium can finally be separated from the other trivalent actinides using selective elution from a cation-exchange resin column, the eluant being ammonia α-HIB. Using the gas-jet method often renders the first two steps unnecessary. The above procedure is the most commonly used one for the separation of transeinsteinium elements.
Another possible way to separate the trivalent actinides is via solvent extraction chromatography using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column, so that the heavier actinides elute later. The mendelevium separated by this method has the advantage of being free of organic complexing agent compared to the resin column; the disadvantage is that mendelevium then elutes very late in the elution sequence, after fermium.
Another method to isolate mendelevium exploits the distinct elution properties of Md2+ from those of Es3+ and Fm3+. The initial steps are the same as above, and employs HDEHP for extraction chromatography, but coprecipitates the mendelevium with terbium fluoride instead of lanthanum fluoride. Then, 50 mg of chromium is added to the mendelevium to reduce it to the +2 state in 0.1 M hydrochloric acid with zinc or mercury. The solvent extraction then proceeds, and while the trivalent and tetravalent lanthanides and actinides remain on the column, mendelevium(II) does not and stays in the hydrochloric acid. It is then reoxidized to the +3 state using hydrogen peroxide and then isolated by selective elution with 2 M hydrochloric acid (to remove impurities, including chromium) and finally 6 M hydrochloric acid (to remove the mendelevium). It is also possible to use a column of cationite and zinc amalgam, using 1 M hydrochloric acid as an eluant, reducing Md(III) to Md(II) where it behaves like the alkaline earth metals. Thermochromatographic chemical isolation could be achieved using the volatile mendelevium hexafluoroacetylacetonate: the analogous fermium compound is also known and is also volatile.
Toxicity
Though few people come in contact with mendelevium, the International Commission on Radiological Protection has set annual exposure limits for the most stable isotope. For mendelevium-258, the ingestion limit was set at 9 × 10^5 becquerels (1 Bq = 1 decay per second). Given the half-life of this isotope, this corresponds to only 2.48 ng (nanograms). The inhalation limit is 6000 Bq, or about 16.5 pg (picograms).
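The conversion between an activity limit and the corresponding mass follows directly from the decay law A = λN. The following minimal Python sketch carries out that arithmetic using the figures quoted above (the function name is illustrative only):

```python
import math

AVOGADRO = 6.02214076e23  # atoms per mole

def mass_for_activity(activity_bq, half_life_s, molar_mass_g):
    """Mass in grams of a pure radionuclide with the given activity (A = lambda * N)."""
    decay_constant = math.log(2) / half_life_s   # lambda, per second
    atoms = activity_bq / decay_constant         # N = A / lambda
    return atoms * molar_mass_g / AVOGADRO

half_life_md258 = 51.59 * 24 * 3600              # 51.59 days, in seconds
print(mass_for_activity(9e5, half_life_md258, 258))  # ~2.5e-9 g, i.e. about 2.5 ng
print(mass_for_activity(6e3, half_life_md258, 258))  # ~1.7e-11 g, i.e. about 17 pg
```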
| Physical sciences | Actinides | Chemistry |
18900 | https://en.wikipedia.org/wiki/Modus%20ponens | Modus ponens | In propositional logic, modus ponens (MP), also known as modus ponendo ponens (Latin for "mode that by affirming affirms"), implication elimination, or affirming the antecedent, is a deductive argument form and rule of inference. It can be summarized as "P implies Q. P is true. Therefore, Q must also be true."
Modus ponens is a mixed hypothetical syllogism and is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms: affirming the consequent and denying the antecedent. Constructive dilemma is the disjunctive version of modus ponens.
The history of modus ponens goes back to antiquity. The first to explicitly describe the argument form modus ponens was Theophrastus. It, along with modus tollens, is one of the standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal.
Explanation
The form of a modus ponens argument is a mixed hypothetical syllogism, with two premises and a conclusion:
If P, then Q.
P.
Therefore, Q.
The first premise is a conditional ("if–then") claim, namely that P implies Q. The second premise is an assertion that P, the antecedent of the conditional claim, is the case. From these two premises it can be logically concluded that Q, the consequent of the conditional claim, must be the case as well.
An example of an argument that fits the form modus ponens:
If today is Tuesday, then John will go to work.
Today is Tuesday.
Therefore, John will go to work.
This argument is valid, but this has no bearing on whether any of the statements in the argument are actually true; for modus ponens to be a sound argument, the premises must be true for any true instances of the conclusion. An argument can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the reasoning for John's going to work (because it is Wednesday) is unsound. The argument is only sound on Tuesdays (when John goes to work), but valid on every day of the week. A propositional argument using modus ponens is said to be deductive.
In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut, and hence that Cut is admissible.
The Curry–Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q.
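This correspondence can be illustrated with a short, generic Python sketch in which type hints stand in for the propositions (the function name modus_ponens is purely illustrative):

```python
from typing import Callable, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")

def modus_ponens(f: Callable[[P], Q], x: P) -> Q:
    # A proof of "P implies Q" is a function P -> Q; a proof of P is a value of type P.
    # Applying the function yields a value of type Q, i.e. a proof of Q.
    return f(x)

# Example with P = int and Q = str: "if n is an int, then str(n) is a str".
print(modus_ponens(str, 42))  # prints "42"
```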
In artificial intelligence, modus ponens is often called forward chaining.
Formal notation
The modus ponens rule may be written in sequent notation as
P → Q, P ⊢ Q
where P, Q and P → Q are statements (or propositions) in a formal language and ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P and P → Q in some logical system.
Justification via truth table
The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table.
In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table—the first—satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true.
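The same check can be carried out mechanically; the following short Python sketch enumerates the four valuations of p and q and confirms that whenever both premises hold, so does the conclusion:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    implies = (not p) or q          # material conditional p -> q
    if implies and p:               # both premises are true
        assert q                    # modus ponens: the conclusion q must also be true
        print(p, q, implies)        # only the valuation p = True, q = True is printed
```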
Status
While modus ponens is one of the most commonly used argument forms in logic it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the "rule of definition" and the "rule of substitution". Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment or the law of detachment. Enderton, for example, observes that "modus ponens can produce shorter formulas from longer ones", and Russell observes that "the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] ... an inference is the dropping of a true premise; it is the dissolution of an implication".
A justification for the "trust in inference is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error". In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true.
Correspondence to other mathematical frameworks
Algebraic semantics
In mathematical logic, algebraic semantics treats every sentence as a name for an element in an ordered set. Typically, the set can be visualized as a lattice-like structure with a single element (the "always-true") at the top and another single element (the "always-false") at the bottom. Logical equivalence becomes identity, so that when ¬¬P and P, for instance, are equivalent (as is standard), then ¬¬P = P. Logical implication becomes a matter of relative position: P logically implies Q just in case P ≤ Q, i.e., when either P = Q or else P lies below Q and is connected to it by an upward path.
In this context, to say that P and P → Q together imply Q (that is, to affirm modus ponens as valid) is to say that the highest point which lies below both P and P → Q lies below Q, i.e., that P ∧ (P → Q) ≤ Q. In the semantics for basic propositional logic, the algebra is Boolean, with → construed as the material conditional: P → Q = ¬P ∨ Q. Confirming that P ∧ (P → Q) ≤ Q is then straightforward, because P ∧ (P → Q) = P ∧ Q and P ∧ Q ≤ Q. With other treatments of →, the semantics becomes more complex, the algebra may be non-Boolean, and the validity of modus ponens cannot be taken for granted.
Probability calculus
If Pr(P → Q) = x and Pr(P) = y, then Pr(Q) must lie in the interval [x + y − 1, x]. For the special case x = y = 1, Pr(Q) must equal 1.
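For example, if Pr(P → Q) = 0.9 and Pr(P) = 0.8, then Pr(Q) must lie in the interval [0.7, 0.9]: Pr(Q) is at most 0.9 because Q implies P → Q, and at least 0.7 because Pr(P ∧ Q) = Pr(P) − Pr(P ∧ ¬Q) ≥ 0.8 − 0.1.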
Subjective logic
Modus ponens represents an instance of the binomial deduction operator in subjective logic expressed as:
ω_{Y‖X}^A = ω_{Y|X}^A ⊚ ω_X^A
where ω_X^A denotes the subjective opinion about X as expressed by source A, and the conditional opinion ω_{Y|X}^A generalizes the logical implication P → Q. The deduced marginal opinion about Y is denoted by ω_{Y‖X}^A. The case where ω_X^A is an absolute TRUE opinion about X is equivalent to source A saying that X is TRUE, and the case where ω_X^A is an absolute FALSE opinion about X is equivalent to source A saying that X is FALSE. The deduction operator ⊚ of subjective logic produces an absolute TRUE deduced opinion when the conditional opinion ω_{Y|X}^A is absolute TRUE and the antecedent opinion ω_X^A is absolute TRUE. Hence, subjective logic deduction represents a generalization of both modus ponens and the Law of total probability.
Alleged cases of failure
Philosophers and linguists have identified a variety of cases where modus ponens appears to fail. Vann McGee, for instance, argued that modus ponens can fail for conditionals whose consequents are themselves conditionals. The following is an example:
Either Shakespeare or Hobbes wrote Hamlet.
If either Shakespeare or Hobbes wrote Hamlet, then if Shakespeare did not do it, Hobbes did.
Therefore, if Shakespeare did not write Hamlet, Hobbes did it.
Since Shakespeare did write Hamlet, the first premise is true. The second premise is also true, since starting with a set of possible authors limited to just Shakespeare and Hobbes and eliminating one of them leaves only the other. However, the conclusion is doubtful, since ruling out Shakespeare as the author of Hamlet would leave numerous possible candidates, many of them more plausible alternatives than Hobbes (if the if-thens in the inference are read as material conditionals, the conclusion comes out true simply by virtue of the false antecedent. This is one of the paradoxes of material implication).
The general form of McGee-type counterexamples to modus ponens is simply P, P → (Q → R), therefore, Q → R; it is not essential that P be a disjunction, as in the example given. That these kinds of cases constitute failures of modus ponens remains a controversial view among logicians, but opinions vary on how the cases should be disposed of.
In deontic logic, some examples of conditional obligation also raise the possibility of modus ponens failure. These are cases where the conditional premise describes an obligation predicated on an immoral or imprudent action, e.g., "If Doe murders his mother, he ought to do so gently," for which the dubious unconditional conclusion would be "Doe ought to gently murder his mother." It would appear to follow that if Doe is in fact gently murdering his mother, then by modus ponens he is doing exactly what he should, unconditionally, be doing. Here again, modus ponens failure is not a popular diagnosis but is sometimes argued for.
Possible fallacies
The fallacy of affirming the consequent is a common misinterpretation of the modus ponens.
| Mathematics | Mathematical logic | null |
18901 | https://en.wikipedia.org/wiki/Modus%20tollens | Modus tollens | In propositional logic, modus tollens (MT), also known as modus tollendo tollens (Latin for "mode that by denying denies") and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens is a mixed hypothetical syllogism that takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument.
The history of the inference rule modus tollens goes back to antiquity. The first to explicitly describe the argument form modus tollens was Theophrastus.
Modus tollens is closely related to modus ponens. There are two similar, but invalid, forms of argument: affirming the consequent and denying the antecedent. | Mathematics | Mathematical logic | null |
18902 | https://en.wikipedia.org/wiki/Mathematician | Mathematician | A mathematician is someone who uses an extensive knowledge of mathematics in their work, typically to solve mathematical problems. Mathematicians are concerned with numbers, data, quantity, structure, space, models, and change.
History
One of the earliest known mathematicians was Thales of Miletus (c. 624 – c. 546 BC); he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem.
The number of known mathematicians grew when Pythagoras of Samos (c. 570 – c. 495 BC) established the Pythagorean school, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number". It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins.
The first woman mathematician recorded by history was Hypatia of Alexandria (c. 350–370 – 415). She succeeded her father as librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria punished her, presuming she was involved, by stripping her naked and scraping off her skin with clamshells (some say roofing tiles).
Science and mathematics in the Islamic world during the Middle Ages followed various models, and modes of funding varied, depending primarily on the scholars involved. It was extensive patronage and strong intellectual policies implemented by specific rulers that allowed scientific knowledge to develop in many areas. Funding for translation of scientific texts in other languages was ongoing throughout the reign of certain caliphs, and it turned out that certain scholars became experts in the works they translated, and in turn received further support for continuing to develop certain sciences. As these sciences received wider attention from the elite, more scholars were invited and funded to study particular sciences. An example of a translator and mathematician who benefited from this type of support was Al-Khawarizmi. A notable feature of many scholars working under Muslim rule in medieval times is that they were often polymaths. Examples include the work on optics, maths and astronomy of Ibn al-Haytham.
The Renaissance brought an increased emphasis on mathematics and science to Europe. During this period of transition from a mainly feudal and ecclesiastical culture to a predominantly secular one, many notable mathematicians had other occupations: Luca Pacioli (founder of accounting); Niccolò Fontana Tartaglia (notable engineer and bookkeeper); Gerolamo Cardano (earliest founder of probability and binomial expansion); Robert Recorde (physician) and François Viète (lawyer).
As time passed, many mathematicians gravitated towards universities. An emphasis on free thinking and experimentation had begun in Britain's oldest universities beginning in the seventeenth century at Oxford with the scientists Robert Hooke and Robert Boyle, and at Cambridge where Isaac Newton was Lucasian Professor of Mathematics. Moving into the 19th century, the objective of universities all across Europe evolved from teaching the "regurgitation of knowledge" to "encourag[ing] productive thinking." In 1810, Alexander von Humboldt convinced the king of Prussia, Frederick William III, to build a university in Berlin based on Friedrich Schleiermacher's liberal ideas; the goal was to demonstrate the process of the discovery of knowledge and to teach students to "take account of fundamental laws of science in all their thinking." Thus, seminars and laboratories started to evolve.
British universities of this period adopted some approaches familiar to the Italian and German universities, but as they already enjoyed substantial freedoms and autonomy the changes there had begun with the Age of Enlightenment, the same influences that inspired Humboldt. The Universities of Oxford and Cambridge emphasized the importance of research, arguably more authentically implementing Humboldt's idea of a university than even German universities, which were subject to state authority. Overall, science (including mathematics) became the focus of universities in the 19th and 20th centuries. Students could conduct research in seminars or laboratories and began to produce doctoral theses with more scientific content. According to Humboldt, the mission of the University of Berlin was to pursue scientific knowledge. The German university system fostered professional, bureaucratically regulated scientific research performed in well-equipped laboratories, instead of the kind of research done by private and individual scholars in Great Britain and France. In fact, Rüegg asserts that the German system is responsible for the development of the modern research university because it focused on the idea of "freedom of scientific research, teaching and study."
Required education
Mathematicians usually cover a breadth of topics within mathematics in their undergraduate education, and then proceed to specialize in topics of their own choice at the graduate level. In some universities, a qualifying exam serves to test both the breadth and depth of a student's understanding of mathematics; the students who pass are permitted to work on a doctoral dissertation.
Activities
Applied mathematics
Mathematicians involved with solving problems with applications in real life are called applied mathematicians. Applied mathematicians are mathematical scientists who, with their specialized knowledge and professional methodology, approach many of the imposing problems presented in related scientific fields. With professional focus on a wide variety of problems, theoretical systems, and localized constructs, applied mathematicians work regularly in the study and formulation of mathematical models. Mathematicians and applied mathematicians are considered to be two of the STEM (science, technology, engineering, and mathematics) careers.
The discipline of applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry; thus, "applied mathematics" is a mathematical science with specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on problems, often concrete but sometimes abstract. As professionals focused on problem solving, applied mathematicians look into the formulation, study, and use of mathematical models in science, engineering, business, and other areas of mathematical practice.
Pure mathematics
Pure mathematics is mathematics that studies entirely abstract concepts. From the eighteenth century onwards, this was a recognized category of mathematical activity, sometimes characterized as speculative mathematics, and at variance with the trend towards meeting the needs of navigation, astronomy, physics, economics, engineering, and other applications.
Another insightful view put forth is that pure mathematics is not necessarily applied mathematics: it is possible to study abstract entities with respect to their intrinsic nature, and not be concerned with how they manifest in the real world. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians.
To develop accurate models for describing the real world, many applied mathematicians draw on tools and techniques that are often considered to be "pure" mathematics. On the other hand, many pure mathematicians draw on natural and social phenomena as inspiration for their abstract research.
Mathematics teaching
Many professional mathematicians also engage in the teaching of mathematics. Duties may include:
teaching university mathematics courses;
supervising undergraduate and graduate research; and
serving on academic committees.
Consulting
Many careers in mathematics outside of universities involve consulting. For instance, actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, disability, or loss of property. Actuaries also address financial questions, including those involving the level of pension contributions required to produce a certain retirement income and the way in which a company should invest resources to maximize its return on investments in light of potential risk. Using their broad knowledge, actuaries help design and price insurance policies, pension plans, and other financial strategies in a manner which will help ensure that the plans are maintained on a sound financial basis.
As another example, mathematical finance will derive and extend the mathematical or numerical models without necessarily establishing a link to financial theory, taking observed market prices as input. Mathematical consistency is required, not compatibility with economic theory. Thus, for example, while a financial economist might study the structural reasons why a company may have a certain share price, a financial mathematician may take the share price as a given, and attempt to use stochastic calculus to obtain the corresponding value of derivatives of the stock (see: Valuation of options; Financial modeling).
Occupations
According to the Dictionary of Occupational Titles occupations in mathematics include the following.
Mathematician
Operations-Research Analyst
Mathematical Statistician
Mathematical Technician
Actuary
Applied Statistician
Weight Analyst
Prizes in mathematics
There is no Nobel Prize in mathematics, though sometimes mathematicians have won the Nobel Prize in a different field, such as economics or physics. Prominent prizes in mathematics include the Abel Prize, the Chern Medal, the Fields Medal, the Gauss Prize, the Nemmers Prize, the Balzan Prize, the Crafoord Prize, the Shaw Prize, the Steele Prize, the Wolf Prize, the Schock Prize, and the Nevanlinna Prize.
The American Mathematical Society, Association for Women in Mathematics, and other mathematical societies offer several prizes aimed at increasing the representation of women and minorities in the future of mathematics.
Mathematical autobiographies
Several well known mathematicians have written autobiographies in part to explain to a general audience what it is about mathematics that has made them want to devote their lives to its study. These provide some of the best glimpses into what it means to be a mathematician. The following list contains some works that are not autobiographies, but rather essays on mathematics and mathematicians with strong autobiographical elements.
The Book of My Life – Girolamo Cardano
A Mathematician's Apology - G.H. Hardy
A Mathematician's Miscellany (republished as Littlewood's miscellany) - J. E. Littlewood
I Am a Mathematician - Norbert Wiener
I Want to be a Mathematician - Paul R. Halmos
Adventures of a Mathematician - Stanislaw Ulam
Enigmas of Chance - Mark Kac
Random Curves - Neal Koblitz
Love and Math - Edward Frenkel
Mathematics Without Apologies - Michael Harris
| Mathematics | Basics | null |
18908 | https://en.wikipedia.org/wiki/Mersenne%20prime | Mersenne prime | In mathematics, a Mersenne prime is a prime number that is one less than a power of two. That is, it is a prime number of the form M_n = 2^n − 1 for some integer n. They are named after Marin Mersenne, a French Minim friar, who studied them in the early 17th century. If n is a composite number then so is 2^n − 1. Therefore, an equivalent definition of the Mersenne primes is that they are the prime numbers of the form M_p = 2^p − 1 for some prime p.
The exponents which give Mersenne primes are 2, 3, 5, 7, 13, 17, 19, 31, ... and the resulting Mersenne primes are 3, 7, 31, 127, 8191, 131071, 524287, 2147483647, ... .
Numbers of the form 2^n − 1 without the primality requirement may be called Mersenne numbers. Sometimes, however, Mersenne numbers are defined to have the additional requirement that n should be prime.
The smallest composite Mersenne number with prime exponent n is 2^11 − 1 = 2047 = 23 × 89.
Mersenne primes were studied in antiquity because of their close connection to perfect numbers: the Euclid–Euler theorem asserts a one-to-one correspondence between even perfect numbers and Mersenne primes. Many of the largest known primes are Mersenne primes because Mersenne numbers are easier to check for primality.
As of October 2024, 52 Mersenne primes are known. The largest known prime number, 2^136,279,841 − 1, is a Mersenne prime. Since 1997, all newly found Mersenne primes have been discovered by the Great Internet Mersenne Prime Search, a distributed computing project. In December 2020, a major milestone in the project was passed after all exponents below 100 million were checked at least once.
About Mersenne primes
Many fundamental questions about Mersenne primes remain unresolved. It is not even known whether the set of Mersenne primes is finite or infinite.
The Lenstra–Pomerance–Wagstaff conjecture claims that there are infinitely many Mersenne primes and predicts their order of growth and frequency: For every number n, there should on average be about e^γ · log_2(10) ≈ 5.92 primes p with n decimal digits (i.e. 10^(n−1) < p < 10^n) for which 2^p − 1 is prime. Here, γ is the Euler–Mascheroni constant.
It is also not known whether infinitely many Mersenne numbers with prime exponents are composite, although this would follow from widely believed conjectures about prime numbers, for example, the infinitude of Sophie Germain primes congruent to 3 (mod 4). For these primes p, 2p + 1 (which is also prime) will divide 2^p − 1; for example, 23 divides 2^11 − 1, 47 divides 2^23 − 1, 167 divides 2^83 − 1, 263 divides 2^131 − 1, 359 divides 2^179 − 1, 383 divides 2^191 − 1, 479 divides 2^239 − 1, and 503 divides 2^251 − 1. For these primes p, 2p + 1 is congruent to 7 mod 8, so 2 is a quadratic residue mod 2p + 1, and the multiplicative order of 2 mod 2p + 1 must divide ((2p + 1) − 1)/2 = p. Since p is a prime, it must be p or 1. However, it cannot be 1 since 2^1 − 1 = 1 and 1 has no prime factors, so it must be p. Hence, 2p + 1 divides 2^p − 1, and 2^p − 1 cannot be prime.
The first four Mersenne primes are M_2 = 3, M_3 = 7, M_5 = 31 and M_7 = 127, and because the first Mersenne prime starts at M_2, all Mersenne primes are congruent to 3 (mod 4). Other than M_0 = 0 and M_1 = 1, all other Mersenne numbers are also congruent to 3 (mod 4). Consequently, in the prime factorization of a Mersenne number (for n ≥ 2) there must be at least one prime factor congruent to 3 (mod 4).
A basic theorem about Mersenne numbers states that if M_p is prime, then the exponent p must also be prime. This follows from the identity
2^(ab) − 1 = (2^a − 1) · (1 + 2^a + 2^(2a) + ... + 2^((b−1)a)).
This rules out primality for Mersenne numbers with a composite exponent, such as M_4 = 2^4 − 1 = 15 = 3 × 5 = (2^2 − 1) × (1 + 2^2).
Though the above examples might suggest that M_p is prime for all primes p, this is not the case, and the smallest counterexample is the Mersenne number
M_11 = 2^11 − 1 = 2047 = 23 × 89.
The evidence at hand suggests that a randomly selected Mersenne number is much more likely to be prime than an arbitrary randomly selected odd integer of similar size. Nonetheless, prime values of M_p appear to grow increasingly sparse as p increases. For example, eight of the first 11 primes give rise to a Mersenne prime (the correct terms on Mersenne's original list), while M_p is prime for only 43 of the first two million prime numbers (up to 32,452,843).
Since Mersenne numbers grow very rapidly, the search for Mersenne primes is a difficult task, even though there is a simple efficient test to determine whether a given Mersenne number is prime: the Lucas–Lehmer primality test (LLT), which makes it much easier to test the primality of Mersenne numbers than that of most other numbers of the same size. The search for the largest known prime has somewhat of a cult following. Consequently, a large amount of computer power has been expended searching for new Mersenne primes, much of which is now done using distributed computing.
Arithmetic modulo a Mersenne number is particularly efficient on a binary computer, making them popular choices when a prime modulus is desired, such as the Park–Miller random number generator. To find a primitive polynomial of Mersenne number order requires knowing the factorization of that number, so Mersenne primes allow one to find primitive polynomials of very high order. Such primitive trinomials are used in pseudorandom number generators with very large periods such as the Mersenne twister, generalized shift register and Lagged Fibonacci generators.
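The reason for this efficiency is that reduction modulo 2^p − 1 needs no division: since 2^p ≡ 1 (mod 2^p − 1), the high bits of a number can simply be folded back onto the low bits. A minimal Python sketch of this folding (the function name is illustrative only):

```python
def mod_mersenne(x, p):
    """Reduce a non-negative integer x modulo 2**p - 1 using only shifts, masks and adds."""
    m = (1 << p) - 1
    while x > m:
        x = (x & m) + (x >> p)   # fold: 2**p is congruent to 1 modulo 2**p - 1
    return 0 if x == m else x

# The result agrees with ordinary modular reduction:
print(mod_mersenne(123456789, 13), 123456789 % ((1 << 13) - 1))  # both print 2037
```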
Perfect numbers
Mersenne primes are closely connected to perfect numbers. In the 4th century BC, Euclid proved that if 2^p − 1 is prime, then 2^(p−1)(2^p − 1) is a perfect number. In the 18th century, Leonhard Euler proved that, conversely, all even perfect numbers have this form. This is known as the Euclid–Euler theorem. It is unknown whether there are any odd perfect numbers.
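As a quick check of the correspondence, the first four Mersenne primes give the first four even perfect numbers: 2^1(2^2 − 1) = 6, 2^2(2^3 − 1) = 28, 2^4(2^5 − 1) = 496, and 2^6(2^7 − 1) = 8128.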
History
Mersenne primes take their name from the 17th-century French scholar Marin Mersenne, who compiled what was supposed to be a list of Mersenne primes with exponents up to 257. The exponents listed by Mersenne in 1644 were as follows:
2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257.
His list replicated the known primes of his time with exponents up to 19. His next entry, 31, was correct, but the list then became largely incorrect, as Mersenne mistakenly included M_67 and M_257 (which are composite) and omitted M_61, M_89, and M_107 (which are prime). Mersenne gave little indication of how he came up with his list.
Édouard Lucas proved in 1876 that M_127 is indeed prime, as Mersenne claimed. This was the largest known prime number for 75 years until 1951, when Aimé Ferrier found a larger prime, (2^148 + 1)/17, using a desk calculating machine. M_61 was determined to be prime in 1883 by Ivan Mikheevich Pervushin, though Mersenne claimed it was composite, and for this reason it is sometimes called Pervushin's number. This was the second-largest known prime number, and it remained so until 1911. Lucas had shown another error in Mersenne's list in 1876 by demonstrating that M_67 was composite without finding a factor. No factor was found until a famous talk by Frank Nelson Cole in 1903. Without speaking a word, he went to a blackboard and raised 2 to the 67th power, then subtracted one, resulting in the number 147,573,952,589,676,412,927. On the other side of the board, he multiplied 193,707,721 × 761,838,257,287 and got the same number, then returned to his seat (to applause) without speaking. He later said that the result had taken him "three years of Sundays" to find. A correct list of all Mersenne primes in this number range was completed and rigorously verified only about three centuries after Mersenne published his list.
Searching for Mersenne primes
Fast algorithms for finding Mersenne primes are available, and the seven largest known prime numbers are Mersenne primes.
The first four Mersenne primes M_2 = 3, M_3 = 7, M_5 = 31 and M_7 = 127 were known in antiquity. The fifth, M_13 = 8191, was discovered anonymously before 1461; the next two (M_17 and M_19) were found by Pietro Cataldi in 1588. After nearly two centuries, M_31 was verified to be prime by Leonhard Euler in 1772. The next (in historical, not numerical order) was M_127, found by Édouard Lucas in 1876, then M_61 by Ivan Mikheevich Pervushin in 1883. Two more (M_89 and M_107) were found early in the 20th century, by R. E. Powers in 1911 and 1914, respectively.
The most efficient method presently known for testing the primality of Mersenne numbers is the Lucas–Lehmer primality test. Specifically, it can be shown that for prime p > 2, M_p = 2^p − 1 is prime if and only if M_p divides S_(p−2), where S_0 = 4 and S_k = (S_(k−1))^2 − 2 for k > 0.
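This recurrence translates directly into a few lines of Python (an illustrative sketch; squaring modulo M_p at each step keeps the intermediate values small):

```python
def lucas_lehmer(p):
    """Return True if M_p = 2**p - 1 is prime, for an odd prime p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents 3, 5, 7 and 13 give Mersenne primes; 11 does not (2047 = 23 * 89).
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```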
During the era of manual calculation, all previously untested exponents up to and including 257 were tested with the Lucas–Lehmer test and found to be composite. A notable contribution was made by retired Yale physics professor Horace Scudder Uhler, who did the calculations for exponents 157, 167, 193, 199, 227, and 229. Unfortunately for those investigators, the interval they were testing contains the largest known relative gap between Mersenne primes: the next Mersenne prime exponent, 521, would turn out to be more than four times as large as the previous record of 127.
The search for Mersenne primes was revolutionized by the introduction of the electronic digital computer. Alan Turing searched for them on the Manchester Mark 1 in 1949, but the first successful identification of a Mersenne prime, M_521, by this means was achieved at 10:00 pm on January 30, 1952, using the U.S. National Bureau of Standards Western Automatic Computer (SWAC) at the Institute for Numerical Analysis at the University of California, Los Angeles (UCLA), under the direction of D. H. Lehmer, with a computer search program written and run by Prof. R. M. Robinson. It was the first Mersenne prime to be identified in thirty-eight years; the next one, M_607, was found by the computer a little less than two hours later. Three more (M_1279, M_2203, and M_2281) were found by the same program in the next several months. M_4423 was the first prime discovered with more than 1000 digits, M_44497 was the first with more than 10,000, and M_6972593 was the first with more than a million. In general, the number of digits in the decimal representation of M_n equals ⌊n × log10(2)⌋ + 1, where ⌊x⌋ denotes the floor function (or equivalently ⌊log10(M_n)⌋ + 1).
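The digit-count formula is easy to apply directly; the following small Python sketch (double-precision arithmetic is adequate at these sizes) recovers digit counts quoted in this article:

```python
import math

def mersenne_digits(n):
    # Number of decimal digits of 2**n - 1, via floor(n * log10(2)) + 1.
    return math.floor(n * math.log10(2)) + 1

print(mersenne_digits(127))        # 39
print(mersenne_digits(136279841))  # 41024320, matching the 2024 record
```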
In September 2008, mathematicians at UCLA participating in the Great Internet Mersenne Prime Search (GIMPS) won part of a $100,000 prize from the Electronic Frontier Foundation for their discovery of a very nearly 13-million-digit Mersenne prime. The prize, finally confirmed in October 2009, is for the first known prime with at least 10 million digits. The prime was found on a Dell OptiPlex 745 on August 23, 2008. This was the eighth Mersenne prime discovered at UCLA.
On April 12, 2009, a GIMPS server log reported that a 47th Mersenne prime had possibly been found. The find was first noticed on June 4, 2009, and verified a week later. The prime is 2^42,643,801 − 1. Although it is chronologically the 47th Mersenne prime to be discovered, it is smaller than the largest known at the time, which was the 45th to be discovered.
On January 25, 2013, Curtis Cooper, a mathematician at the University of Central Missouri, discovered a 48th Mersenne prime, 2^57,885,161 − 1 (a number with 17,425,170 digits), as a result of a search executed by a GIMPS server network.
On January 19, 2016, Cooper published his discovery of a 49th Mersenne prime, 2^74,207,281 − 1 (a number with 22,338,618 digits), as a result of a search executed by a GIMPS server network. This was the fourth Mersenne prime discovered by Cooper and his team in the past ten years.
On September 2, 2016, the Great Internet Mersenne Prime Search finished verifying all tests below M_37,156,667, thus officially confirming its position as the 45th Mersenne prime.
On January 3, 2018, it was announced that Jonathan Pace, a 51-year-old electrical engineer living in Germantown, Tennessee, had found a 50th Mersenne prime, 2^77,232,917 − 1 (a number with 23,249,425 digits), as a result of a search executed by a GIMPS server network. The discovery was made by a computer in the offices of a church in the same town.
On December 21, 2018, it was announced that The Great Internet Mersenne Prime Search (GIMPS) discovered a new prime number, 2^82,589,933 − 1, having 24,862,048 digits. A computer volunteered by Patrick Laroche from Ocala, Florida made the find on December 7, 2018.
In late 2020, GIMPS began using a new technique to rule out potential Mersenne primes called the Probable prime (PRP) test, based on development from Robert Gerbicz in 2017, and a simple way to verify tests developed by Krzysztof Pietrzak in 2018. Due to the low error rate and ease of proof, this nearly halved the computing time to rule out potential primes over the Lucas-Lehmer test (as two users would no longer have to perform the same test to confirm the other's result), although exponents passing the PRP test still require one to confirm their primality.
On October 12, 2024, a user named Luke Durant from San Jose, California, found the current largest known Mersenne prime, 2^136,279,841 − 1, having 41,024,320 digits. This marks the first Mersenne prime with an exponent surpassing 8 digits. This was announced on October 21, 2024.
Theorems about Mersenne numbers
Mersenne numbers are 0, 1, 3, 7, 15, 31, 63, ... .
If a and b are natural numbers such that a^b − 1 is prime, then a = 2 or b = 1.
Proof: a ≡ 1 (mod a − 1). Then a^b ≡ 1 (mod a − 1), so a^b − 1 ≡ 0 (mod a − 1). Thus a − 1 divides a^b − 1. However, a^b − 1 is prime, so a − 1 = a^b − 1 or a − 1 = 1. In the former case, a = a^b, hence a = 0 or a = 1 (which is a contradiction, as neither −1 nor 0 is prime) or b = 1. In the latter case, a = 2 or a = 1. If a = 1, however, 1^b − 1 = 0, which is not prime. Therefore, a = 2.
If 2^p − 1 is prime, then p is prime.
Proof: Suppose that p is composite, hence it can be written p = ab with a and b greater than 1. Then 2^p − 1 = 2^(ab) − 1 = (2^a − 1)(2^(a(b−1)) + 2^(a(b−2)) + ... + 2^a + 1), so 2^p − 1 is composite. By contraposition, if 2^p − 1 is prime then p is prime.
If p is an odd prime, then every prime q that divides 2^p − 1 must be 1 plus a multiple of 2p. This holds even when 2^p − 1 is prime.
For example, 2^5 − 1 = 31 is prime, and 31 = 1 + 3 × (2 × 5). A composite example is 2^11 − 1 = 23 × 89, where 23 = 1 + (2 × 11) and 89 = 1 + 4 × (2 × 11).
Proof: By Fermat's little theorem, q is a factor of 2^(q−1) − 1. Since q is a factor of 2^p − 1, for all positive integers c, q is also a factor of 2^(pc) − 1. Since p is prime and q is not a factor of 2^1 − 1, p is also the smallest positive integer x such that q is a factor of 2^x − 1. As a result, for all positive integers x, q is a factor of 2^x − 1 if and only if p is a factor of x. Therefore, since q is a factor of 2^(q−1) − 1, p is a factor of q − 1, so q ≡ 1 (mod p). Furthermore, since q is a factor of 2^p − 1, which is odd, q is odd. Therefore, q ≡ 1 (mod 2p).
This fact leads to a proof of Euclid's theorem, which asserts the infinitude of primes, distinct from the proof written by Euclid: for every odd prime p, all primes dividing 2^p − 1 are larger than p; thus there are always larger primes than any particular prime.
It follows from this fact that for every prime p > 2, there is at least one prime of the form 2kp + 1 less than or equal to M_p, for some integer k.
If p is an odd prime, then every prime q that divides 2^p − 1 is congruent to ±1 (mod 8).
Proof: 2^(p+1) ≡ 2 (mod q), so 2^((p+1)/2) is a square root of 2 mod q. By quadratic reciprocity, every prime modulus in which the number 2 has a square root is congruent to ±1 (mod 8).
A Mersenne prime cannot be a Wieferich prime.
Proof: We show that if p = 2^m − 1 is a Mersenne prime, then the congruence 2^(p−1) ≡ 1 (mod p^2) does not hold. By Fermat's little theorem, m divides p − 1, so one can write p − 1 = mλ. If the given congruence were satisfied, then p^2 would divide 2^(mλ) − 1 = (2^m − 1)(1 + 2^m + 2^(2m) + ... + 2^((λ−1)m)), and therefore p = 2^m − 1 would divide the sum 1 + 2^m + ... + 2^((λ−1)m), which is congruent to λ (mod 2^m − 1). Hence 2^m − 1 divides λ, and therefore λ ≥ 2^m − 1. This leads to p − 1 = mλ ≥ m(2^m − 1) > p − 1, which is impossible.
If m and n are natural numbers then 2^m − 1 and 2^n − 1 are coprime if and only if m and n are coprime. Consequently, a prime number divides at most one prime-exponent Mersenne number. That is, the set of pernicious Mersenne numbers is pairwise coprime.
If p and 2p + 1 are both prime (meaning that p is a Sophie Germain prime), and p is congruent to 3 (mod 4), then 2p + 1 divides 2^p − 1.
Example: 11 and 23 are both prime, and 11 = 2 × 4 + 3, so 23 divides 2^11 − 1.
Proof: Let q be 2p + 1. By Fermat's little theorem, 2^(2p) ≡ 1 (mod q), so either 2^p ≡ 1 (mod q) or 2^p ≡ −1 (mod q). Supposing the latter true, then 2^(p+1) = (2^((p+1)/2))^2 ≡ −2 (mod q), so −2 would be a quadratic residue mod q. However, since p is congruent to 3 (mod 4), q is congruent to 7 (mod 8) and therefore 2 is a quadratic residue mod q. Also since q is congruent to 3 (mod 4), −1 is a quadratic nonresidue mod q, so −2 is the product of a residue and a nonresidue and hence it is a nonresidue, which is a contradiction. Hence, the former congruence must be true and 2p + 1 divides M_p.
All composite divisors of prime-exponent Mersenne numbers are strong pseudoprimes to the base 2.
With the exception of 1, a Mersenne number cannot be a perfect power. That is, in accordance with Mihăilescu's theorem, the equation 2^m − 1 = n^k has no solutions where m, n, and k are integers with m > 1 and k > 1.
The Mersenne number sequence is a member of the family of Lucas sequences. It is U_n(3, 2). That is, the nth Mersenne number satisfies M_n = 3M_(n−1) − 2M_(n−2) with M_0 = 0 and M_1 = 1.
List of known Mersenne primes
As of October 2024, the 52 known Mersenne primes are 2^p − 1 for the following p:
2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609, 57885161, 74207281, 77232917, 82589933, 136279841.
Factorization of composite Mersenne numbers
Since they are prime numbers, Mersenne primes are divisible only by 1 and themselves. However, not all Mersenne numbers are Mersenne primes. Mersenne numbers are very good test cases for the special number field sieve algorithm, so often the largest number factorized with this algorithm has been a Mersenne number. The record-holder was factored with a variant of the special number field sieve that allows the factorization of several numbers at once. See integer factorization records for links to more information. The special number field sieve can factorize numbers with more than one large factor. If a number has only one very large factor then other algorithms can factorize larger numbers by first finding small factors and then running a primality test on the cofactor. The largest completely factored Mersenne number (with probable prime factors allowed) has a cofactor that is a 3,829,294-digit probable prime; it was discovered by a GIMPS participant with the nickname "Funky Waddle". The Mersenne number M_1277 is the smallest composite Mersenne number with no known factors; it has no prime factors below 2^68, and is very unlikely to have any factors below 10^65 (approximately 2^216).
The table below shows factorizations for the first 20 composite Mersenne numbers .
The number of factors for each of the first 500 Mersenne numbers has also been tabulated.
Mersenne numbers in nature and elsewhere
In the mathematical problem Tower of Hanoi, solving a puzzle with an n-disc tower requires M_n = 2^n − 1 steps, assuming no mistakes are made. The number of rice grains on the whole chessboard in the wheat and chessboard problem is 2^64 − 1 = 18,446,744,073,709,551,615 = M_64.
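The Mersenne-number move count follows from the standard recursion (move n − 1 discs aside, move the largest disc, then move the n − 1 discs back on top), as the following small Python sketch illustrates:

```python
def hanoi_moves(n):
    """Minimum number of moves for an n-disc Tower of Hanoi."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

print([hanoi_moves(n) for n in range(1, 8)])  # [1, 3, 7, 15, 31, 63, 127]
print(hanoi_moves(64))                        # 18446744073709551615, the chessboard total
```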
The asteroid with minor planet number 8191 is named 8191 Mersenne after Marin Mersenne, because 8191 is a Mersenne prime.
In geometry, an integer right triangle that is primitive and has its even leg a power of 2 (≥ 4) generates a unique right triangle such that its inradius is always a Mersenne number. For example, if the even leg is 2^(n+1) then, because the triangle is primitive, this constrains the odd leg to be 4^n − 1, the hypotenuse to be 4^n + 1, and its inradius to be 2^n − 1.
Mersenne–Fermat primes
A Mersenne–Fermat number is defined as (2^(p^r) − 1)/(2^(p^(r−1)) − 1), with p prime and r a natural number, and can be written as MF(p, r). When r = 1, it is a Mersenne number. When p = 2, it is a Fermat number. The only known Mersenne–Fermat primes with r > 1 are
and .
In fact, MF(p, r) = Φ_(p^r)(2), where Φ is the cyclotomic polynomial.
Generalizations
The simplest generalized Mersenne primes are prime numbers of the form f(2^n), where f(x) is a low-degree polynomial with small integer coefficients. An example is 2^64 − 2^32 + 1, in this case, n = 32 and f(x) = x^2 − x + 1; another example is 2^192 − 2^64 − 1, in this case, n = 64 and f(x) = x^3 − x − 1.
It is also natural to try to generalize primes of the form 2^n − 1 to primes of the form b^n − 1 (for b ≠ 2 and n > 1). However (see also theorems above), b^n − 1 is always divisible by b − 1, so unless the latter is a unit, the former is not a prime. This can be remedied by allowing b to be an algebraic integer instead of an integer:
Complex numbers
In the ring of integers (on real numbers), if b − 1 is a unit, then b is either 2 or 0. But 2^n − 1 are the usual Mersenne primes, and the formula 0^n − 1 does not lead to anything interesting (since it is always −1 for all n > 0). Thus, we can regard a ring of "integers" on complex numbers instead of real numbers, like Gaussian integers and Eisenstein integers.
Gaussian Mersenne primes
If we regard the ring of Gaussian integers, we get the case b = 1 + i and b = 1 − i, and can ask (WLOG) for which p the number (1 + i)^p − 1 is a Gaussian prime, which will then be called a Gaussian Mersenne prime.
(1 + i)^p − 1 is a Gaussian prime for the following p:
2, 3, 5, 7, 11, 19, 29, 47, 73, 79, 113, 151, 157, 163, 167, 239, 241, 283, 353, 367, 379, 457, 997, 1367, 3041, 10141, 14699, 27529, 49207, 77291, 85237, 106693, 160423, 203789, 364289, 991961, 1203793, 1667321, 3704053, 4792057, ...
Like the sequence of exponents for usual Mersenne primes, this sequence contains only (rational) prime numbers.
As for all Gaussian primes, the norms (that is, squares of absolute values) of these numbers are rational primes:
5, 13, 41, 113, 2113, 525313, 536903681, 140737471578113, ... .
Eisenstein Mersenne primes
One may encounter cases where such a Mersenne prime is also an Eisenstein prime, being of the form (1 − ω)^p − 1, where ω is a primitive cube root of unity. In these cases, such numbers are called Eisenstein Mersenne primes.
(1 − ω)^p − 1 is an Eisenstein prime for the following p:
2, 5, 7, 11, 17, 19, 79, 163, 193, 239, 317, 353, 659, 709, 1049, 1103, 1759, 2029, 5153, 7541, 9049, 10453, 23743, 255361, 534827, 2237561, ...
The norms (that is, squares of absolute values) of these Eisenstein primes are rational primes:
7, 271, 2269, 176419, 129159847, 1162320517, ...
Divide an integer
Repunit primes
The other way to deal with the fact that b^n − 1 is always divisible by b − 1 is to simply take out this factor and ask which values of n make
(b^n − 1)/(b − 1)
be prime. (The integer b can be either positive or negative.) If, for example, we take b = 10, we get n values of:
2, 19, 23, 317, 1031, 49081, 86453, 109297, 270343, ... ,corresponding to primes 11, 1111111111111111111, 11111111111111111111111, ... .
These primes are called repunit primes. Another example is when we take b = −12, we get n values of:
2, 5, 11, 109, 193, 1483, 11353, 21419, 21911, 24071, 106859, 139739, ... ,corresponding to primes −11, 19141, 57154490053, ....
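Searches of this kind are straightforward to reproduce; the sketch below (assuming the SymPy library is available, and taking the absolute value so that negative bases are handled uniformly) recovers the smallest exponents for the two bases listed above:

```python
from sympy import isprime

def repunit_exponents(b, limit):
    """Exponents n <= limit for which (b**n - 1) // (b - 1) is prime (up to sign)."""
    return [n for n in range(2, limit + 1) if isprime(abs((b**n - 1) // (b - 1)))]

print(repunit_exponents(10, 330))  # [2, 19, 23, 317]
print(repunit_exponents(-12, 12))  # [2, 5, 11]
```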
It is a conjecture that for every integer b which is not a perfect power, there are infinitely many values of n such that (b^n − 1)/(b − 1) is prime. (When b is a perfect power, it can be shown that there is at most one value of n such that (b^n − 1)/(b − 1) is prime.)
Least n such that (b^n − 1)/(b − 1) is prime are (starting with b = 2, 0 if no such n exists)
2, 3, 2, 3, 2, 5, 3, 0, 2, 17, 2, 5, 3, 3, 2, 3, 2, 19, 3, 3, 2, 5, 3, 0, 7, 3, 2, 5, 2, 7, 0, 3, 13, 313, 2, 13, 3, 349, 2, 3, 2, 5, 5, 19, 2, 127, 19, 0, 3, 4229, 2, 11, 3, 17, 7, 3, 2, 3, 2, 7, 3, 5, 0, 19, 2, 19, 5, 3, 2, 3, 2, ...
For negative bases b, they are (starting with b = −2, 0 if no such n exists)
3, 2, 2, 5, 2, 3, 2, 3, 5, 5, 2, 3, 2, 3, 3, 7, 2, 17, 2, 3, 3, 11, 2, 3, 11, 0, 3, 7, 2, 109, 2, 5, 3, 11, 31, 5, 2, 3, 53, 17, 2, 5, 2, 103, 7, 5, 2, 7, 1153, 3, 7, 21943, 2, 3, 37, 53, 3, 17, 2, 7, 2, 3, 0, 19, 7, 3, 2, 11, 3, 5, 2, ... (notice this OEIS sequence does not allow )
Least base b such that (b^p − 1)/(b − 1) is prime (where p is the nth prime) are
2, 2, 2, 2, 5, 2, 2, 2, 10, 6, 2, 61, 14, 15, 5, 24, 19, 2, 46, 3, 11, 22, 41, 2, 12, 22, 3, 2, 12, 86, 2, 7, 13, 11, 5, 29, 56, 30, 44, 60, 304, 5, 74, 118, 33, 156, 46, 183, 72, 606, 602, 223, 115, 37, 52, 104, 41, 6, 338, 217, ...
For negative bases b, they are
3, 2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ...
Other generalized Mersenne primes
Another generalized Mersenne number is
(a^n − b^n)/(a − b)
with a and b any coprime integers, a > 1 and −a < b < a. (Since a^n − b^n is always divisible by a − b, the division is necessary for there to be any chance of finding prime numbers.) We can ask which n makes this number prime. It can be shown that such n must be primes themselves or equal to 4, and n can be 4 if and only if a + b = 1 and a^2 + b^2 is prime. It is a conjecture that for any pair (a, b) such that a and b are not both perfect r-th powers for any r and −4ab is not a perfect fourth power, there are infinitely many values of n such that (a^n − b^n)/(a − b) is prime. However, this has not been proved for any single value of (a, b).
*Note: if and is even, then the numbers are not included in the corresponding OEIS sequence.
When a = b + 1, it is (b + 1)^n − b^n, a difference of two consecutive perfect nth powers, and if a^n − b^n is prime, then a must be b + 1, because it is divisible by a − b.
Least n such that (b + 1)^n − b^n is prime are (starting with b = 1)
2, 2, 2, 3, 2, 2, 7, 2, 2, 3, 2, 17, 3, 2, 2, 5, 3, 2, 5, 2, 2, 229, 2, 3, 3, 2, 3, 3, 2, 2, 5, 3, 2, 3, 2, 2, 3, 3, 2, 7, 2, 3, 37, 2, 3, 5, 58543, 2, 3, 2, 2, 3, 2, 2, 3, 2, 5, 3, 4663, 54517, 17, 3, 2, 5, 2, 3, 3, 2, 2, 47, 61, 19, ...
Least b such that (b + 1)^p − b^p is prime (where p is the nth prime) are
1, 1, 1, 1, 5, 1, 1, 1, 5, 2, 1, 39, 6, 4, 12, 2, 2, 1, 6, 17, 46, 7, 5, 1, 25, 2, 41, 1, 12, 7, 1, 7, 327, 7, 8, 44, 26, 12, 75, 14, 51, 110, 4, 14, 49, 286, 15, 4, 39, 22, 109, 367, 22, 67, 27, 95, 80, 149, 2, 142, 3, 11, ...
| Mathematics | Prime numbers | null |
18909 | https://en.wikipedia.org/wiki/Magnesium | Magnesium | Magnesium is a chemical element; it has symbol Mg and atomic number 12. It is a shiny gray metal having a low density, low melting point and high chemical reactivity. Like the other alkaline earth metals (group 2 of the periodic table) it occurs naturally only in combination with other elements and almost always has an oxidation state of +2. It reacts readily with air to form a thin passivation coating of magnesium oxide that inhibits further corrosion of the metal. The free metal burns with a brilliant-white light. The metal is obtained mainly by electrolysis of magnesium salts obtained from brine. It is less dense than aluminium and is used primarily as a component in strong and lightweight alloys that contain aluminium.
In the cosmos, magnesium is produced in large, aging stars by the sequential addition of three helium nuclei to a carbon nucleus. When such stars explode as supernovas, much of the magnesium is expelled into the interstellar medium where it may recycle into new star systems. Magnesium is the eighth most abundant element in the Earth's crust and the fourth most common element in the Earth (after iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is the third most abundant element dissolved in seawater, after sodium and chlorine.
This element is the eleventh most abundant element by mass in the human body and is essential to all cells and some 300 enzymes. Magnesium ions interact with polyphosphate compounds such as ATP, DNA, and RNA. Hundreds of enzymes require magnesium ions to function. Magnesium compounds are used medicinally as common laxatives and antacids (such as milk of magnesia), and to stabilize abnormal nerve excitation or blood vessel spasm in such conditions as eclampsia.
Characteristics
Physical properties
Elemental magnesium is a gray-white lightweight metal, two-thirds the density of aluminium. Magnesium has the lowest melting point (about 650 °C) and the lowest boiling point (about 1,090 °C) of all the alkaline earth metals.
Pure polycrystalline magnesium is brittle and easily fractures along shear bands. It becomes much more malleable when alloyed with small amounts of other metals, such as 1% aluminium. The malleability of polycrystalline magnesium can also be significantly improved by reducing its grain size to about 1 μm or less.
When finely powdered, magnesium reacts with water to produce hydrogen gas:
Mg(s) + 2 H2O(g) → Mg(OH)2(aq) + H2(g) + 1203.6 kJ/mol
However, this reaction is much less dramatic than the reactions of the alkali metals with water, because the magnesium hydroxide builds up on the surface of the magnesium metal and inhibits further reaction.
Chemical properties
Oxidation
The principal property of magnesium metal is its reducing power. One hint is that it tarnishes slightly when exposed to air, although, unlike the heavier alkaline earth metals, an oxygen-free environment is unnecessary for storage because magnesium is protected by a thin layer of oxide that is fairly impermeable and difficult to remove.
Direct reaction of magnesium with air or oxygen at ambient pressure forms only the "normal" oxide MgO. However, this oxide may be combined with hydrogen peroxide to form magnesium peroxide, MgO2, and at low temperature the peroxide may be further reacted with ozone to form magnesium superoxide Mg(O2)2.
Magnesium reacts with nitrogen in the solid state if it is powdered and heated to just below the melting point, forming magnesium nitride, Mg3N2.
Magnesium reacts with water at room temperature, though it reacts much more slowly than calcium, a similar group 2 metal. When submerged in water, hydrogen bubbles form slowly on the surface of the metal; this reaction happens much more rapidly with powdered magnesium. The reaction also occurs faster at higher temperatures. Magnesium's reversible reaction with water can be harnessed to store energy and run a magnesium-based engine. Magnesium also reacts exothermically with most acids such as hydrochloric acid (HCl), producing magnesium chloride and hydrogen gas, similar to the HCl reaction with aluminium, zinc, and many other metals. Although it is difficult to ignite in mass or bulk, magnesium metal will ignite.
Magnesium may also be used as an igniter for thermite, a mixture of aluminium and iron oxide powder that ignites only at a very high temperature.
Organic chemistry
Organomagnesium compounds are widespread in organic chemistry. They are commonly found as Grignard reagents, formed by reaction of magnesium with haloalkanes. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. The Grignard reagents function as a common nucleophile, attacking the electrophilic group such as the carbon atom that is present within the polar bond of a carbonyl group.
A prominent organomagnesium reagent beyond Grignard reagents is magnesium anthracene, which is used as a source of highly active magnesium. The related butadiene-magnesium adduct serves as a source for the butadiene dianion.
Complexes of dimagnesium(I) have been observed.
Detection in solution
The presence of magnesium ions can be detected by the addition of ammonium chloride, ammonium hydroxide and monosodium phosphate to an aqueous or dilute HCl solution of the salt. The formation of a white precipitate indicates the presence of magnesium ions.
Azo violet dye can also be used, turning deep blue in the presence of an alkaline solution of magnesium salt. The color is due to the adsorption of azo violet by Mg(OH)2.
Forms
Alloys
As of 2013, consumption of magnesium alloys was less than one million tonnes per year, compared with 50 million tonnes of aluminium alloys. Their use has been historically limited by the tendency of Mg alloys to corrode, creep at high temperatures, and combust.
Corrosion
In magnesium alloys, the presence of iron, nickel, copper, or cobalt strongly activates corrosion. In more than trace amounts, these metals precipitate as intermetallic compounds, and the precipitate locales function as active cathodic sites that reduce water, causing the loss of magnesium. Controlling the quantity of these metals improves corrosion resistance. Sufficient manganese overcomes the corrosive effects of iron. This requires precise control over composition, increasing costs. Adding a cathodic poison captures atomic hydrogen within the structure of a metal. This prevents the formation of free hydrogen gas, an essential factor of corrosive chemical processes. The addition of about one in three hundred parts arsenic reduces the corrosion rate of magnesium in a salt solution by a factor of nearly ten.
High-temperature creep and flammability
Magnesium's tendency to creep (gradually deform) at high temperatures is greatly reduced by alloying with zinc and rare-earth elements. Flammability is significantly reduced by a small amount of calcium in the alloy. Rare-earth additions may also make it possible to manufacture magnesium alloys that do not catch fire at temperatures above magnesium's liquidus, in some cases potentially approaching magnesium's boiling point.
Compounds
Magnesium forms a variety of compounds important to industry and biology, including magnesium carbonate, magnesium chloride, magnesium citrate, magnesium hydroxide (milk of magnesia), magnesium oxide, magnesium sulfate, and magnesium sulfate heptahydrate (Epsom salts).
As recently as 2020, magnesium hydride was under investigation as a way to store hydrogen.
Isotopes
Magnesium has three stable isotopes: 24Mg, 25Mg and 26Mg. All are present in significant amounts in nature (see table of isotopes above). About 79% of Mg is 24Mg. The isotope 28Mg is radioactive and in the 1950s to 1970s was produced by several nuclear power plants for use in scientific experiments. This isotope has a relatively short half-life (21 hours) and its use was limited by shipping times.
The nuclide 26Mg has found application in isotopic geology, similar to that of aluminium. 26Mg is a radiogenic daughter product of 26Al, which has a half-life of 717,000 years. Excessive quantities of stable 26Mg have been observed in the Ca-Al-rich inclusions of some carbonaceous chondrite meteorites. This anomalous abundance is attributed to the decay of its parent 26Al in the inclusions, and researchers conclude that such meteorites were formed in the solar nebula before the 26Al had decayed. These are among the oldest objects in the Solar System and contain preserved information about its early history.
It is conventional to plot 26Mg/24Mg against an Al/Mg ratio. In an isochron dating plot, the Al/Mg ratio plotted is 27Al/24Mg. The slope of the isochron has no age significance, but indicates the initial 26Al/27Al ratio in the sample at the time when the systems were separated from a common reservoir.
Production
Occurrence
Magnesium is the eighth-most-abundant element in the Earth's crust by mass and tied in seventh place with iron in molarity. It is found in large deposits of magnesite, dolomite, and other minerals, and in mineral waters, where magnesium ion is soluble.
Although magnesium is found in more than 60 minerals, only dolomite, magnesite, brucite, carnallite, talc, and olivine are of commercial importance.
The cation is the second-most-abundant cation in seawater (about the mass of sodium ions in a given sample), which makes seawater and sea salt attractive commercial sources for Mg. To extract the magnesium, calcium hydroxide is added to the seawater to precipitate magnesium hydroxide.
Mg2+(aq) + Ca(OH)2(s) → Mg(OH)2(s) + Ca2+(aq)
Magnesium hydroxide (brucite) is poorly soluble in water and can be collected by filtration. It reacts with hydrochloric acid to form magnesium chloride.
Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O
From magnesium chloride, electrolysis produces magnesium.
Production quantities
World production was approximately 1,100 kt in 2017, with the bulk being produced in China (930 kt) and Russia (60 kt). In the 20th century, the United States was the major world supplier of this metal, providing 45% of world production as recently as 1995. Since China mastered the Pidgeon process, the US market share has fallen to 7%, with a single US producer left as of 2013: US Magnesium, a Renco Group company located on the shores of the Great Salt Lake.
In September 2021, China took steps to reduce production of magnesium as a result of a government initiative to reduce energy availability for manufacturing industries, leading to a significant price increase.
Pidgeon and Bolzano processes
The Pidgeon process and the Bolzano process are similar. In both, magnesium oxide is the precursor to magnesium metal. The magnesium oxide is produced as a solid solution with calcium oxide by calcining the mineral dolomite, which is a solid solution of calcium and magnesium carbonates:
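MgCO3·CaCO3(s) → MgO·CaO(s) + 2 CO2(g)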
Reduction occurs at high temperatures with silicon. A ferrosilicon alloy is used rather than pure silicon, as it is more economical. The iron component takes no part in the reaction, which has the simplified equation:
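2 MgO(s) + Si(s) → 2 Mg(g) + SiO2(s)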
The calcium oxide combines with silicon as the oxygen scavenger, yielding the very stable calcium silicate. The Mg/Ca ratio of the precursors can be adjusted by the addition of MgO or CaO.
The Pidgeon and Bolzano processes differ in the details of the heating and the configuration of the reactor. Both generate gaseous Mg that is condensed and collected. The Pidgeon process dominates worldwide production. The Pidgeon method is less technologically complex, and because of the distillation/vapour-deposition conditions, a high-purity product is easily achievable. China is almost completely reliant on the silicothermic Pidgeon process.
Dow process
Besides the Pidgeon process, the second most used process for magnesium production is electrolysis. This is a two-step process: the first step is to prepare a feedstock containing magnesium chloride, and the second is to dissociate the compound in electrolytic cells into magnesium metal and chlorine gas. The basic reaction is as follows:
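MgCl2 → Mg + Cl2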
This reaction is operated at temperatures between 680 and 750 °C.
The magnesium chloride can be obtained using the Dow process, which mixes seawater and dolomite in a flocculator, or by dehydration of magnesium chloride brines. The electrolytic cells are partially submerged in a molten salt electrolyte to which the produced magnesium chloride is added in concentrations between 6 and 18%. The process has notable disadvantages, including the production of harmful chlorine gas and a very energy-intensive overall reaction, which create environmental risks. Compared with the electrolytic method, the Pidgeon process is simpler, has a shorter plant construction period, consumes less power, and yields magnesium of good overall quality.
In the United States, magnesium was once obtained principally with the Dow process in Corpus Christi, Texas, by electrolysis of fused magnesium chloride from brine and seawater. A saline solution containing Mg2+ ions is first treated with lime (calcium oxide) and the precipitated magnesium hydroxide is collected:
Mg2+(aq) + CaO(s) + H2O(l) → Ca2+(aq) + Mg(OH)2(s)
The hydroxide is then converted to magnesium chloride by treatment with hydrochloric acid and heating of the product to eliminate water:
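Mg(OH)2(s) + 2 HCl(aq) → MgCl2(aq) + 2 H2O(l)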
The salt is then electrolyzed in the molten state. At the cathode, the Mg2+ ion is reduced by two electrons to magnesium metal:
Mg2+ + 2 e− → Mg
At the anode, each pair of Cl− ions is oxidized to chlorine gas, releasing two electrons to complete the circuit:
2 Cl− → Cl2(g) + 2 e−
Carbothermic process
The carbothermic route to magnesium has been recognized as a low energy, yet high productivity path to magnesium extraction. The chemistry is as follows:
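MgO(s) + C(s) → Mg(g) + CO(g)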
A disadvantage of this method is that slow cooling of the vapour can cause the reaction to quickly revert. To prevent this, the magnesium can be dissolved directly in a suitable metal solvent before reversion starts, or the vapour can be quenched rapidly.
YSZ process
A newer process, solid oxide membrane technology, involves the electrolytic reduction of MgO. At the cathode, the Mg2+ ion is reduced by two electrons to magnesium metal. The electrolyte is yttria-stabilized zirconia (YSZ). The anode is a liquid metal. At the YSZ/liquid metal anode, the oxide ion (O2−) is oxidized. A layer of graphite borders the liquid metal anode, and at this interface carbon and oxygen react to form carbon monoxide. When silver is used as the liquid metal anode, no reductant carbon or hydrogen is needed, and only oxygen gas is evolved at the anode. It was reported in 2011 that this method provides a 40% reduction in cost per pound over the electrolytic reduction method.
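In outline, the electrode half-reactions can be sketched as:
Cathode: Mg2+ + 2 e− → Mg
Anode: O2− → ½ O2 + 2 e− (the oxygen then reacting with the graphite to give CO, or evolving as O2 over a silver anode)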
Rieke process
Rieke et al. developed a "general approach for preparing highly reactive metal powders by reducing metal salts in ethereal or hydrocarbon solvents using alkali metals as reducing agents" now known as the Rieke process. Rieke finalized the identification of Rieke metals in 1989, one of which was Rieke-magnesium, first produced in 1974.
History
The name magnesium originates from the Greek word for locations related to the tribe of the Magnetes, either a district in Thessaly called Magnesia or Magnesia ad Sipylum, now in Turkey. It is related to magnetite and manganese, which also originated from this area, and required differentiation as separate substances. See manganese for this history.
In 1618, a farmer at Epsom in England attempted to give his cows water from a local well. The cows refused to drink because of the water's bitter taste, but the farmer noticed that the water seemed to heal scratches and rashes. The substance obtained by evaporating the water became known as Epsom salts and its fame spread. It was eventually recognized as hydrated magnesium sulfate, MgSO4·7H2O.
The metal itself was first isolated by Sir Humphry Davy in England in 1808. He used electrolysis on a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was 'magnium', but the name magnesium is now used in most European languages.
Uses
Magnesium metal
Magnesium is the third-most-commonly-used structural metal, following iron and aluminium. The main applications of magnesium are, in order: aluminium alloys, die-casting (alloyed with zinc), removing sulfur in the production of iron and steel, and the production of titanium in the Kroll process.
Magnesium is used in lightweight materials and alloys. For example, when infused with silicon carbide nanoparticles, it has extremely high specific strength.
Historically, magnesium was one of the main aerospace construction metals and was used for German military aircraft as early as World War I and extensively for German aircraft in World War II. The Germans coined the name "Elektron" for magnesium alloy, a term which is still used today. In the commercial aerospace industry, magnesium was generally restricted to engine-related components, due to fire and corrosion hazards. Magnesium alloy use in aerospace is increasing in the 21st century, driven by the importance of fuel economy. Magnesium alloys can act as replacements for aluminium and steel alloys in structural applications.
Aircraft
Wright Aeronautical used a magnesium crankcase in the WWII-era Wright R-3350 Duplex Cyclone aviation engine. This presented a serious problem for the earliest models of the Boeing B-29 Superfortress heavy bomber when an in-flight engine fire ignited the engine crankcase. The resulting combustion was as hot as 5,600 °F (3,100 °C) and could sever the wing spar from the fuselage.
Automotive
Mercedes-Benz used the alloy Elektron in the bodywork of an early model Mercedes-Benz 300 SLR; these cars competed in the 1955 World Sportscar Championship, including a win at the Mille Miglia, and at Le Mans, where one was involved in the 1955 Le Mans disaster in which spectators were showered with burning fragments of Elektron.
Porsche used magnesium alloy frames in the 917/053 that won Le Mans in 1971, and continues to use magnesium alloys for its engine blocks due to the weight advantage.
Volkswagen Group has used magnesium in its engine components for many years.
Mitsubishi Motors uses magnesium for its paddle shifters.
BMW used magnesium alloy blocks in their N52 engine, including an aluminium alloy insert for the cylinder walls and cooling jackets surrounded by a high-temperature magnesium alloy AJ62A. The engine was used worldwide between 2005 and 2011 in various 1, 3, 5, 6, and 7 series models; as well as the Z4, X1, X3, and X5.
Chevrolet used the magnesium alloy AE44 in the 2006 Corvette Z06.
Both AJ62A and AE44 are recent developments in high-temperature low-creep magnesium alloys. The general strategy for such alloys is to form intermetallic precipitates at the grain boundaries, for example by adding mischmetal or calcium.
Electronics
Because of its low density and good mechanical and electrical properties, magnesium is used in the manufacture of mobile phones, laptop and tablet computers, cameras, and other electronic components. Because of its light weight, it was offered as a premium feature in some 2020 laptops.
Source of light
When burning in air, magnesium produces a brilliant white light that includes strong ultraviolet wavelengths. Magnesium powder (flash powder) was used for subject illumination in the early days of photography. Later, magnesium filament was used in electrically ignited single-use photography flashbulbs. Magnesium powder is used in fireworks and marine flares where a brilliant white light is required. It was also used for various theatrical effects, such as lightning, pistol flashes, and supernatural appearances.
Magnesium is flammable, burning at a temperature of approximately , and the autoignition temperature of magnesium ribbon is approximately . Magnesium's high combustion temperature makes it a useful tool for starting emergency fires. Other uses include flash photography, flares, pyrotechnics, fireworks sparklers, and trick birthday candles. Magnesium is also often used to ignite thermite or other materials that require a high ignition temperature. Magnesium continues to be used as an incendiary element in warfare.
Flame temperatures of magnesium and magnesium alloys can reach , although flame height above the burning metal is usually less than . Once ignited, such fires are difficult to extinguish because they resist several substances commonly used to put out fires; combustion continues in nitrogen (forming magnesium nitride), in carbon dioxide (forming magnesium oxide and carbon), and in water (forming magnesium oxide and hydrogen, which also combusts due to heat in the presence of additional oxygen). This property was used in incendiary weapons during the firebombing of cities in World War II, where the only practical civil defense was to smother a burning flare under dry sand to exclude atmosphere from the combustion.
Chemical reagent
In the form of turnings or ribbons, to prepare Grignard reagents, which are useful in organic synthesis.
Other
As an additive agent in conventional propellants and the production of nodular graphite in cast iron.
As a reducing agent to separate uranium and other metals from their salts.
As a sacrificial (galvanic) anode to protect boats, underground tanks, pipelines, buried structures, and water heaters.
Alloyed with zinc to produce the zinc sheet used in photoengraving plates in the printing industry, dry-cell battery walls, and roofing.
Alloyed with aluminium; these aluminium–magnesium alloys are used mainly for beverage cans, sports equipment such as golf clubs, fishing reels, and archery bows and arrows.
Many car and aircraft manufacturers have made engine and body parts from magnesium.
Magnesium batteries have been commercialized as primary batteries, and are an active topic of research for rechargeable batteries.
Compounds
Magnesium compounds, primarily magnesium oxide (MgO), are used as a refractory material in furnace linings for producing iron, steel, nonferrous metals, glass, and cement. Magnesium oxide and other magnesium compounds are also used in the agricultural, chemical, and construction industries. Magnesium oxide from calcination is used as an electrical insulator in fire-resistant cables.
Magnesium reacts with haloalkanes to give Grignard reagents, which are used for a wide variety of organic reactions forming carbon–carbon bonds.
Magnesium salts are included in various foods, fertilizers (magnesium is a component of chlorophyll), and microbe culture media.
Magnesium sulfite is used in the manufacture of paper (sulfite process).
Magnesium phosphate is used to fireproof wood used in construction.
Magnesium hexafluorosilicate is used for moth-proofing textiles.
Biological roles
Mechanism of action
The important interaction between phosphate and magnesium ions makes magnesium essential to the basic nucleic acid chemistry of all cells of all known living organisms. More than 300 enzymes require magnesium ions for their catalytic action, including all enzymes using or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. The ATP molecule is normally found in a chelate with a magnesium ion.
Nutrition
Diet
Spices, nuts, cereals, cocoa and vegetables are good sources of magnesium. Green leafy vegetables such as spinach are also rich in magnesium.
Dietary recommendations
In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S., the Recommended Dietary Allowances (RDAs) are 400 mg for men ages 19–30 and 420 mg for those older; for women, 310 mg for ages 19–30 and 320 mg for those older.
Supplementation
Numerous pharmaceutical preparations of magnesium and dietary supplements are available. In two human trials, magnesium oxide, one of the most common forms in magnesium dietary supplements because of its high magnesium content per weight, was less bioavailable than magnesium citrate, chloride, lactate or aspartate.
Metabolism
An adult body has 22–26 grams of magnesium, with 60% in the skeleton, 39% intracellular (20% in skeletal muscle), and 1% extracellular. Serum levels are typically 0.7–1.0 mmol/L or 1.8–2.4 mEq/L. Serum magnesium levels may be normal even when intracellular magnesium is deficient. The mechanisms for maintaining the magnesium level in the serum are varying gastrointestinal absorption and renal excretion. Intracellular magnesium is correlated with intracellular potassium. Increased magnesium lowers calcium and can either prevent hypercalcemia or cause hypocalcemia depending on the initial level. Both low and high protein intake conditions inhibit magnesium absorption, as does the amount of phosphate, phytate, and fat in the gut. Unabsorbed dietary magnesium is excreted in feces; absorbed magnesium is excreted in urine and sweat.
Detection in serum and plasma
Magnesium status may be assessed by measuring serum and erythrocyte magnesium concentrations coupled with urinary and fecal magnesium content, but intravenous magnesium loading tests are more accurate and practical. A retention of 20% or more of the injected amount indicates deficiency. As of 2004, no biomarker has been established for magnesium.
Magnesium concentrations in plasma or serum may be monitored for efficacy and safety in those receiving the drug therapeutically, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdose. The newborn children of mothers who received parenteral magnesium sulfate during labor may exhibit toxicity with normal serum magnesium levels.
Deficiency
Low plasma magnesium (hypomagnesemia) is common: it is found in 2.5–15% of the general population. From 2005 to 2006, 48 percent of the United States population consumed less magnesium than recommended in the Dietary Reference Intake. Other causes are increased renal or gastrointestinal loss, an increased intracellular shift, and proton-pump inhibitor antacid therapy. Most are asymptomatic, but symptoms referable to neuromuscular, cardiovascular, and metabolic dysfunction may occur. Alcoholism is often associated with magnesium deficiency. Chronically low serum magnesium levels are associated with metabolic syndrome, diabetes mellitus type 2, fasciculation, and hypertension.
Therapy
Intravenous magnesium is recommended by the ACC/AHA/ESC 2006 Guidelines for Management of Patients With Ventricular Arrhythmias and the Prevention of Sudden Cardiac Death for patients with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome; and for the treatment of patients with digoxin induced arrhythmias.
Intravenous magnesium sulfate is used for the management of pre-eclampsia and eclampsia.
Hypomagnesemia, including that caused by alcoholism, is reversible by oral or parenteral magnesium administration depending on the degree of deficiency.
There is limited evidence that magnesium supplementation may play a role in the prevention and treatment of migraine.
Sorted by type of magnesium salt, other therapeutic applications include:
Magnesium sulfate, as the heptahydrate called Epsom salts, is used as bath salts, a laxative, and a highly soluble fertilizer.
Magnesium hydroxide, suspended in water, is used in milk of magnesia antacids and laxatives.
Magnesium chloride, oxide, gluconate, malate, orotate, glycinate, ascorbate and citrate are all used as oral magnesium supplements.
Magnesium borate, magnesium salicylate, and magnesium sulfate are used as antiseptics.
Magnesium bromide is used as a mild sedative (this action is due to the bromide, not the magnesium).
Magnesium stearate is a slightly flammable white powder with lubricating properties. In pharmaceutical technology, it is used in pharmacological manufacture to prevent tablets from sticking to the equipment while compressing the ingredients into tablet form.
Magnesium carbonate powder is used by athletes such as gymnasts, weightlifters, and climbers to eliminate palm sweat, prevent sticking, and improve the grip on gymnastic apparatus, lifting bars, and climbing rocks.
Overdose
Overdose from dietary sources alone is unlikely because excess magnesium in the blood is promptly filtered by the kidneys, and overdose is more likely in the presence of impaired renal function. Overdose can nevertheless occur with excessive intake of supplements: megadose therapy has caused death in a young child, and severe hypermagnesemia in a woman and a young girl who had healthy kidneys. The most common symptoms of overdose are nausea, vomiting, and diarrhea; other symptoms include hypotension, confusion, slowed heart and respiratory rates, deficiencies of other minerals, coma, cardiac arrhythmia, and death from cardiac arrest.
Function in plants
Plants require magnesium to synthesize chlorophyll, essential for photosynthesis. Magnesium in the center of the porphyrin ring in chlorophyll functions in a manner similar to the iron in the center of the porphyrin ring in heme. Magnesium deficiency in plants causes late-season yellowing between leaf veins, especially in older leaves, and can be corrected by applying either Epsom salts (which are rapidly leached) or crushed dolomitic limestone to the soil.
Safety precautions
Magnesium metal and its alloys can be explosive hazards; they are highly flammable in their pure form when molten or in powder or ribbon form. Burning or molten magnesium reacts violently with water. When working with powdered magnesium, safety glasses with UV filters (such as welders use) are employed, because burning magnesium produces ultraviolet light that can permanently damage the retina of a human eye.
Magnesium is capable of reducing water and releasing highly flammable hydrogen gas:
Mg(s) + 2 H2O(l) → Mg(OH)2(s) + H2(g)
Therefore, water cannot extinguish magnesium fires. The hydrogen gas produced intensifies the fire. Dry sand is an effective smothering agent, but only on relatively level and flat surfaces.
Magnesium reacts with carbon dioxide exothermically to form magnesium oxide and carbon:
2 Mg(s) + CO2(g) → 2 MgO(s) + C(s)
Hence, carbon dioxide fuels rather than extinguishes magnesium fires.
Burning magnesium can be quenched by using a Class D dry chemical fire extinguisher, or by covering the fire with sand or magnesium foundry flux to remove its air source.
| Physical sciences | Chemical elements_2 | null |
18947 | https://en.wikipedia.org/wiki/Metre | Metre | The metre (or meter in US spelling; symbol: m) is the base unit of length in the International System of Units (SI). Since 2019, the metre has been defined as the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second, where the second is defined by a hyperfine transition frequency of caesium.
The metre was originally defined in 1791 by the French National Assembly as one ten-millionth of the distance from the equator to the North Pole along a great circle, so the Earth's polar circumference is approximately 40,000 km.
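By construction, the quarter meridian was taken as exactly 10,000,000 m, so the full polar circumference follows as 4 × 10,000,000 m = 40,000,000 m, i.e. 40,000 km.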
In 1799, the metre was redefined in terms of a prototype metre bar; the bar used was changed in 1889, and in 1960 the metre was redefined in terms of a certain number of wavelengths of a certain emission line of krypton-86. The current definition was adopted in 1983 and modified slightly in 2002 to clarify that the metre is a measure of proper length. From 1983 until 2019, the metre was formally defined as the length of the path travelled by light in vacuum in 1/299,792,458 of a second. After the 2019 revision of the SI, this definition was rephrased to include the definition of a second in terms of the caesium frequency. This series of amendments did not alter the size of the metre significantly – today Earth's polar circumference measures about 40,008 km, a change of about 200 parts per million from the original value of exactly 40,000 km, which also includes improvements in the accuracy of measuring the circumference.
Spelling
Metre is the standard spelling of the metric unit for length in nearly all English-speaking nations, the exceptions being the United States and the Philippines which use meter.
Measuring devices (such as ammeter, speedometer) are spelled "-meter" in all variants of English. The suffix "-meter" has the same Greek origin as the unit of length.
Etymology
The etymological roots of metre can be traced to the Greek verb μετρέω (metréo, "(I) measure, count or compare") and noun μέτρον (métron, "a measure"), which were used for physical measurement, for poetic metre and by extension for moderation or avoiding extremism (as in "be measured in your response"). This range of uses is also found in Latin, French, English and other languages. The Greek word is derived from the Proto-Indo-European root *meh₁- 'to measure'. The use of the word metre (for the French unit mètre) in English began at least as early as 1797.
History of definition
Universal measure: the metre linked to the figure of the Earth
Galileo discovered gravitational acceleration to explain the fall of bodies at the surface of the Earth. He also observed the regularity of the period of swing of the pendulum and that this period depended on the length of the pendulum.
Kepler's laws of planetary motion contributed both to the discovery of Newton's law of universal gravitation and to the determination of the distance from the Earth to the Sun by Giovanni Domenico Cassini. Both also relied on a determination of the size of the Earth, then considered a sphere, made by Jean Picard through triangulation of the Paris meridian. In 1671, Jean Picard also measured the length of a seconds pendulum at the Paris Observatory and proposed that this unit of measurement be called the astronomical radius (French: Rayon Astronomique). In 1675, Tito Livio Burattini suggested a term meaning universal measure for this unit of length, but it was then discovered that the length of a seconds pendulum varies from place to place.
Christiaan Huygens identified the centrifugal force, which explained variations of gravitational acceleration depending on latitude. He also mathematically formulated the link between the length of the simple pendulum and gravitational acceleration. According to Alexis Clairaut, the study of variations in gravitational acceleration was a way to determine the figure of the Earth, whose crucial parameter was the flattening of the Earth ellipsoid. In the 18th century, in addition to its significance for cartography, geodesy grew in importance as a means of empirically demonstrating the theory of gravity, which Émilie du Châtelet promoted in France in combination with Leibniz's mathematical work, and because the radius of the Earth was the unit to which all celestial distances were to be referred. Indeed, Earth proved to be an oblate spheroid through geodetic surveys in Ecuador and Lapland, and this new data called into question the value of the Earth's radius as Picard had calculated it.
After the Anglo-French Survey, the French Academy of Sciences commissioned an expedition led by Jean Baptiste Joseph Delambre and Pierre Méchain, lasting from 1792 to 1798, which measured the distance between a belfry in Dunkirk and Montjuïc castle in Barcelona at the longitude of the Paris Panthéon. When the length of the metre was defined as one ten-millionth of the distance from the North Pole to the Equator, the flattening of the Earth ellipsoid was assumed to be .
In 1841, Friedrich Wilhelm Bessel, using the method of least squares, calculated a new value for the flattening of the Earth from several arc measurements. He also devised a new instrument for measuring gravitational acceleration, which was first used in Switzerland by Emile Plantamour and Charles Sanders Peirce; Isaac-Charles Élisée Cellérier (8.01.1818 – 2.10.1889), a Genevan mathematician, soon independently discovered a mathematical formula to correct systematic errors of this device which had been noticed by Plantamour and Adolphe Hirsch. This allowed Friedrich Robert Helmert to determine a remarkably accurate value for the flattening of the Earth when he proposed his ellipsoid of reference in 1901. This was also the result of the Metre Convention of 1875, when the metre was adopted as an international scientific unit of length for the convenience of continental European geodesists following the example of Ferdinand Rudolph Hassler.
Meridional definition
In 1790, one year before it was ultimately decided that the metre would be based on the Earth quadrant (a quarter of the Earth's circumference through its poles), Talleyrand proposed that the metre be the length of the seconds pendulum at a latitude of 45°. This option, with one-third of this length defining the foot, was also considered by Thomas Jefferson and others for redefining the yard in the United States shortly after gaining independence from the British Crown.
Instead of the seconds pendulum method, the commission of the French Academy of Sciences – whose members included Borda, Lagrange, Laplace, Monge, and Condorcet – decided that the new measure should be equal to one ten-millionth of the distance from the North Pole to the Equator, determined through measurements along the meridian passing through Paris. Apart from the obvious consideration of safe access for French surveyors, the Paris meridian was also a sound choice for scientific reasons: a portion of the quadrant from Dunkirk to Barcelona (about 1000 km, or one-tenth of the total) could be surveyed with start- and end-points at sea level, and that portion was roughly in the middle of the quadrant, where the effects of the Earth's oblateness were expected not to have to be accounted for. Improvements in the measuring devices designed by Borda and used for this survey also raised hopes for a more accurate determination of the length of this meridian arc.
The task of surveying the Paris meridian arc took more than six years (1792–1798). The technical difficulties were not the only problems the surveyors had to face in the convulsed period of the aftermath of the French Revolution: Méchain and Delambre, and later Arago, were imprisoned several times during their surveys, and Méchain died in 1804 of yellow fever, which he contracted while trying to improve his original results in northern Spain. In the meantime, the commission of the French Academy of Sciences calculated a provisional value from older surveys of 443.44 lignes. This value was set by legislation on 7 April 1795.
In 1799, a commission including Johan Georg Tralles, Jean Henri van Swinden, Adrien-Marie Legendre and Jean-Baptiste Delambre calculated the distance from Dunkirk to Barcelona using the data of the triangulation between these two towns and determined the portion of the distance from the North Pole to the Equator it represented. Pierre Méchain's and Jean-Baptiste Delambre's measurements were combined with the results of the Spanish-French geodetic mission and a value of 1/334 was found for the Earth's flattening. However, French astronomers knew from earlier estimates of the Earth's flattening that different meridian arcs could have different lengths and that their curvature could be irregular. The distance from the North Pole to the Equator was then extrapolated from the measurement of the Paris meridian arc between Dunkirk and Barcelona and was determined as 5,130,740 toises. As the metre had to be equal to one ten-millionth of this distance, it was defined as 0.513074 toise or 3 feet and 11.296 lines of the Toise of Peru, which had been constructed in 1735 for the French Geodesic Mission to the Equator. When the final result was known, a bar whose length was closest to the meridional definition of the metre was selected and placed in the National Archives on 22 June 1799 (4 messidor An VII in the Republican calendar) as a permanent record of the result.
Early adoption of the metre as a scientific unit of length: the forerunners
In 1816, Ferdinand Rudolph Hassler was appointed first Superintendent of the Survey of the Coast. Trained in geodesy in Switzerland, France and Germany, Hassler had brought a standard metre made in Paris to the United States in October 1805. He designed a baseline apparatus which instead of bringing different bars in actual contact during measurements, used only one bar calibrated on the metre and optical contact. Thus the metre became the unit of length for geodesy in the United States.
In 1830, Hassler became head of the Office of Weights and Measures, which became a part of the Survey of the Coast. He compared various units of length used in the United States at that time and measured coefficients of expansion to assess temperature effects on the measurements.
In 1832, Carl Friedrich Gauss studied the Earth's magnetic field and proposed adding the second to the basic units of the metre and the kilogram in the form of the CGS system (centimetre, gram, second). In 1836, he founded the Magnetischer Verein, the first international scientific association, in collaboration with Alexander von Humboldt and Wilhelm Edouard Weber. The coordination of the observation of geophysical phenomena such as the Earth's magnetic field, lightning and gravity in different points of the globe stimulated the creation of the first international scientific associations. The foundation of the Magnetischer Verein would be followed by that of the Central European Arc Measurement (German: Mitteleuropaïsche Gradmessung) on the initiative of Johann Jacob Baeyer in 1863, and by that of the International Meteorological Organisation whose president, the Swiss meteorologist and physicist, Heinrich von Wild would represent Russia at the International Committee for Weights and Measures (CIPM).
In 1834, Hassler measured the first baseline of the Survey of the Coast at Fire Island, shortly before Louis Puissant declared to the French Academy of Sciences in 1836 that Jean Baptiste Joseph Delambre and Pierre Méchain had made errors in the meridian arc measurement which had been used to determine the length of the metre. Errors in the method of calculating the length of the Paris meridian were taken into account by Bessel when he proposed his reference ellipsoid in 1841.
Egyptian astronomy has ancient roots which were revived in the 19th century by the modernist impetus of Muhammad Ali, who founded an observatory in Sabtieh, Boulaq district, Cairo, which he was keen to keep abreast of ongoing progress in this science. In 1858, a Technical Commission was set up to continue, by adopting the procedures instituted in Europe, the cadastre work inaugurated under Muhammad Ali. This Commission suggested to Viceroy Mohammed Sa'id Pasha the idea of buying geodetic devices, which were ordered in France. While Mahmud Ahmad Hamdi al-Falaki was in charge, in Egypt, of the direction of the work of the general map, the viceroy entrusted to Ismail Mustafa al-Falaki the study, in Europe, of the precision apparatus calibrated against the metre intended to measure the geodesic bases and already built by Jean Brunner in Paris. Ismail Mustafa was tasked with carrying out the experiments necessary for determining the expansion coefficients of the two platinum and brass bars, and with comparing the Egyptian standard with a known standard. The Spanish standard designed by Carlos Ibáñez e Ibáñez de Ibero and Frutos Saavedra Meneses was chosen for this purpose, as it had served as a model for the construction of the Egyptian standard. In addition, the Spanish standard had been compared with Borda's double-toise N° 1, which served as a comparison module for the measurement of all geodesic bases in France, and was also to be compared to the Ibáñez apparatus. In 1954, the connection of the southerly extension of the Struve Geodetic Arc with an arc running northwards from South Africa through Egypt would bring the course of a major meridian arc back to land where Eratosthenes had founded geodesy.
Seventeen years after Bessel calculated his ellipsoid of reference, some of the meridian arcs the German astronomer had used for his calculation had been enlarged. This was a very important circumstance because the influence of errors due to vertical deflections was minimized in proportion to the length of the meridian arcs: the longer the meridian arcs, the more precise the image of the Earth ellipsoid would be. After the Struve Geodetic Arc measurement, it was resolved in the 1860s, at the initiative of Carlos Ibáñez e Ibáñez de Ibero, who would become the first president of both the International Geodetic Association and the International Committee for Weights and Measures, to remeasure the arc of meridian from Dunkirk to Formentera and to extend it from Shetland to the Sahara. This did not pave the way to a new definition of the metre, because it was known that the theoretical definition of the metre had been inaccessible and misleading at the time of Delambre and Méchain's arc measurement: the geoid is a ball which on the whole can be assimilated to an oblate spheroid, but which in detail differs from it so as to prohibit any generalization and any extrapolation from the measurement of a single meridian arc. In 1859, Friedrich von Schubert demonstrated that several meridians did not have the same length, confirming a hypothesis of Jean Le Rond d'Alembert. He also proposed an ellipsoid with three unequal axes. In 1860, Elie Ritter, a mathematician from Geneva, computed from Schubert's data that the Earth ellipsoid could rather be a spheroid of revolution, in accordance with Adrien-Marie Legendre's model. However, the following year, resuming his calculation on the basis of all the data available at the time, Ritter came to the conclusion that the problem was only resolved in an approximate manner, the data appearing too scant, and some of it affected by vertical deflections, in particular the latitude of Montjuïc in the French meridian arc, whose determination had also been affected to a lesser degree by systematic errors of the repeating circle.
It was well known that by measuring the latitude of two stations in Barcelona, Méchain had found that the difference between these latitudes was greater than predicted by direct measurement of distance by triangulation and that he did not dare to admit this inaccuracy. This was later explained by clearance in the central axis of the repeating circle causing wear and consequently the zenith measurements contained significant systematic errors. Polar motion predicted by Leonhard Euler and later discovered by Seth Carlo Chandler also had an impact on accuracy of latitudes' determinations. Among all these sources of error, it was mainly an unfavourable vertical deflection that gave an inaccurate determination of Barcelona's latitude and a metre "too short" compared to a more general definition taken from the average of a large number of arcs.
As early as 1861, Johann Jacob Baeyer sent a memorandum to the King of Prussia recommending international collaboration in Central Europe with the aim of determining the shape and dimensions of the Earth. At the time of its creation, the association had sixteen member countries: Austrian Empire, Kingdom of Belgium, Denmark, seven German states (Grand Duchy of Baden, Kingdom of Bavaria, Kingdom of Hanover, Mecklenburg, Kingdom of Prussia, Kingdom of Saxony, Saxe-Coburg and Gotha), Kingdom of Italy, Netherlands, Russian Empire (for Poland), United Kingdoms of Sweden and Norway, as well as Switzerland. The Central European Arc Measurement created a Central Office, located at the Prussian Geodetic Institute, whose management was entrusted to Johann Jacob Baeyer.
Baeyer's goal was a new determination of anomalies in the shape of the Earth using precise triangulations, combined with gravity measurements. This involved determining the geoid by means of gravimetric and leveling measurements, in order to deduce the exact knowledge of the terrestrial spheroid while taking into account local variations. To resolve this problem, it was necessary to study considerable areas of land carefully in all directions. Baeyer developed a plan to coordinate geodetic surveys in the space between the parallels of Palermo and Freetown Christiana (Denmark) and the meridians of Bonn and Trunz (German name for Milejewo in Poland). This territory was covered by a network of triangles and included more than thirty observatories or stations whose positions were determined astronomically. Baeyer proposed to remeasure ten arcs of meridians and a larger number of arcs of parallels, and to compare the curvature of the meridian arcs on the two slopes of the Alps, in order to determine the influence of this mountain range on vertical deflection. Baeyer also planned to determine the curvature of the seas: the Mediterranean Sea and Adriatic Sea in the south, the North Sea and the Baltic Sea in the north. In his mind, the cooperation of all the States of Central Europe could open the field to scientific research of the highest interest, research that each State, taken in isolation, was not able to undertake.
Spain and Portugal joined the European Arc Measurement in 1866. The French Empire hesitated for a long time before giving in to the demands of the Association, which asked the French geodesists to take part in its work. It was only after the Franco-Prussian War that Charles-Eugène Delaunay represented France at the Congress of Vienna in 1871. In 1874, Hervé Faye was appointed a member of the Permanent Commission, which was presided over by Carlos Ibáñez e Ibáñez de Ibero.
The International Geodetic Association gained global importance with the accession of Chile, Mexico and Japan in 1888; Argentina and the United States in 1889; and the British Empire in 1898. The convention of the International Geodetic Association expired at the end of 1916. It was not renewed due to the First World War. However, the activities of the International Latitude Service were continued thanks to the efforts of H.G. van de Sande Bakhuyzen and Raoul Gautier (1854–1931), respectively directors of Leiden Observatory and Geneva Observatory.
International prototype metre bar
After the French Revolution, the Napoleonic Wars led to the adoption of the metre in Latin America following the independence of Brazil and Hispanic America, while the American Revolution prompted the foundation of the Survey of the Coast in 1807 and the creation of the Office of Standard Weights and Measures in 1830. In continental Europe, the Napoleonic Wars fostered German nationalism, which later led to the unification of Germany in 1871. Meanwhile, most European countries had adopted the metre. In the 1870s, the German Empire played a pivotal role in the unification of the metric system through the European Arc Measurement, but its overwhelming influence was mitigated by that of neutral states. While the German astronomer Wilhelm Julius Foerster, director of the Berlin Observatory and of the German Weights and Measures Service, boycotted the Permanent Committee of the International Metre Commission, along with the Russian and Austrian representatives, in order to promote the foundation of a permanent International Bureau of Weights and Measures, the German-born Swiss astronomer Adolphe Hirsch conformed to the opinion of Italy and Spain to create, in spite of French reluctance, the International Bureau of Weights and Measures in France as a permanent institution, to the disadvantage of the Conservatoire national des Arts et Métiers.
At that time, units of measurement were defined by primary standards, and unique artifacts made of different alloys with distinct coefficients of expansion were the legal basis of units of length. A wrought iron ruler, the Toise of Peru, also called Toise de l'Académie, was the French primary standard of the toise, and the metre was officially defined by an artifact made of platinum kept in the National Archives. Besides the latter, another platinum standard and twelve iron standards of the metre were made by Étienne Lenoir in 1799. One of them became known as the Committee Meter in the United States and served as standard of length in the United States Coast Survey until 1890. According to geodesists, these standards were secondary standards deduced from the Toise of Peru. In Europe, except in Spain, surveyors continued to use measuring instruments calibrated on the Toise of Peru. Among these, the toise of Bessel and the apparatus of Borda were respectively the main references for geodesy in Prussia and in France. These measuring devices consisted of bimetallic rulers in platinum and brass, or iron and zinc, fixed together at one extremity to assess the variations in length produced by any change in temperature. The combination of two bars made of two different metals made it possible to take thermal expansion into account without measuring the temperature. A French scientific instrument maker, Jean Nicolas Fortin, had made three direct copies of the Toise of Peru, one for Friedrich Georg Wilhelm von Struve, a second for Heinrich Christian Schumacher in 1821 and a third for Friedrich Bessel in 1823. In 1831, Henri-Prudence Gambey also produced a copy of the Toise of Peru, which was kept at Altona Observatory.
In the second half of the 19th century, the creation of the International Geodetic Association would mark the adoption of new scientific methods. It then became possible to accurately measure parallel arcs, since the difference in longitude between their ends could be determined thanks to the invention of the electrical telegraph. Furthermore, advances in metrology combined with those of gravimetry have led to a new era of geodesy. If precision metrology had needed the help of geodesy, the latter could not continue to prosper without the help of metrology. It was then necessary to define a single unit to express all the measurements of terrestrial arcs and all determinations of the gravitational acceleration by means of pendulum.
In 1866, the most important concern was that the Toise of Peru, the standard of the toise constructed in 1735 for the French Geodesic Mission to the Equator, might be so much damaged that comparison with it would be worthless, while Bessel had questioned the accuracy of copies of this standard belonging to Altona and Koenigsberg Observatories, which he had compared to each other about 1840. This assertion was particularly worrying, because when the primary Imperial yard standard had partially been destroyed in 1834, a new standard of reference was constructed using copies of the "Standard Yard, 1760", instead of the pendulum's length as provided for in the Weights and Measures Act 1824, because the pendulum method proved unreliable. Nevertheless Ferdinand Rudolph Hassler's use of the metre and the creation of the Office of Standard Weights and Measures as an office within the Coast Survey contributed to the introduction of the Metric Act of 1866 allowing the use of the metre in the United States, and preceded the choice of the metre as international scientific unit of length and the proposal by the European Arc Measurement (German: Europäische Gradmessung) to establish a "European international bureau for weights and measures".
In 1867 at the second General Conference of the International Association of Geodesy held in Berlin, the question of an international standard unit of length was discussed in order to combine the measurements made in different countries to determine the size and shape of the Earth. According to a preliminary proposal made in Neuchâtel the precedent year, the General Conference recommended the adoption of the metre in replacement of the toise of Bessel, the creation of an International Metre Commission, and the foundation of a World institute for the comparison of geodetic standards, the first step towards the creation of the International Bureau of Weights and Measures.
Hassler's metrological and geodetic work also had a favourable response in Russia. In 1869, the Saint Petersburg Academy of Sciences sent to the French Academy of Sciences a report drafted by Otto Wilhelm von Struve, Heinrich von Wild, and Moritz von Jacobi, whose theorem has long supported the assumption of an ellipsoid with three unequal axes for the figure of the Earth, inviting his French counterpart to undertake joint action to ensure the universal use of the metric system in all scientific work.
In the 1870s and in light of modern precision, a series of international conferences was held to devise new metric standards. When a conflict broke out regarding the presence of impurities in the metre-alloy of 1874, a member of the Preparatory Committee since 1870 and Spanish representative at the Paris Conference in 1875, Carlos Ibáñez e Ibáñez de Ibero intervened with the French Academy of Sciences to rally France to the project to create an International Bureau of Weights and Measures equipped with the scientific means necessary to redefine the units of the metric system according to the progress of sciences.
The Metre Convention (Convention du Mètre) of 1875 mandated the establishment of a permanent International Bureau of Weights and Measures (BIPM: ) to be located in Sèvres, France. This new organisation was to construct and preserve a prototype metre bar, distribute national metric prototypes, and maintain comparisons between them and non-metric measurement standards. The organisation distributed such bars in 1889 at the first General Conference on Weights and Measures (CGPM: ), establishing the International Prototype Metre as the distance between two lines on a standard bar composed of an alloy of 90% platinum and 10% iridium, measured at the melting point of ice.
Metrology and paradigm shift in physics
The comparison of the new prototypes of the metre with each other involved the development of special measuring equipment and the definition of a reproducible temperature scale. The BIPM's thermometry work led to the discovery of special alloys of iron–nickel, in particular invar, whose practically negligible coefficient of expansion made it possible to develop simpler baseline measurement methods, and for which its director, the Swiss physicist Charles-Edouard Guillaume, was granted the Nobel Prize in Physics in 1920. Guillaume's Nobel Prize marked the end of an era in which metrology was leaving the field of geodesy to become a technological application of physics.
In 1921, the Nobel Prize in Physics was awarded to another Swiss scientist, Albert Einstein, who, following the Michelson–Morley experiment, had questioned the luminiferous aether in 1905, just as Newton had questioned Descartes' vortex theory in 1687 after Jean Richer's pendulum experiment in Cayenne, French Guiana.
Furthermore, special relativity changed conceptions of time and mass, while general relativity changed that of space. According to Newton, space was Euclidean, infinite and without boundaries and bodies gravitated around each other without changing the structure of space. Einstein's theory of gravity states, on the contrary, that the mass of a body has an effect on all other bodies while modifying the structure of space. A massive body induces a curvature of the space around it in which the path of light is inflected, as was demonstrated by the displacement of the position of a star observed near the Sun during an eclipse in 1919.
Wavelength definition
In 1873, James Clerk Maxwell suggested that light emitted by an element be used as the standard both for the unit of length and for the second. These two quantities could then be used to define the unit of mass. About the unit of length he wrote:
Charles Sanders Peirce's work promoted the advent of American science at the forefront of global metrology. Alongside his intercomparisons of artifacts of the metre and contributions to gravimetry through improvement of reversible pendulum, Peirce was the first to tie experimentally the metre to the wave length of a spectral line. According to him the standard length might be compared with that of a wave of light identified by a line in the solar spectrum. Albert Michelson soon took up the idea and improved it.
In 1893, the standard metre was first measured with an interferometer by Albert A. Michelson, the inventor of the device and an advocate of using some particular wavelength of light as a standard of length. By 1925, interferometry was in regular use at the BIPM. However, the International Prototype Metre remained the standard until 1960, when the eleventh CGPM defined the metre in the new International System of Units (SI) as equal to wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in vacuum.
Speed of light definition
To further reduce uncertainty, the 17th CGPM in 1983 replaced the definition of the metre with its current definition, thus fixing the length of the metre in terms of the second and the speed of light:
The metre is the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second.
This definition fixed the speed of light in vacuum at exactly 299,792,458 metres per second (≈300,000 km/s, or ≈1.079 billion km/hour). An intended by-product of the 17th CGPM's definition was that it enabled scientists to compare lasers accurately using frequency, resulting in wavelengths with one-fifth the uncertainty involved in the direct comparison of wavelengths, because interferometer errors were eliminated. To further facilitate reproducibility from lab to lab, the 17th CGPM also made the iodine-stabilised helium–neon laser "a recommended radiation" for realising the metre. For the purpose of delineating the metre, the BIPM currently considers the HeNe laser wavelength, , to be with an estimated relative standard uncertainty (U) of .
This uncertainty is currently one limiting factor in laboratory realisations of the metre, and it is several orders of magnitude poorer than that of the second, based upon the caesium fountain atomic clock (). Consequently, a realisation of the metre is usually delineated (not defined) today in labs as wavelengths of helium–neon laser light in vacuum, the error stated being only that of frequency determination. This bracket notation expressing the error is explained in the article on measurement uncertainty.
Practical realisation of the metre is subject to uncertainties in characterising the medium, to various uncertainties of interferometry, and to uncertainties in measuring the frequency of the source. A commonly used medium is air, and the National Institute of Standards and Technology (NIST) has set up an online calculator to convert wavelengths in vacuum to wavelengths in air. As described by NIST, in air, the uncertainties in characterising the medium are dominated by errors in measuring temperature and pressure. Errors in the theoretical formulas used are secondary.
By implementing a refractive index correction such as this, an approximate realisation of the metre can be implemented in air, for example, using the formulation of the metre as 1,579,800.762042(33) wavelengths of helium–neon laser light in vacuum, and converting the wavelengths in vacuum to wavelengths in air. Air is only one possible medium to use in a realisation of the metre, and any partial vacuum can be used, or some inert atmosphere like helium gas, provided the appropriate corrections for refractive index are implemented.
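The essential step in such a correction is dividing the vacuum wavelength by the refractive index of the medium. The sketch below shows only that step; the refractive index used is an assumed placeholder, whereas an actual realisation would obtain it from measured temperature, pressure, and humidity via the NIST calculator or the Edlén/Ciddor equations.

```python
# Sketch of the vacuum-to-air wavelength correction described above.
# n_air is an assumed placeholder value for laboratory air near 633 nm;
# a real realisation would compute it from measured ambient conditions.

def vacuum_to_medium_wavelength(lambda_vac_nm: float, n_medium: float) -> float:
    """Return the wavelength in a medium corresponding to a vacuum wavelength."""
    return lambda_vac_nm / n_medium

lambda_vac_nm = 632.991      # approximate HeNe vacuum wavelength, nm
n_air = 1.000271             # assumed refractive index of lab air (placeholder)

print(f"{vacuum_to_medium_wavelength(lambda_vac_nm, n_air):.3f} nm in air")  # ≈ 632.820 nm
```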
The metre is defined as the path length travelled by light in a given time, and practical laboratory length measurements in metres are determined by counting the number of wavelengths of laser light of one of the standard types that fit into the length, and converting the selected unit of wavelength to metres. Three major factors limit the accuracy attainable with laser interferometers for a length measurement:
uncertainty in vacuum wavelength of the source,
uncertainty in the refractive index of the medium,
least count resolution of the interferometer.
Of these, the last is peculiar to the interferometer itself. The conversion of a length in wavelengths to a length in metres is based upon the relation
λ = c / (n·f),
which converts the unit of wavelength λ to metres using c, the speed of light in vacuum in m/s. Here n is the refractive index of the medium in which the measurement is made, and f is the measured frequency of the source. Although conversion from wavelengths to metres introduces an additional error in the overall length due to measurement error in determining the refractive index and the frequency, the measurement of frequency is one of the most accurate measurements available.
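A minimal numerical sketch of this conversion follows. Only the value of c is exact; the source frequency is the approximate recommended value for the iodine-stabilised HeNe laser, and the refractive index and fringe count are hypothetical values chosen for illustration.

```python
# Convert a length expressed as a number of counted wavelengths into metres
# using lambda = c / (n * f). Only c is exact; the other inputs are illustrative.
c = 299_792_458.0            # speed of light in vacuum, m/s (exact)
f = 473.612353604e12         # source frequency, Hz (approximate iodine-stabilised HeNe value)
n = 1.000271                 # refractive index of the medium (assumed, e.g. laboratory air)

wavelength_m = c / (n * f)   # wavelength in the medium, in metres

fringe_count = 1_580_229.0   # wavelengths counted by the interferometer (hypothetical)
length_m = fringe_count * wavelength_m

print(f"wavelength ≈ {wavelength_m * 1e9:.4f} nm, length ≈ {length_m:.6f} m")
```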
The CIPM issued a clarification in 2002, noting that the definition of the metre applies only over distances short enough that effects due to the non-uniformity of the gravitational field (general relativity) can be ignored.
Timeline
Early adoptions of the metre internationally
In France, the metre was adopted as an exclusive measure in 1801 under the Consulate. This continued under the First French Empire until 1812, when Napoleon decreed the introduction of the non-decimal mesures usuelles, which remained in use in France up to 1840 in the reign of Louis Philippe. Meanwhile, the metre was adopted by the Republic of Geneva. After the joining of the canton of Geneva to Switzerland in 1815, Guillaume Henri Dufour published the first official Swiss map, for which the metre was adopted as the unit of length.
Adoption dates by country
France: 1801–1812, then 1840
Republic of Geneva, Switzerland: 1813
Kingdom of the Netherlands: 1820
Kingdom of Belgium: 1830
Chile: 1848
Kingdom of Sardinia, Italy: 1850
Spain: 1852
Portugal: 1852
Colombia: 1853
Ecuador: 1856
Mexico: 1857
Brazil: 1862
Argentina: 1863
Italy: 1863
United States: 1866
German Empire, Germany: 1872
Austria: 1875
Switzerland: 1877
SI prefixed forms of metre
SI prefixes can be used to denote decimal multiples and submultiples of the metre. Long distances are usually expressed in km, astronomical units (149.6 Gm), light-years (10 Pm), or parsecs (31 Pm), rather than in Mm or larger multiples; "30 cm", "30 m", and "300 m" are more common than "3 dm", "3 dam", and "3 hm", respectively.
The terms micron and millimicron have been used instead of micrometre (μm) and nanometre (nm), respectively, but this practice is discouraged.
Equivalents in other units
In conversions between metric and imperial or US customary units, "inch" and "yard" mean "international inch" and "international yard" respectively, though approximate conversions hold for both international and survey units. One metre is exactly equivalent to 10000/254 inches (approximately 39.37 inches) and to 10000/9144 yards (approximately 1.094 yards).
A simple mnemonic to assist with conversion is "three 3s": 1 metre is nearly equivalent to 3 feet 3⅜ inches. This gives an overestimate of 0.125 mm.
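A quick arithmetic check of the mnemonic, using the exact definition of the inch as 25.4 mm, confirms the stated overestimate:

```python
# Check of the "three 3s" mnemonic: 3 feet 3 3/8 inches versus one metre.
inches = 3 * 12 + 3 + 3 / 8      # 39.375 inches
millimetres = inches * 25.4      # exact, since the inch is defined as 25.4 mm
print(millimetres)               # 1000.125 -> overestimates one metre by 0.125 mm
```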
The ancient Egyptian cubit was about 0.5 m (surviving rods are 523–529 mm). Scottish and English definitions of the ell (2 cubits) were 941 mm (0.941 m) and 1143 mm (1.143 m) respectively. The ancient Parisian toise (fathom) was slightly shorter than 2 m and was standardised at exactly 2 m in the mesures usuelles system, such that 1 m was exactly 1⁄2 toise. The Russian verst was 1.0668 km. The Swedish mil was 10.688 km, but was changed to 10 km when Sweden converted to metric units.
| Physical sciences | Length and distance | null |
18955 | https://en.wikipedia.org/wiki/Mentha | Mentha | Mentha, also known as mint (from Greek μίνθα míntha, Linear B mi-ta), is a genus of flowering plants in the mint family, Lamiaceae. It is estimated that 13 to 24 species exist, but the exact distinction between species is unclear. Hybridization occurs naturally where some species' ranges overlap. Many hybrids and cultivars are known.
The genus has a subcosmopolitan distribution, growing best in wet environments and moist soils.
Description
Mints are aromatic, almost exclusively perennial herbs. They have wide-spreading underground and overground stolons and erect, square, branched stems. Mints will grow 10–120 cm (4–48 inches) tall and can spread over an indeterminate area. Due to their tendency to spread unchecked, some mints are considered invasive.
The leaves are arranged in opposite pairs, from oblong to lanceolate, often downy, and with a serrated margin. Leaf colors range from dark green and gray-green to purple, blue, and sometimes pale yellow.
The flowers are produced in long bracts from leaf axils. They are white to purple and produced in false whorls called verticillasters. The corolla is two-lipped with four subequal lobes, the upper lobe usually the largest. The fruit is a nutlet, containing one to four seeds.
Taxonomy
Mentha is a member of the tribe Mentheae in the subfamily Nepetoideae. The tribe contains about 65 genera, and relationships within it remain obscure. Authors have disagreed on the circumscription of Mentha. For example, M. cervina has been placed in Pulegium and Preslia, and M. cunninghamii has been placed in Micromeria. In 2004, a molecular phylogenetic study indicated that both M. cervina and M. cunninghamii should be included in Mentha. However, M. cunninghamii was excluded in a 2007 treatment of the genus.
More than 3,000 names have been published in the genus Mentha, at ranks from species to forms, the majority of which are regarded as synonyms or illegitimate names. The taxonomy of the genus is made difficult because many species hybridize readily, or are themselves derived from possibly ancient hybridization events. Seeds from hybrids give rise to variable offspring, which may spread through vegetative propagation. The variability has led to what has been described as "paroxysms of species and subspecific taxa"; for example, one taxonomist published 434 new mint taxa for central Europe alone between 1911 and 1916. Recent sources recognize between 18 and 24 species.
Species
Plants of the World Online recognizes the following species:
Mentha alaica Boriss.
Mentha aquatica L. – water mint, marsh mint
Mentha arvensis L. – corn mint, wild mint, Japanese peppermint, field mint, banana mint
Mentha atrolilacina B.J.Conn & D.J.Duval – slender mint
Mentha australis R.Br. – Australian mint
Mentha canadensis L. – Canada mint, American wild mint
Mentha cervina L. – Hart's pennyroyal
Mentha cunninghamii (Benth.) Benth. – New Zealand mint
Mentha dahurica Fisch. ex Benth. – Dahurian thyme
Mentha darvasica Boriss.
Mentha diemenica Spreng. – slender mint
Mentha gattefossei Maire
Mentha grandiflora Benth.
Mentha japonica (Miq.) Makino
Mentha laxiflora Benth. – forest mint
Mentha longifolia (L.) L. – horse mint
Mentha micrantha (Fisch. ex Benth.) Heinr.Braun
Mentha pamiroalaica Boriss.
Mentha pulegium L. – pennyroyal
Mentha requienii Benth. – Corsican mint
Mentha royleana Wall. ex Benth.
Mentha satureioides R.Br. – native pennyroyal
Mentha spicata L. – spearmint, garden mint (a cultivar of spearmint)
Mentha suaveolens Ehrh. – apple mint, pineapple mint (a variegated cultivar of apple mint)
Other species
There are a number of plants that have mint in the common English name but which do not belong to the genus Mentha:
Agastache sp. – known as horse mints
Calamintha sp. (syn. Clinopodium) – known as calamints
Clinopodium acinos (syn. Acinos arvensis) – known as backle mint
Elsholtzia ciliata – known as comb mint, crested late summer mint
Melissa officinalis – known as balm mint
Nepeta sp. – known as cat mint or catnip
Origanum sp. – known as rock mint
Persicaria odorata – known as Vietnamese mint
Sideritis montana – known as sider mint
Hybrids
The mint genus has a large grouping of recognized hybrids. Those accepted by Plants of the World Online are listed below. Parent species are taken from Tucker & Naczi (2007). Synonyms, along with cultivars and varieties where available, are included within the specific nothospecies.
Mentha × carinthiaca Host - M. arvensis × M. suaveolens
Mentha × dalmatica Tausch - M. arvensis × M. longifolia
Mentha × dumetorum Schult. - M. aquatica × M. longifolia
Mentha × gayeri Trautm. - M. longifolia × M. spicata × M. suaveolens
Mentha × gracilis Sole (syn. Mentha × gentilis) - M. arvensis × M. spicata – ginger mint, Scotch spearmint
Mentha × kuemmerlei Trautm. - M. aquatica × M. spicata × M. suaveolens
Mentha × locyana Borbás - M. longifolia × M. verticillata
Mentha × piperita L. - M. aquatica × M. spicata – peppermint, chocolate mint
Mentha × pyramidalis Ten. - M. aquatica × M. microphylla
Mentha × rotundifolia (L.) Huds. - M. longifolia × M. suaveolens – false apple mint
Mentha × suavis Guss. (syn. Mentha × amblardii, Mentha × lamiifolia, Mentha × langii, Mentha × mauponii, Mentha × maximilianea, Mentha × rodriguezii, Mentha × weissenburgensis) - M. aquatica × M. suaveolens
Mentha × verticillata L. - M. aquatica × M. arvensis
Mentha × villosa Huds. (syn. M. nemorosa) - M. spicata × M. suaveolens – large apple mint, foxtail mint, hairy mint, woolly mint, Cuban mint, mojito mint, and yerba buena in Cuba
Mentha × villosa-nervata Opiz - M. longifolia × M. spicata – sharp-toothed mint
Mentha × wirtgeniana F.W.Schultz (syn. Mentha × smithiana) - M. aquatica × M. arvensis × M. spicata – red raripila mint
Common names and cultivars
There are hundreds of common English names for species and cultivars of Mentha. These include:
Apple mint - Mentha suaveolens and Mentha × rotundifolia
Banana mint - Mentha arvensis 'Banana'
Bowles mint - Mentha villosa and Mentha × villosa 'Alopecuroides'
Canada mint - Mentha canadensis
Chocolate mint - Mentha × piperita 'Chocolate'
Corsican mint - Mentha requienii
Cuba mint - Mentha × villosa
Curly mint - Mentha spicata 'Curly'
Eau de Cologne mint - Mentha × piperita 'Citrata'
Field mint - Mentha arvensis
Flea mint - Mentha requienii
Ginger mint - Mentha × gracilis
Gray mint - Mentha longifolia
Green mint - Mentha spicata
Grey mint - Mentha longifolia
Japanese peppermint - Mentha arvensis var. piperascens
Japanese mint or Japanese medicine mint - Mentha spicata 'Abura'
Kiwi mint - Mentha cunninghamii
Lemon mint - Mentha × piperita var. citrata and Mentha × gentilis
Marsh mint - Mentha aquatica
Meadow mint - Mentha × gracilis and Mentha arvensis
Mojito mint - Mentha spicata 'Mojito'
Moroccan mint - Mentha spicata var. crispa 'Moroccan' and mints collected in Morocco
Pennyroyal - Mentha pulegium
Peppermint - Mentha × piperita and sometimes Mentha requienii
Pineapple mint - Mentha suaveolens 'Variegata' and Mentha suaveolens 'Pineapple'
Polemint - Mentha pulegium
Red raripila mint - Mentha × wirtgeniana
Round leaf mint - Mentha suaveolens
Spearmint - Mentha spicata
Strawberry mint - Mentha × piperita 'Strawberry'
Swiss mint - Mentha × piperita 'Swiss'
Tall mint - Mentha × wirtgeniana
Tea mint - Mentha × verticillata
Toothmint - Mentha × smithiana
Water mint - Mentha aquatica
Woolly mint - Mentha × rotundifolia
Distribution and habitat
The genus has a subcosmopolitan distribution across Europe, Africa (including Southern Africa), Asia, Australia and Oceania, North America, and South America. Its species can be found in many environments, but most grow best in wet environments and moist soils.
Ecology
Mints are used as food by the larvae of some Lepidoptera species, including buff ermine moths, and by beetles, such as Chrysolina coerulans (blue mint beetle) and C. herbacea (mint leaf beetle).
Diseases
Cultivation
All mints thrive near pools of water, lakes, rivers, and cool moist spots in partial shade. In general, mints tolerate a wide range of conditions, and can also be grown in full sun. Mint grows all year round.
They are fast-growing, extending their reach along surfaces through a network of runners. Due to their speedy growth, one plant of each desired mint, along with a little care, will provide more than enough mint for home use. Some mint species are more invasive than others. Even with the less invasive mints, care should be taken when mixing any mint with any other plants, lest the mint take over. To control mints in an open environment, they should be planted in deep, bottomless containers sunk in the ground, or planted above ground in tubs and barrels.
Some mints can be propagated by seed, but growth from seed can be an unreliable method for raising mint for two reasons: mint seeds are highly variable (i.e. one might not end up with what was supposedly planted) and some mint varieties are sterile. It is more effective to take and plant cuttings from the runners of healthy mints.
The most common and popular mints for commercial cultivation are peppermint (Mentha × piperita), native spearmint (Mentha spicata), Scotch spearmint (Mentha × gracilis), and cornmint (Mentha arvensis); also (more recently) apple mint (Mentha suaveolens).
Mints are supposed to make good companion plants, repelling insect pests and attracting beneficial ones. They are susceptible to whitefly and aphids.
Harvesting of mint leaves can be done at any time. Fresh leaves should be used immediately or stored up to a few days in plastic bags in a refrigerator. Optionally, leaves can be frozen in ice cube trays. Dried mint leaves should be stored in an airtight container placed in a cool, dark, dry area.
Uses
Culinary
The leaf, fresh or dried, is the culinary source of mint. Fresh mint is usually preferred over dried mint when storage of the mint is not a problem. The leaves have a warm, fresh, aromatic, sweet flavor with a cool aftertaste, and are used in teas, beverages, jellies, syrups, candies, and ice creams. In Middle Eastern cuisine, mint is used in lamb dishes, while in British cuisine and American cuisine, mint sauce and mint jelly are used, respectively. Mint (pudina) is a staple in Indian cuisine, used for flavouring curries and other dishes.
Mint is a necessary ingredient in Touareg tea, a popular tea in northern African and Arab countries. Alcoholic drinks sometimes feature mint for flavor or garnish, such as the mint julep and the mojito. Crème de menthe is a mint-flavored liqueur used in drinks such as the grasshopper.
Mint essential oil and menthol are extensively used as flavorings in breath fresheners, drinks, antiseptic mouth rinses, toothpaste, chewing gum, desserts, and candies, such as mint (candy) and mint chocolate. The substances that give the mints their characteristic aromas and flavors are menthol (the main aroma of peppermint and Japanese peppermint) and pulegone (in pennyroyal and Corsican mint). The compound primarily responsible for the aroma and flavor of spearmint is L-carvone.
Traditional medicine and cosmetics
The ancient Greeks rubbed mint on their arms, believing it would make them stronger. Mint was originally used as a medicinal herb to treat stomach ache and chest pains. There are several uses in traditional medicine and preliminary research for possible use of peppermint in treating irritable bowel syndrome.
Menthol from mint essential oil (40–90%) is an ingredient of many cosmetics and some perfumes. Menthol and mint essential oil are also used in aromatherapy which may have clinical use to alleviate post-surgery nausea.
Allergic reaction
Although it is used in many consumer products, mint may cause allergic reactions in some people, inducing symptoms such as abdominal cramps, diarrhea, headaches, heartburn, tingling or numbing around the mouth, anaphylaxis, or contact dermatitis.
Insecticides
Mint oil is also used as an environmentally friendly insecticide for its ability to kill some common pests such as wasps, hornets, ants, and cockroaches.
Room scent and aromatherapy
Known in Greek mythology as the herb of hospitality, one of mint's first known uses in Europe was as a room deodorizer. The herb was strewn across floors to cover the smell of the hard-packed soil. Stepping on the mint helped to spread its scent through the room. Today, it is more commonly used for aromatherapy through the use of essential oils.
Etymology of "mint"
The word "mint" descends from the Latin word mentha or menta, which is rooted in the Greek words mintha, minthē or mintē meaning "spearmint". The plant was personified in Greek mythology as Minthe, a nymph who was beloved by Hades and was transformed into a mint plant by either Persephone or Demeter. This, in turn, ultimately derived from a proto-Indo-European root that is also the origin of the Sanskrit -mantha, mathana (premna serratifolia).
| Biology and health sciences | Lamiales | null |
18956 | https://en.wikipedia.org/wiki/Marjoram | Marjoram | Marjoram (, Origanum majorana) is a cold-sensitive perennial herb or undershrub with sweet pine and citrus flavours. In some Middle Eastern countries, marjoram is synonymous with oregano, and there the names sweet marjoram and knotted marjoram are used to distinguish it from other plants of the genus Origanum. It is also called pot marjoram, although this name is also used for other cultivated species of Origanum.
History
Marjoram is indigenous to Cyprus, the Mediterranean, Turkey, Western Asia, the Arabian Peninsula, and the Levant, and was known to the ancient Greeks and Romans as a symbol of happiness. It may have spread to the British Isles during the Middle Ages. Marjoram was not widely used in the United States until after World War II.
The name marjoram (Old French: majorane; Medieval Latin: majorana) does not directly derive from the Latin word maior (major).
Marjoram is related to Samhain, the Celtic pagan holiday that would eventually become Halloween. It has also been used in Sephardi Jewish tradition as a ritual medical practice. Ancient Greeks believed the plant was created by Aphrodite. In one myth, the royal perfumer of Cyprus, Amaracus, was transformed into marjoram. To the Romans the herb was known as the herb of happiness, and was believed to increase lifespan. Marjoram is mentioned in De Materia Medica by Pedanius Dioscorides, and was used by Hippocrates as an antiseptic.
Description
Leaves are smooth, simple, petiolated, and ovate to oblong-ovate, with an obtuse apex, entire margin, symmetrical but tapering base, and reticulate venation. The texture of the leaf is extremely smooth due to the presence of numerous hairs.
Cultivation
Considered a tender perennial (USDA Zones 7–9), marjoram can sometimes prove hardy even in zone 5. Under proper conditions it spreads prolifically, and so is usually grown in pots to prevent it from taking over a garden.
Marjoram is cultivated for its aromatic leaves, either green or dry, for culinary purposes; the tops are cut as the plants begin to flower and are dried slowly in the shade. It is often used in herb combinations such as herbes de Provence and za'atar. The flowering leaves and tops of marjoram are steam-distilled to produce an essential oil that is yellowish (darkening to brown as it ages). It has many chemical components, some of which are borneol, camphor, and pinene.
Related species
Oregano (Origanum vulgare), sometimes listed with marjoram as O. majorana, is also called wild marjoram. It is a perennial common in southern Europe and north to Sweden, growing in dry copses and on hedge-banks, with many stout stems bearing short-stalked, somewhat ovate leaves and clusters of purple flowers. It has a stronger flavor than marjoram.
Pot marjoram or Cretan oregano (O. onites) has similar uses to marjoram.
Hardy marjoram or French/Italian/Sicilian marjoram (O. × majoricum), a cross of marjoram with oregano, is much more resistant to cold, but is slightly less sweet.
O. × hybridum is known as showy marjoram or showy oregano.
Uses
Marjoram is used for seasoning soups, stews, salad dressings, sauces, herbal teas and sausages.
Marjoram has long been used as a medicinal herb. Marjoram or marjoram oil has been used to treat cancer, colds, coughs, cramps, depression, ear infections, gastrointestinal problems, headaches, and paralysis, as well as arthritis, chest congestion, and muscle aches, and it has been used as a diuretic. It has also been used as an aphrodisiac, mouthwash, and tea, and in poultices, tinctures, and infusions. Though not all of its historic uses are scientifically backed, the plant has verifiable medical use. For example, it contains the phenol carvacrol, which is antibacterial, antifungal and antimicrobial. Ethanol extract is cytotoxic against fibrosarcoma cell lines, and ethyl acetate extract has antiproliferative properties against PER.C6 and HeLa cells, as do hesperetin and hydroquinone, which can be isolated from marjoram extract. Cardioprotective, hepatoprotective, antiulcerogenic, anticholinesterase, anti-polycystic ovary syndrome (PCOS), and anti-inflammatory effects have also been found in dried marjoram, marjoram tea, or compounds extracted from marjoram. Marjoram is generally not toxic, but should not be used by pregnant or lactating women. However, it is always important to be cautious and consult a doctor when using medicinal herbs.
Symbolism
It is used by the clown Lavatch in All's Well That Ends Well (IV.5) to describe Helena and his regret at her apparent death:
"she was the sweet marjoram of the salad, or rather, the herb of grace."
| Biology and health sciences | Herbs and spices | Plants |
18957 | https://en.wikipedia.org/wiki/Medicine | Medicine | Medicine is the science and practice of caring for patients, managing the diagnosis, prognosis, prevention, treatment, palliation of their injury or disease, and promoting their health. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others.
Medicine has been practiced since prehistoric times, and for most of this time it was an art (an area of creativity and skill), frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). For example, while stitching technique for sutures is an art learned through practice, knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science.
Prescientific forms of medicine, now known as traditional medicine or folk medicine, remain commonly used in the absence of scientific medicine and are thus called alternative medicine. Alternative treatments outside of scientific medicine with ethical, safety and efficacy concerns are termed quackery.
Etymology
Medicine is the science and practice of the diagnosis, prognosis, treatment, and prevention of disease. The word "medicine" is derived from Latin medicus, meaning "a physician". The word "physic" itself, from which "physician" derives, was the old word for what is now called a medicine, and also for the field of medicine.
Clinical practice
Medical availability and clinical practice vary across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing countries such as parts of Africa or Asia, the population may rely more heavily on traditional medicine with limited evidence and efficacy and no required formal training for practitioners.
In the developed world, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm.
In modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with an interaction with an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices (e.g., stethoscope, tongue depressor) are typically used. After examining for signs and interviewing for symptoms, the doctor may order medical tests (e.g., blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue.
The components of the medical interview and encounter are:
Chief complaint (CC): the reason for the current medical visit. These are the symptoms. They are in the patient's own words and are recorded along with the duration of each one. Also called chief concern or presenting complaint.
Current activity: occupation, hobbies, what the patient actually does.
Family history (FH): listing of diseases in the family that may impact the patient. A family tree is sometimes used.
History of present illness (HPI): the chronological order of events of symptoms and further clarification of each symptom. Distinguishable from history of previous illness, often called past medical history (PMH). Medical history comprises HPI and PMH.
Medications (Rx): what drugs the patient takes including prescribed, over-the-counter, and home remedies, as well as alternative and herbal medicines or remedies. Allergies are also recorded.
Past medical history (PMH/PMHx): concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies.
Review of systems (ROS) or systems inquiry: a set of additional questions to ask, which may be missed on HPI: a general enquiry (have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc.), followed by questions on the body's main organ systems (heart, lungs, digestive tract, urinary tract, etc.).
Social history (SH): birthplace, residences, marital history, social and economic status, habits (including diet, medications, tobacco, alcohol).
The physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. The healthcare provider uses sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments.
The clinical examination involves the study of:
Abdomen and rectum
Cardiovascular (heart and blood vessels)
General appearance of the patient and specific indicators of disease (nutritional status, presence of jaundice, pallor or clubbing)
Genitalia (and pregnancy if the patient is or could be pregnant)
Head, eye, ear, nose, and throat (HEENT)
Musculoskeletal (including spine and extremities)
Neurological (consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves)
Psychiatric (orientation, mental state, mood, evidence of abnormal perception or thought).
Respiratory (large airways and lungs)
Skin
Vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation
It is likely to focus on areas of interest highlighted in the medical history and may not include everything listed above.
The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. A follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of "utilization review", such as prior authorization of tests, may place barriers on accessing expensive services.
The medical decision-making (MDM) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient's problem.
On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations.
Institutions
Contemporary medicine is, in general, conducted within health care systems. Legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have a significant impact on the way medical care is provided.
From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system or compulsory private or cooperative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices, state-owned hospitals and clinics, or charities, most commonly a combination of all three.
Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those who can afford to pay for it, have self-insured it (either directly or as part of an employment contract), or may be covered by care financed directly by the government or tribe.
Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice of patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for its lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other.
The health professionals who provide care in medicine comprise multiple professions, such as medics, nurses, physiotherapists, and psychologists. These professions will have their own ethical standards, professional education, and bodies. The medical profession has been conceptualized from a sociological perspective.
Delivery
Provision of medical care is classified into primary, secondary, and tertiary care categories.
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes.
Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who required the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting.
Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc.
Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means.
In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain.
Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs.
Branches
Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include: nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physicists, surgeons, surgeon's assistant, surgical technologist.
The scope and sciences underpinning human medicine overlap many other fields. A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the cardiology team, who then may interact with other specialties, e.g., surgical, radiology, to help diagnose or treat the main problem or any subsequent complications/developments.
Physicians have many specializations and subspecializations into certain branches of medicine, which are listed below. There are variations from country to country regarding which specialties certain subspecialties are in.
The main branches of medicine are:
Basic sciences of medicine; this is what every physician is educated in, and some return to in biomedical research.
Interdisciplinary fields, where different medical specialties are mixed to function in certain occasions.
Medical specialties
Basic sciences
Anatomy is the study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures.
Biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components.
Biomechanics is the study of the structure and function of biological systems by means of the methods of Mechanics.
Biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems.
Biostatistics is the application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine.
Cytology is the microscopic study of individual cells.
Embryology is the study of the early development of organisms.
Endocrinology is the study of hormones and their effect throughout the body of animals.
Epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics.
Genetics is the study of genes, and their role in biological inheritance.
Gynecology is the study of the female reproductive system.
Histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry.
Immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example.
Lifestyle medicine is the study of chronic conditions and how to prevent, treat, and reverse them.
Medical physics is the study of the applications of physics principles in medicine.
Microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses.
Molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material.
Neuroscience includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord. Some related clinical specialties include neurology, neurosurgery and psychiatry.
Nutrition science (theoretical focus) and dietetics (practical focus) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases.
Pathology as a science is the study of disease: the causes, course, progression, and resolution thereof.
Pharmacology is the study of drugs and their actions.
Photobiology is the study of the interactions between non-ionizing radiation and living organisms.
Physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms.
Radiobiology is the study of the interactions between ionizing radiation and living organisms.
Toxicology is the study of hazardous effects of drugs and poisons.
Specialties
In the broadest meaning of "medicine", there are many different specialties. In the UK, most specialities have their own body or college, which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term "Royal". The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination.
Within medical circles, specialities usually fit into one of two broad categories: "Medicine" and "Surgery". "Medicine" refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in Internal Medicine. In the UK, this was traditionally evidenced by passing the examination for the Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. "Surgery" refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in General Surgery, which in the UK leads to membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anesthesia. Most of these have branched from one or other of the two camps above; for example anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists and membership of the college is attained by sitting for the examination of the Fellowship of the Royal College of Anesthetists (FRCA).
Surgical specialty
Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as disease or injury, to help improve bodily function or appearance or to repair unwanted ruptured areas (for example, a perforated ear drum). Surgeons must also manage pre-operative, post-operative, and potential surgical candidates on the hospital wards. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se.
Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming.
Surgical subspecialties include those a physician may specialize in after undergoing general surgery residency training as well as several surgical fields with separate residency training. Surgical subspecialties that one may pursue following general surgery residency training:
Bariatric surgery
Cardiovascular surgery – may also be pursued through a separate cardiovascular surgery residency track
Colorectal surgery
Endocrine surgery
General surgery
Hand surgery
Hepatico-Pancreatico-Biliary Surgery
Minimally invasive surgery
Pediatric surgery
Plastic surgery – may also be pursued through a separate plastic surgery residency track
Surgical critical care
Surgical oncology
Transplant surgery
Trauma surgery
Vascular surgery – may also be pursued through a separate vascular surgery residency track
Other surgical specialties within medicine with their own individual residency training:
Dermatology
Neurosurgery
Ophthalmology
Oral and maxillofacial surgery
Orthopedic surgery
Otorhinolaryngology
Podiatric surgery – podiatric surgeons do not undergo medical school training, but rather separate training in podiatry school
Urology
Internal medicine specialty
Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. According to some sources, an emphasis on internal structures is implied. In North America, specialists in internal medicine are commonly called "internists". Elsewhere, especially in Commonwealth nations, such specialists are often called physicians. These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities.
Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem; this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys.
In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of primary care.
There are many subspecialities (or subdisciplines) of internal medicine:
Angiology/Vascular Medicine
Bariatrics
Cardiology
Critical care medicine
Endocrinology
Gastroenterology
Geriatrics
Hematology
Hepatology
Infectious disease
Nephrology
Neurology
Oncology
Pediatrics
Pulmonology/Pneumology/Respirology/chest medicine
Rheumatology
Sports Medicine
Training in internal medicine (as opposed to surgical training), varies considerably across the world: see the articles on medical education for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the US. This difference does not apply in the UK where all doctors are now required by law to work less than 48 hours per week on average.
Diagnostic specialties
Clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. In the United States, these services are supervised by a pathologist. The personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. Subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology.
Clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. These kinds of tests can be divided into recordings of: (1) spontaneous or continuously running electrical activity, or (2) stimulus evoked responses. Subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. Sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional.
Diagnostic radiology is concerned with imaging of the body, e.g. by x-rays, x-ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. Interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling.
Nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances (radiopharmaceuticals) to the body, which can then be imaged outside the body by a gamma camera or a PET scanner. Each radiopharmaceutical consists of two parts: a tracer that is specific for the function under study (e.g., neurotransmitter pathway, metabolic pathway, blood flow, or other), and a radionuclide (usually either a gamma-emitter or a positron emitter). There is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the PET/CT scanner.
Pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. As a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence-based medicine. Many modern molecular tests such as flow cytometry, polymerase chain reaction (PCR), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization (FISH) fall within the territory of pathology.
Other major specialties
The following are some major medical specialties that do not directly fit into any of the above-mentioned groups:
Anesthesiology (also known as anaesthetics): concerned with the perioperative management of the surgical patient. The anesthesiologist's role during surgery is to prevent derangement in the vital organs' (i.e. brain, heart, kidneys) functions and postoperative pain. Outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine.
Emergency medicine is concerned with the diagnosis and treatment of acute or life-threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies.
Family medicine, family practice, general practice or primary care is, in many countries, the first port-of-call for patients with non-emergency medical problems. Family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care.
Medical genetics is concerned with the diagnosis and management of hereditary disorders.
Neurology is concerned with diseases of the nervous system. In the UK, neurology is a subspecialty of general medicine.
Obstetrics and gynecology (often abbreviated as OB/GYN (American English) or Obs & Gynae (British English)) are concerned respectively with childbirth and the female reproductive and associated organs. Reproductive medicine and fertility medicine are generally practiced by gynecological specialists.
Pediatrics (AE) or paediatrics (BE) is devoted to the care of infants, children, and adolescents. Like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery.
Pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health.
Physical medicine and rehabilitation (or physiatry) is concerned with functional improvement after injury, illness, or congenital disorders.
Podiatric medicine is the study of, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Preventive medicine is the branch of medicine concerned with preventing disease.
Community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis.
Psychiatry is the branch of medicine concerned with the bio-psycho-social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. Related fields include psychotherapy and clinical psychology.
Interdisciplinary fields
Some interdisciplinary sub-specialties of medicine include:
Addiction medicine deals with the treatment of addiction.
Aerospace medicine deals with medical problems related to flying and space travel.
Biomedical Engineering is a field dealing with the application of engineering principles to medical practice.
Clinical pharmacology is concerned with how systems of therapeutics interact with patients.
Conservation medicine studies the relationship between human and non-human animal health, and environmental conditions. Also known as ecological medicine, environmental medicine, or medical geology.
Disaster medicine deals with medical aspects of emergency preparedness, disaster mitigation and management.
Diving medicine (or hyperbaric medicine) is the prevention and treatment of diving-related problems.
Evolutionary medicine is a perspective on medicine derived through applying evolutionary theory.
Forensic medicine deals with medical questions in legal context, such as determination of the time and cause of death, type of weapon used to inflict trauma, reconstruction of the facial features using remains of deceased (skull) thus aiding identification.
Gender-based medicine studies the biological and physiological differences between the human sexes and how that affects differences in disease.
Health informatics is a relatively recent field that deals with the application of computers and information technology to medicine.
Hospice and Palliative Medicine is a relatively modern branch of clinical medicine that deals with pain and symptom relief and emotional support in patients with terminal illnesses including cancer and heart failure.
Hospital medicine is the general medical care of hospitalized patients. Physicians whose primary professional focus is hospital medicine are called hospitalists in the United States and Canada. The term Most Responsible Physician (MRP) or attending physician is also used interchangeably to describe this role.
Laser medicine involves the use of lasers in the diagnostics or treatment of various conditions.
Many other health science fields, e.g. dietetics
Medical ethics deals with ethical and moral principles that apply values and judgments to the practice of medicine.
Medical humanities includes the humanities (literature, philosophy, ethics, history and religion), social science (anthropology, cultural studies, psychology, sociology), and the arts (literature, theater, film, and visual arts) and their application to medical education and practice.
Nosokinetics is the science/subject of measuring and modelling the process of care in health and social care systems.
Nosology is the classification of diseases for various purposes.
Occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained.
Pain management (also called pain medicine, or algiatry) is the medical discipline concerned with the relief of pain.
Pharmacogenomics is a form of individualized medicine.
Podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality.
Sports medicine deals with the treatment and prevention and rehabilitation of sports/exercise injuries such as muscle spasms, muscle tears, injuries to ligaments (ligament tears or ruptures) and their repair in athletes, amateur and professional.
Therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health.
Travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments.
Tropical medicine deals with the prevention and treatment of tropical diseases. It is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs.
Urgent care focuses on delivery of unscheduled, walk-in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. In some jurisdictions this function is combined with the emergency department.
Veterinary medicine; veterinarians apply similar techniques as physicians to the care of non-human animals.
Wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available.
Education and legal controls
Medical education and training varies around the world. It typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. This can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university.
Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. A database of objectives covering medical knowledge, as suggested by national societies across the United States, can be searched at http://data.medobjectives.marian.edu/ .
In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians who are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans who practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in "evidence-based", Western, or Hippocratic Medicine, they are not intended to discourage different paradigms of health.
In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise is subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries and Switzerland. This list is covered by the Directive 2005/36/EC.
Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions.
Medical ethics
Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are:
autonomy – the patient has the right to refuse or choose their treatment.
beneficence – a practitioner should act in the best interest of the patient.
justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment (fairness and equality).
non-maleficence – "first, do no harm" ().
respect for persons – the patient (and the person treating the patient) have the right to be treated with dignity.
truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the Doctors' Trial of the Nuremberg trials, Tuskegee syphilis experiment, and others.
Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions that clinicians consider life-saving; and truth-telling was not emphasized to a large extent before the HIV era.
History
Ancient world
Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits); shamanism (the vesting of an individual with mystic powers); and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues.
The earliest known medical texts in the world were found in the ancient Syrian city of Ebla and date back to 2500 BCE. Other early records on medicine have been discovered from ancient Egyptian medicine, Babylonian medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (predecessor to the modern traditional Chinese medicine), and ancient Greek and Roman medicine.
In Egypt, Imhotep (3rd millennium BCE) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus dating back to 1600 BCE is an early work on surgery, while the Ebers Papyrus dating back to 1500 BCE is akin to a textbook on medicine.
In China, archaeological evidence of medicine dates back to the Bronze Age Shang dynasty, based on seeds for herbalism and tools presumed to have been used for surgery. The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century.
In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery. The earliest records of dedicated hospitals come from Mihintale in Sri Lanka, where evidence of dedicated medicinal treatment facilities for patients is found.
In Greece, the ancient Greek physician Hippocrates, the "father of modern medicine", laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic and epidemic, and use terms such as, "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence". The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire.
Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e. the Five Books of Moses, which contain various health related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine Era, with the physician Asaph the Jew.
Middle Ages
The concept of hospital as institution to offer medical care and possibility of a cure for the patients due to the ideals of Christian charity, rather than just merely a place to die, appeared in the Byzantine Empire.
Although the concept of uroscopy was known to Galen, he did not see the importance of using it to localize the disease. It was under the Byzantines, with physicians such as Theophilus Protospatharius, that the potential of uroscopy to determine disease was realized, in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe.
After 750 CE, the Muslim world had the works of Hippocrates, Galen and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath, Avicenna, who, along with Imhotep and Hippocrates, has also been called the "father of medicine". He wrote The Canon of Medicine which became a standard medical text at many medieval European universities, considered one of the most famous books in the history of medicine. Others include Abulcasis, Avenzoar, Ibn al-Nafis, and Averroes. Persian physician Rhazes was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of Rhazes's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light. The Persian Bimaristan hospitals were an early example of public hospitals.
In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery, and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and through its large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in their monasteries, growing medical herbs and becoming the chief medical care givers of their districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, looking to the learning of Greek and Arab physicians, grew to be the finest medical school in Medieval Europe.
However, the fourteenth and fifteenth century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East. In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey.
The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the "traditional authority" approach to science and medicine. This was the notion that because some prominent person in the past said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (which was paralleled by a similar shift in European society in general – see Copernicus's rejection of Ptolemy's theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories from the past. The main tomes used both by medical students and expert physicians were Materia Medica and Pharmacopoeia.
Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy. Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and his textbook Institutiones medicae (1708). Pierre Fauchard has been called "the father of modern dentistry".
Modern
Veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world's first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals.
Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek "four humours" and other such pre-modern notions. The modern era really began with Edward Jenner's discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of variolation originated in ancient China), Robert Koch's discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics around 1900.
The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience.
From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet.
Others that did significant work include William Williams Keen, William Coley, James D. Watson (United States); Salvador Luria (Italy); Alexandre Yersin (Switzerland); Kitasato Shibasaburō (Japan); Jean-Martin Charcot, Claude Bernard, Paul Broca (France); Adolfo Lutz (Brazil); Nikolai Korotkov (Russia); Sir William Osler (Canada); and Harvey Cushing (United States).
As science and technology developed, medicine became more reliant upon medications. Throughout history and in Europe right until the late 18th century, not only plant products were used as medicine, but also animal (including human) body parts and fluids. Pharmacology developed in part from herbalism and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.). Vaccines were discovered by Edward Jenner and Louis Pasteur.
The first antibiotic was arsphenamine (Salvarsan) discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes.
Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. Genomics and knowledge of human genetics and human evolution are having an increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics is influencing medical technology, practice and decision-making.
Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded there was insufficient evidence, 20% concluded there was evidence of no effect, and 22.5% concluded there was evidence of a positive effect.
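To make those proportions concrete, the percentages can be back-calculated into approximate numbers of reviews; the short sketch below is illustrative only, is not part of the cited review, and uses nothing beyond the 160-review total and the three percentages quoted above.

    # Convert the quoted percentages into approximate counts out of 160 reviews.
    total_reviews = 160
    findings = {
        "insufficient evidence": 21.3,
        "evidence of no effect": 20.0,
        "evidence of positive effect": 22.5,
    }
    for conclusion, percent in findings.items():
        count = round(total_reviews * percent / 100)
        print(f"{conclusion}: about {count} of {total_reviews} reviews ({percent}%)")
    # The three categories together cover only about 64% of the reviews; the
    # remainder reached conclusions not quoted in the text above.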
Quality, efficiency, and access
Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are priorities in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly but where population health metrics lag behind those of similar nations.
Globally, many developing countries lack access to care and access to medicines. Most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States, where lack of health insurance coverage may limit access.
Flowering plant
Flowering plants are plants that bear flowers and fruits, and form the clade Angiospermae. The term 'angiosperm' is derived from the Greek words ἀγγεῖον ('container, vessel') and σπέρμα ('seed'), meaning that the seeds are enclosed within a fruit. The group was formerly called Magnoliophyta.
Angiosperms are by far the most diverse group of land plants with 64 orders, 416 families, approximately 13,000 known genera and 300,000 known species. They include all forbs (flowering plants without a woody stem), grasses and grass-like plants, a vast majority of broad-leaved trees, shrubs and vines, and most aquatic plants. Angiosperms are distinguished from the other major seed plant clade, the gymnosperms, by having flowers, xylem consisting of vessel elements instead of tracheids, endosperm within their seeds, and fruits that completely envelop the seeds. The ancestors of flowering plants diverged from the common ancestor of all living gymnosperms before the end of the Carboniferous, over 300 million years ago. In the Cretaceous, angiosperms diversified explosively, becoming the dominant group of plants across the planet.
Agriculture is almost entirely dependent on angiosperms, and a small number of flowering plant families supply nearly all plant-based food and livestock feed. Rice, maize and wheat provide half of the world's staple calorie intake, and all three plants are cereals from the Poaceae family (colloquially known as grasses). Other families provide important industrial plant products such as wood, paper and cotton, and supply numerous ingredients for beverages, sugar production, traditional medicine and modern pharmaceuticals. Flowering plants are also commonly grown for decorative purposes, with certain flowers playing significant cultural roles in many societies.
Out of the "Big Five" extinction events in Earth's history, only the Cretaceous–Paleogene extinction event had occurred while angiosperms dominated plant life on the planet. Today, the Holocene extinction affects all kingdoms of complex life on Earth, and conservation measures are necessary to protect plants in their habitats in the wild (in situ), or failing that, ex situ in seed banks or artificial habitats like botanic gardens. Otherwise, around 40% of plant species may become extinct due to human actions such as habitat destruction, introduction of invasive species, unsustainable logging, land clearing and overharvesting of medicinal or ornamental plants. Further, climate change is starting to impact plants and is likely to cause many species to become extinct by 2100.
Distinguishing features
Angiosperms are terrestrial vascular plants; like the gymnosperms, they have roots, stems, leaves, and seeds. They differ from other seed plants in several ways.
Diversity
Ecological diversity
The largest angiosperms are Eucalyptus gum trees of Australia, and Shorea faguetiana, dipterocarp rainforest trees of Southeast Asia, both of which can reach almost in height. The smallest are Wolffia duckweeds which float on freshwater, each plant less than across.
Considering their method of obtaining energy, some 99% of flowering plants are photosynthetic autotrophs, deriving their energy from sunlight and using it to create molecules such as sugars. The remainder are parasitic, whether on fungi like the orchids for part or all of their life-cycle, or on other plants, either wholly like the broomrapes, Orobanche, or partially like the witchweeds, Striga.
In terms of their environment, flowering plants are cosmopolitan, occupying a wide range of habitats on land, in fresh water and in the sea. On land, they are the dominant plant group in every habitat except for frigid moss-lichen tundra and coniferous forest. The seagrasses in the Alismatales grow in marine environments, spreading with rhizomes that grow through the mud in sheltered coastal waters.
Some specialised angiosperms are able to flourish in extremely acid or alkaline habitats. The sundews, many of which live in nutrient-poor acid bogs, are carnivorous plants, able to derive nutrients such as nitrate from the bodies of trapped insects. Other flowers such as Gentiana verna, the spring gentian, are adapted to the alkaline conditions found on calcium-rich chalk and limestone, which give rise to often dry topographies such as limestone pavement.
As for their growth habit, the flowering plants range from small, soft herbaceous plants, often living as annuals or biennials that set seed and die after one growing season, to large perennial woody trees that may live for many centuries and grow to many metres in height. Some species grow tall without being self-supporting like trees by climbing on other plants in the manner of vines or lianas.
Taxonomic diversity
The number of species of flowering plants is estimated to be in the range of 250,000 to 400,000. This compares to around 12,000 species of moss and 11,000 species of pteridophytes. The APG system seeks to determine the number of families, mostly by molecular phylogenetics. In the 2009 APG III there were 415 families. The 2016 APG IV added five new orders (Boraginales, Dilleniales, Icacinales, Metteniusales and Vahliales), along with some new families, for a total of 64 angiosperm orders and 416 families.
The diversity of flowering plants is not evenly distributed. Nearly all species belong to the eudicot (75%), monocot (23%), and magnoliid (2%) clades. The remaining five clades contain a little over 250 species in total; i.e. less than 0.1% of flowering plant diversity, divided among nine families. The 25 most species-rich of the 443 families contain over 166,000 species between them in their APG circumscriptions.
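For illustration, the share of species in each major clade can be worked out from the figures above; the sketch below is a rough back-of-the-envelope calculation that assumes a total of about 300,000 known species (the estimate quoted earlier in this article) and is not an authoritative count.

    # Rough species counts per clade, using the percentages quoted above and an
    # assumed total of about 300,000 flowering-plant species.
    total_species = 300_000
    clade_shares = {"eudicots": 0.75, "monocots": 0.23, "magnoliids": 0.02}
    for clade, share in clade_shares.items():
        print(f"{clade}: roughly {int(total_species * share):,} species ({share:.0%})")
    remaining = 250  # "a little over 250 species" in the five smallest clades
    print(f"remaining five clades: about {remaining} species "
          f"({remaining / total_species:.3%} of the total)")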
Evolution
History of classification
The botanical term "angiosperm", from Greek words ( 'bottle, vessel') and ( 'seed'), was coined in the form "Angiospermae" by Paul Hermann in 1690, including only flowering plants whose seeds were enclosed in capsules. The term angiosperm fundamentally changed in meaning in 1827 with Robert Brown, when angiosperm came to mean a seed plant with enclosed ovules. In 1851, with Wilhelm Hofmeister's work on embryo-sacs, Angiosperm came to have its modern meaning of all the flowering plants including Dicotyledons and Monocotyledons. The APG system treats the flowering plants as an unranked clade without a formal Latin name (angiosperms). A formal classification was published alongside the 2009 revision in which the flowering plants rank as the subclass Magnoliidae. From 1998, the Angiosperm Phylogeny Group (APG) has reclassified the angiosperms, with updates in the APG II system in 2003, the APG III system in 2009, and the APG IV system in 2016.
Phylogeny
External
In 2019, a molecular phylogeny of plants placed the flowering plants in their evolutionary context.
Internal
The main groups of living angiosperms are:
In 2024, Alexandre R. Zuntini and colleagues constructed a tree of some 6,000 flowering plant genera, representing some 60% of the existing genera, on the basis of analysis of 353 nuclear genes in each specimen. Much of the existing phylogeny is confirmed; the rosid phylogeny is revised.
Fossil history
Fossilised spores suggest that land plants (embryophytes) have existed for at least 475 million years. However, angiosperms appear suddenly and in great diversity in the fossil record in the Early Cretaceous (~130 mya). Claimed records of flowering plants prior to this are not widely accepted. Molecular evidence suggests that the ancestors of angiosperms diverged from the gymnosperms during the late Devonian, about 365 million years ago. The origin time of the crown group of flowering plants remains contentious. By the Late Cretaceous, angiosperms appear to have dominated environments formerly occupied by ferns and gymnosperms. Large canopy-forming trees replaced conifers as the dominant trees close to the end of the Cretaceous, 66 million years ago. The radiation of herbaceous angiosperms occurred much later.
Reproduction
Flowers
The characteristic feature of angiosperms is the flower. Its function is to ensure fertilization of the ovule and development of fruit containing seeds. It may arise terminally on a shoot or from the axil of a leaf. The flower-bearing part of the plant is usually sharply distinguished from the leaf-bearing part, and forms a branch-system called an inflorescence.
Flowers produce two kinds of reproductive cells. Microspores, which divide to become pollen grains, are the male cells; they are borne in the stamens. The female cells, megaspores, divide to become the egg cell. They are contained in the ovule and enclosed in the carpel; one or more carpels form the pistil.
The flower may consist only of these parts, as in wind-pollinated plants like the willow, where each flower comprises only a few stamens or two carpels. In insect- or bird-pollinated plants, other structures protect the sporophylls and attract pollinators. The individual members of these surrounding structures are known as sepals and petals (or tepals in flowers such as Magnolia where sepals and petals are not distinguishable from each other). The outer series (calyx of sepals) is usually green and leaf-like, and functions to protect the rest of the flower, especially the bud. The inner series (corolla of petals) is, in general, white or brightly colored, is more delicate in structure, and attracts pollinators by colour, scent, and nectar.
Most flowers are hermaphroditic, producing both pollen and ovules in the same flower, but some use other devices to reduce self-fertilization. Heteromorphic flowers have carpels and stamens of differing lengths, so animal pollinators cannot easily transfer pollen between them. Homomorphic flowers may use a biochemical self-incompatibility to discriminate between self and non-self pollen grains. Dioecious plants such as holly have male and female flowers on separate plants. Monoecious plants have separate male and female flowers on the same plant; these are often wind-pollinated, as in maize, but include some insect-pollinated plants such as Cucurbita squashes.
Fertilisation and embryogenesis
Double fertilization requires two sperm cells to fertilise cells in the ovule. A pollen grain sticks to the stigma at the top of the pistil, germinates, and grows a long pollen tube. A haploid generative cell travels down the tube behind the tube nucleus. The generative cell divides by mitosis to produce two haploid (n) sperm cells. The pollen tube grows from the stigma, down the style and into the ovary. When it reaches the micropyle of the ovule, it digests its way into one of the synergids, releasing its contents including the sperm cells. The synergid that the cells were released into degenerates; one sperm makes its way to fertilise the egg cell, producing a diploid (2n) zygote. The second sperm cell fuses with both central cell nuclei, producing a triploid (3n) cell. The zygote develops into an embryo; the triploid cell develops into the endosperm, the embryo's food supply. The ovary develops into a fruit, and each ovule into a seed.
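The ploidy bookkeeping of double fertilization can be summarised in a short sketch; it is purely illustrative, counting chromosome sets (n) rather than modelling any real data.

    # Ploidy arithmetic of double fertilization, in units of chromosome sets (n).
    sperm = 1                 # each sperm cell is haploid (n)
    egg_cell = 1              # the egg cell is haploid (n)
    central_cell_nuclei = 2   # the central cell carries two haploid nuclei (n + n)

    zygote = sperm + egg_cell                # one sperm + egg -> diploid (2n) zygote
    endosperm = sperm + central_cell_nuclei  # second sperm + central cell -> triploid (3n)

    print(f"zygote: {zygote}n (develops into the embryo)")
    print(f"endosperm: {endosperm}n (the embryo's food supply)")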
Fruit and seed
As the embryo and endosperm develop, the wall of the embryo sac enlarges and combines with the nucellus and integument to form the seed coat. The ovary wall develops to form the fruit or pericarp, whose form is closely associated with type of seed dispersal system.
Other parts of the flower often contribute to forming the fruit. For example, in the apple, the hypanthium forms the edible flesh, surrounding the ovaries which form the tough cases around the seeds.
Apomixis, setting seed without fertilization, is found naturally in about 2.2% of angiosperm genera. Some angiosperms, including many citrus varieties, are able to produce fruits through a type of apomixis called nucellar embryony.
Sexual selection
Adaptive function of flowers
Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom in the initial paragraph of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." Flowers emerged in plant evolution as an adaptation for the promotion of cross-fertilisation (outcrossing), a process that allows the masking of deleterious mutations in the genome of progeny. The masking effect is known as genetic complementation. Meiosis in flowering plants provides a direct mechanism for repairing DNA through genetic recombination in reproductive tissues. Sexual reproduction appears to be required for maintaining long-term genomic integrity and only infrequent combinations of extrinsic and intrinsic factors permit shifts to asexuality. Thus the two fundamental aspects of sexual reproduction in flowering plants, cross-fertilization (outcrossing) and meiosis appear to be maintained respectively by the advantages of genetic complementation and recombinational repair.
Human uses
Practical uses
Agriculture is almost entirely dependent on angiosperms, which provide virtually all plant-based food and livestock feed. Much of this food derives from a small number of flowering plant families. For instance, half of the world's calorie intake is supplied by just three plants – wheat, rice and maize.
Flowering plants provide a diverse range of materials in the form of wood, paper, fibers such as cotton, flax, and hemp, medicines such as digoxin and opioids, and decorative and landscaping plants. Coffee and hot chocolate are beverages from flowering plants (in the Rubiaceae and Malvaceae respectively).
Cultural uses
Both real and fictitious plants play a wide variety of roles in literature and film. Flowers are the subjects of many poems by poets such as William Blake, Robert Frost, and Rabindranath Tagore. Bird-and-flower painting is a kind of Chinese painting that celebrates the beauty of flowering plants. Flowers have been used in literature to convey meaning by authors including William Shakespeare.
Flowers are used in a variety of art forms which arrange cut or living plants, such as bonsai, ikebana, and flower arranging. Ornamental plants have sometimes changed the course of history, as in tulipomania. Many countries and regions have floral emblems; a survey of 70 of these found that the most popular flowering plant family for such emblems is Orchidaceae at 15.7% (11 emblems), followed by Fabaceae at 10% (7 emblems), and Asparagaceae, Asteraceae, and Rosaceae all at 5.7% (4 emblems each).
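The percentages quoted for the floral-emblem survey follow directly from the underlying counts; the sketch below simply recomputes them from the 70-emblem sample described above and is illustrative only.

    # Recompute the emblem percentages from the counts in the 70-emblem survey.
    surveyed_emblems = 70
    emblems_per_family = {
        "Orchidaceae": 11,
        "Fabaceae": 7,
        "Asparagaceae": 4,
        "Asteraceae": 4,
        "Rosaceae": 4,
    }
    for family, count in emblems_per_family.items():
        print(f"{family}: {count}/{surveyed_emblems} = {count / surveyed_emblems:.1%}")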
Conservation
Human impact on the environment has driven a range of species extinct and is threatening even more today. Multiple organizations such as IUCN and Royal Botanic Gardens, Kew suggest that around 40% of plant species are threatened with extinction. The majority are threatened by habitat loss, but activities such as logging of wild timber trees and collection of medicinal plants, or the introduction of non-native invasive species, also play a role.
Relatively few plant diversity assessments currently consider climate change, yet it is starting to impact plants as well. About 3% of flowering plants are very likely to be driven extinct within a century at of global warming, and 10% at . In worst-case scenarios, half of all tree species may be driven extinct by climate change over that timeframe.
Conservation in this context is the attempt to prevent extinction, whether in situ by protecting plants and their habitats in the wild, or ex situ in seed banks or as living plants. Some 3000 botanic gardens around the world maintain living plants, including over 40% of the species known to be threatened, as an "insurance policy against extinction in the wild." The United Nations' Global Strategy for Plant Conservation asserts that "without plants, there is no life". It aims to "halt the continuing loss of plant diversity" throughout the world.
Malpighiales
The Malpighiales comprise one of the largest orders of flowering plants. The order is very diverse, with well-known members including willows, violets, aspens and poplars, poinsettia, corpse flower, coca plant, cassava, flaxseed, castor bean, Saint John's wort, passionfruit, mangosteen, and manchineel tree.
The order is not part of any of the classification systems based only on plant morphology and the relationships of its diverse members can be hard to recognize except with molecular phylogenetic evidence. Molecular clock calculations estimate the origin of stem group Malpighiales at around 100 million years ago (Mya) and the origin of crown group Malpighiales at about 90 Mya.
The Malpighiales contain about 36 families and more than species, about 7.8% of the eudicots.
Taxonomy
The Malpighiales include the following 36 families, according to the APG IV system of classification:
Achariaceae
Balanopaceae
Bonnetiaceae
Calophyllaceae
Caryocaraceae
Centroplacaceae
Chrysobalanaceae
Clusiaceae
Ctenolophonaceae
Dichapetalaceae
Elatinaceae
Erythroxylaceae
Euphorbiaceae
Euphroniaceae
Goupiaceae
Humiriaceae
Hypericaceae
Irvingiaceae
Ixonanthaceae
Lacistemataceae
Linaceae
Lophopyxidaceae
Malpighiaceae
Ochnaceae
Pandaceae
Passifloraceae
Peraceae
Phyllanthaceae
Picrodendraceae
Podostemaceae
Putranjivaceae
Rafflesiaceae
Rhizophoraceae
Salicaceae
Trigoniaceae
Violaceae
In the APG III system, 35 families were recognized. Medusagynaceae, Quiinaceae, Peraceae, Malesherbiaceae, Turneraceae, Samydaceae, and Scyphostegiaceae were consolidated into other families. The largest family, by far, is the Euphorbiaceae, with about 6300 species in about 245 genera. Changes made in the Angiosperm Phylogeny Group (APG) classification of 2016 (APG IV) were the inclusion of Irvingiaceae, Peraceae, Euphorbiaceae and Ixonanthaceae, together with the transfer of the COM clade from the fabids (rosid I) to the malvids (rosid II).
Phylogeny
The phylogenetic tree shown below is from Xi et al. (2012). The study presented a more resolved phylogenetic tree than previous studies through the use of data from a large number of genes. They included analyses of 82 plastid genes from 58 species (ignoring the problematic Rafflesiaceae), using partitions identified a posteriori by applying a Bayesian mixture model. Xi et al. identified 12 additional clades and three major, basal clades.
2009 (Older)
The older phylogenetic tree shown below is from Wurdack and Davis (2009). Using the DNA sequences of 13 genes, 42 families were placed into 16 groups, ranging in size from one to 10 families. The relationships among these 16 groups were poorly resolved. The statistical support for each branch is 100% bootstrap percentage and 100% posterior probability, except where labeled, with bootstrap percentage followed by posterior probability.
Circumscription
Malpighiales is monophyletic and in molecular phylogenetic studies, it receives strong statistical support. Since the APG II system was published in 2003, minor changes to the circumscription of the order have been made. The family Peridiscaceae has been expanded from two genera to three, and then to four, and transferred to Saxifragales.
The genera Cyrillopsis (Ixonanthaceae), Centroplacus (Centroplacaceae), Bhesa (Centroplacaceae), Aneulophus (Erythroxylaceae), Ploiarium (Bonnetiaceae), Trichostephanus (Samydaceae), Sapria (Rafflesiaceae), Rhizanthes (Rafflesiaceae), and Rafflesia (Rafflesiaceae) had been either added or confirmed as members of Malpighiales by the end of 2009.
Some family delimitations within the order have changed, as well, most notably, the segregation of Calophyllaceae from Clusiaceae sensu lato when it was shown that the latter is paraphyletic. Some differences of opinion on family delimitation exist, as well. For example, Samydaceae and Scyphostegiaceae may be recognized as families or included in a large version of Salicaceae.
The group is difficult to characterize phenotypically, due to sheer morphological diversity, ranging from tropical holoparasites with giant flowers, such as Rafflesia, to temperate trees and herbs with tiny, simple flowers, such as Salix. Members often have dentate leaves, with the teeth having a single vein running into a congested and often deciduous apex (i.e., violoid, salicoid, or theoid). Also, zeylanol has recently been discovered in Balanops and Dichapetalum which are in the balanops clade (so-called Chrysobalanaceae s. l.). The so-called parietal suborder (the clusioid clade and Ochnaceae s. l. were also part of Parietales) corresponds with the traditional Violales: eight of that order's 10 families (Achariaceae, Violaceae, Flacourtiaceae, Lacistemataceae, Scyphostegiaceae, Turneraceae, Malesherbiaceae, and Passifloraceae), along with Salicaceae, which had usually been assigned to a related order or suborder, are in this most derived malpighian suborder, so that eight of its 10 families are Violales. The family Flacourtiaceae has proven to be polyphyletic as the cyanogenic members have been placed in Achariaceae and the ones with salicoid teeth were transferred to Salicaceae. Scyphostegiaceae, consisting of the single genus Scyphostegia, has been merged into Salicaceae.
Affinities
Malpighiales is a member of a supraordinal group called the COM clade, which consists of the orders Celastrales, Oxalidales, and Malpighiales. Some describe it as containing a fourth order, Huales, separating the family Huaceae into its own order, separate from Oxalidales.
Some recent studies have placed Malpighiales as sister to Oxalidales sensu lato (including Huaceae), while others have found a different topology for the COM clade.
The COM clade is part of an unranked group known as malvids (rosid II), though formally placed in Fabidae (rosid I). These in turn are part of a group that has long been recognized, namely, the rosids.
History
The French botanist Charles Plumier named the genus Malpighia in honor of Marcello Malpighi's work on plants; Malpighia is the type genus for the Malpighiaceae, a family of tropical and subtropical flowering plants.
The family Malpighiaceae was the type family for one of the orders created by Jussieu in his 1789 work Genera Plantarum. Friedrich von Berchtold and Jan Presl described such an order in 1820. Unlike modern taxonomists, these authors did not use the suffix "ales" in naming their orders. The name "Malpighiales" is attributed by some to Carl von Martius. In the 20th century, it was usually associated with John Hutchinson, who used it in all three editions of his book, The Families of Flowering Plants. The name was not used by those who wrote later, in the 1970s, '80s, and '90s.
The taxon was largely presaged by Hans Hallier in 1912 in an article in the Archiv. Néerl. Sci. Exact. Nat. titled "L'Origine et le système phylétique des angiospermes", in which his Passionales and Polygalinae were derived from Linaceae (in Guttales), with Passionales containing seven (of eight) families that also appear in the current Malpighiales, namely Passifloraceae, Salicaceae, Euphorbiaceae, Achariaceae, Flacourtiaceae, Malesherbiaceae, and Turneraceae, and Polygalinae containing four (of 10) families that also appear in the current Malpighiales, namely Malpighiaceae, Violaceae, Dichapetalaceae, and Trigoniaceae.
The molecular phylogenetic revolution led to a major restructuring of the order. The first semblance of Malpighiales as now known came from a phylogeny of seed plants published in 1993 and based upon DNA sequences of the gene rbcL. This study recovered a group of rosids unlike any group found in any previous system of plant classification. To make a clear break with classification systems being used at that time, the Angiosperm Phylogeny Group resurrected Hutchinson's name, though his concept of Malpighiales included much of what is now in Celastrales and Oxalidales.
Gallery of type genera
"Litoh family" is a common name for Ctenolophonaceae, and "koteb family" for Lophopyxidaceae.
Meiosis
Meiosis (from the Greek for 'lessening', since it is a reductional division) is a special type of cell division of germ cells in sexually-reproducing organisms that produces the gametes, the sperm or egg cells. It involves two rounds of division that ultimately result in four cells, each with only one copy of each chromosome (haploid). Additionally, prior to the division, genetic material from the paternal and maternal copies of each chromosome is crossed over, creating new combinations of code on each chromosome. Later on, during fertilisation, the haploid cells produced by meiosis from a male and a female will fuse to create a zygote, a cell with two copies of each chromosome again.
Errors in meiosis resulting in aneuploidy (an abnormal number of chromosomes) are the leading known cause of miscarriage and the most frequent genetic cause of developmental disabilities.
In meiosis, DNA replication is followed by two rounds of cell division to produce four daughter cells, each with half the number of chromosomes as the original parent cell. The two meiotic divisions are known as meiosis I and meiosis II. Before meiosis begins, during S phase of the cell cycle, the DNA of each chromosome is replicated so that it consists of two identical sister chromatids, which remain held together through sister chromatid cohesion. This S-phase can be referred to as "premeiotic S-phase" or "meiotic S-phase". Immediately following DNA replication, meiotic cells enter a prolonged G2-like stage known as meiotic prophase. During this time, homologous chromosomes pair with each other and undergo genetic recombination, a programmed process in which DNA may be cut and then repaired, which allows them to exchange some of their genetic information. A subset of recombination events results in crossovers, which create physical links known as chiasmata (singular: chiasma, for the Greek letter Chi, Χ) between the homologous chromosomes. In most organisms, these links can help direct each pair of homologous chromosomes to segregate away from each other during meiosis I, resulting in two haploid cells that have half the number of chromosomes as the parent cell.
During meiosis II, the cohesion between sister chromatids is released and they segregate from one another, as during mitosis. In some cases, all four of the meiotic products form gametes such as sperm, spores or pollen. In female animals, three of the four meiotic products are typically eliminated by extrusion into polar bodies, and only one cell develops to produce an ovum. Because the number of chromosomes is halved during meiosis, gametes can fuse (i.e. fertilization) to form a diploid zygote that contains two copies of each chromosome, one from each parent. Thus, alternating cycles of meiosis and fertilization enable sexual reproduction, with successive generations maintaining the same number of chromosomes. For example, diploid human
cells contain 23 pairs of chromosomes including 1 pair of sex chromosomes (46 total), half of maternal origin and half of paternal origin. Meiosis produces haploid gametes (ova or sperm) that contain one set of 23 chromosomes. When two gametes (an egg and a sperm) fuse, the resulting zygote is once again diploid, with the mother and father each contributing 23 chromosomes. This same pattern, but not the same number of chromosomes, occurs in all organisms that utilize meiosis.
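The halving and restoration of chromosome number described above can be traced in a small sketch for the human case; it is a toy illustration of the bookkeeping, not a model of the cellular machinery.

    # Chromosome-number bookkeeping over one human sexual generation.
    diploid_number = 46              # 23 pairs in a diploid cell
    gamete = diploid_number // 2     # meiosis halves the count: 23 per ovum or sperm
    zygote = gamete + gamete         # fertilisation: 23 maternal + 23 paternal
    assert zygote == diploid_number  # chromosome number is maintained across generations
    print(f"gamete: {gamete} chromosomes, zygote: {zygote} chromosomes")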
Meiosis occurs in all sexually-reproducing single-celled and multicellular organisms (which are all eukaryotes), including animals, plants and fungi. It is an essential process for oogenesis and spermatogenesis.
Overview
Although the process of meiosis is related to the more general cell division process of mitosis, it differs in two important respects:
Meiosis begins with a diploid cell, which contains two copies of each chromosome, termed homologs. First, the cell undergoes DNA replication, so each homolog now consists of two identical sister chromatids. Then each set of homologs pair with each other and exchange genetic information by homologous recombination often leading to physical connections (crossovers) between the homologs. In the first meiotic division, the homologs are segregated to separate daughter cells by the spindle apparatus. The cells then proceed to a second division without an intervening round of DNA replication. The sister chromatids are segregated to separate daughter cells to produce a total of four haploid cells. Female animals employ a slight variation on this pattern and produce one large ovum and three small polar bodies. Because of recombination, an individual chromatid can consist of a new combination of maternal and paternal genetic information, resulting in offspring that are genetically distinct from either parent. Furthermore, an individual gamete can include an assortment of maternal, paternal, and recombinant chromatids. This genetic diversity resulting from sexual reproduction contributes to the variation in traits upon which natural selection can act.
Meiosis uses many of the same mechanisms as mitosis, the type of cell division used by eukaryotes to divide one cell into two identical daughter cells. In some plants, fungi, and protists meiosis results in the formation of spores: haploid cells that can divide vegetatively without undergoing fertilization. Some eukaryotes, like bdelloid rotifers, do not have the ability to carry out meiosis and have acquired the ability to reproduce by parthenogenesis.
Meiosis does not occur in archaea or bacteria, which generally reproduce asexually via binary fission. However, a "sexual" process known as horizontal gene transfer involves the transfer of DNA from one bacterium or archaeon to another and recombination of these DNA molecules of different parental origin.
History
Meiosis was discovered and described for the first time in sea urchin eggs in 1876 by the German biologist Oscar Hertwig. It was described again in 1883, at the level of chromosomes, by the Belgian zoologist Edouard Van Beneden, in Ascaris roundworm eggs. The significance of meiosis for reproduction and inheritance, however, was described only in 1890 by German biologist August Weismann, who noted that two cell divisions were necessary to transform one diploid cell into four haploid cells if the number of chromosomes had to be maintained. In 1911, the American geneticist Thomas Hunt Morgan detected crossovers in meiosis in the fruit fly Drosophila melanogaster, which helped to establish that genetic traits are transmitted on chromosomes.
The term "meiosis" is derived from the Greek word , meaning 'lessening'. It was introduced to biology by J.B. Farmer and J.E.S. Moore in 1905, using the idiosyncratic rendering "maiosis":
We propose to apply the terms Maiosis or Maiotic phase to cover the whole series of nuclear changes included in the two divisions that were designated as Heterotype and Homotype by Flemming.
The spelling was changed to "meiosis" by Koernicke (1905) and by Pantel and De Sinety (1906) to follow the usual conventions for transliterating Greek.
Phases
Meiosis is divided into meiosis I and meiosis II which are further divided into Karyokinesis I, Cytokinesis I, Karyokinesis II, and Cytokinesis II, respectively. The preparatory steps that lead up to meiosis are identical in pattern and name to interphase of the mitotic cell cycle. Interphase is divided into three phases:
Growth 1 (G1) phase: In this very active phase, the cell synthesizes its vast array of proteins, including the enzymes and structural proteins it will need for growth. In G1, each of the chromosomes consists of a single linear molecule of DNA.
Synthesis (S) phase: The genetic material is replicated; each of the cell's chromosomes duplicates to become two identical sister chromatids attached at a centromere. This replication does not change the ploidy of the cell since the centromere number remains the same. The identical sister chromatids have not yet condensed into the densely packaged chromosomes visible with the light microscope. This will take place during prophase I in meiosis.
Growth 2 (G2) phase: G2 phase as seen before mitosis is not present in meiosis. Meiotic prophase corresponds most closely to the G2 phase of the mitotic cell cycle.
Interphase is followed by meiosis I and then meiosis II. Meiosis I separates replicated homologous chromosomes, each still made up of two sister chromatids, into two daughter cells, thus reducing the chromosome number by half. During meiosis II, sister chromatids decouple and the resultant daughter chromosomes are segregated into four daughter cells. For diploid organisms, the daughter cells resulting from meiosis are haploid and contain only one copy of each chromosome. In some species, cells enter a resting phase known as interkinesis between meiosis I and meiosis II.
Meiosis I and II are each divided into prophase, metaphase, anaphase, and telophase stages, similar in purpose to their analogous subphases in the mitotic cell cycle. Therefore, meiosis includes the stages of meiosis I (prophase I, metaphase I, anaphase I, telophase I) and meiosis II (prophase II, metaphase II, anaphase II, telophase II).
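The nesting of stages listed above can also be written out as a simple outline; the sketch below only restates the stage names given in this section.

    # Outline of the meiotic stages named in this section.
    meiosis_stages = {
        "interphase": ["G1 phase", "S phase"],  # no separate G2 phase before meiosis
        "meiosis I": ["prophase I", "metaphase I", "anaphase I", "telophase I"],
        # some species insert a resting stage (interkinesis) here, with no DNA replication
        "meiosis II": ["prophase II", "metaphase II", "anaphase II", "telophase II"],
    }
    for division, phases in meiosis_stages.items():
        print(f"{division}: {', '.join(phases)}")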
During meiosis, specific genes are more highly transcribed. In addition to strong meiotic stage-specific expression of mRNA, there are also pervasive translational controls (e.g. selective usage of preformed mRNA), regulating the ultimate meiotic stage-specific protein expression of genes during meiosis. Thus, both transcriptional and translational controls determine the broad restructuring of meiotic cells needed to carry out meiosis.
Meiosis I
Meiosis I segregates homologous chromosomes, which are joined as tetrads (2n, 4c), producing two haploid cells (n chromosomes, 23 in humans) which each contain chromatid pairs (1n, 2c). Because the ploidy is reduced from diploid to haploid, meiosis I is referred to as a reductional division. Meiosis II is an equational division analogous to mitosis, in which the sister chromatids are segregated, creating four haploid daughter cells (1n, 1c).
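The n/c notation above can be made explicit by tracking chromosome sets (n) and DNA content (c) per cell through the two divisions; the sketch below uses the human chromosome numbers given in the text and is illustrative only.

    # Chromosome sets (n) and DNA content (c) per cell through the meiotic divisions.
    stages = [
        # (stage,                     n, c, chromosomes per cell)
        ("after premeiotic S phase",  2, 4, 46),  # homologs paired as tetrads (2n, 4c)
        ("after meiosis I",           1, 2, 23),  # reductional: homologs separated
        ("after meiosis II",          1, 1, 23),  # equational: sister chromatids separated
    ]
    for stage, n, c, chrom in stages:
        print(f"{stage}: {n}n, {c}c, {chrom} chromosomes per cell")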
Prophase I
Prophase I is by far the longest phase of meiosis (lasting 13 out of 14 days in mice). During prophase I, homologous maternal and paternal chromosomes pair, synapse, and exchange genetic information (by homologous recombination), forming at least one crossover per chromosome. These crossovers become visible as chiasmata (plural; singular chiasma). This process facilitates stable pairing between homologous chromosomes and hence enables accurate segregation of the chromosomes at the first meiotic division. The paired and replicated chromosomes are called bivalents (two chromosomes) or tetrads (four chromatids), with one chromosome coming from each parent. Prophase I is divided into a series of substages which are named according to the appearance of chromosomes.
Leptotene
The first stage of prophase I is the leptotene stage, also known as leptonema, from Greek words meaning "thin threads". In this stage of prophase I, individual chromosomes—each consisting of two replicated sister chromatids—become "individualized" to form visible strands within the nucleus. The chromosomes each form a linear array of loops mediated by cohesin, and the lateral elements of the synaptonemal complex assemble forming an "axial element" from which the loops emanate. Recombination is initiated in this stage by the enzyme SPO11 which creates programmed double strand breaks (around 300 per meiosis in mice). This process generates single stranded DNA filaments coated by RAD51 and DMC1 which invade the homologous chromosomes, forming inter-axis bridges, and resulting in the pairing/co-alignment of homologues (to a distance of ~400 nm in mice).
Zygotene
Leptotene is followed by the zygotene stage, also known as zygonema, from Greek words meaning "paired threads", which in some organisms is also called the bouquet stage because of the way the telomeres cluster at one end of the nucleus. In this stage the homologous chromosomes become much more closely (~100 nm) and stably paired (a process called synapsis) mediated by the installation of the transverse and central elements of the synaptonemal complex. Synapsis is thought to occur in a zipper-like fashion starting from a recombination nodule. The paired chromosomes are called bivalent or tetrad chromosomes.
Pachytene
The pachytene stage, also known as pachynema, from Greek words meaning "thick threads", is the stage at which all autosomal chromosomes have synapsed. In this stage homologous recombination, including chromosomal crossover (crossing over), is completed through the repair of the double strand breaks formed in leptotene. Most breaks are repaired without forming crossovers, resulting in gene conversion. However, a subset of breaks (at least one per chromosome) form crossovers between non-sister (homologous) chromosomes, resulting in the exchange of genetic information. The exchange of information between the homologous chromatids results in a recombination of information; each chromosome has the complete set of information it had before, and there are no gaps formed as a result of the process. Because the chromosomes cannot be distinguished in the synaptonemal complex, the actual act of crossing over is not perceivable through an ordinary light microscope, and chiasmata are not visible until the next stage.
Diplotene
During the diplotene stage, also known as diplonema, from Greek words meaning "two threads", the synaptonemal complex disassembles and homologous chromosomes separate from one another a little. However, the homologous chromosomes of each bivalent remain tightly bound at chiasmata, the regions where crossing-over occurred. The chiasmata remain on the chromosomes until they are severed at the transition to anaphase I to allow homologous chromosomes to move to opposite poles of the cell.
In human fetal oogenesis, all developing oocytes develop to this stage and are arrested in prophase I before birth. This suspended state is referred to as the dictyotene stage or dictyate. It lasts until meiosis is resumed to prepare the oocyte for ovulation, which happens at puberty or even later.
Diakinesis
Chromosomes condense further during the diakinesis stage, from Greek words meaning "moving through". This is the first point in meiosis where the four parts of the tetrads are actually visible. Sites of crossing over entangle together, effectively overlapping, making chiasmata clearly visible. Other than this observation, the rest of the stage closely resembles prometaphase of mitosis; the nucleoli disappear, the nuclear membrane disintegrates into vesicles, and the meiotic spindle begins to form.
Meiotic spindle formation
Unlike mitotic cells, human and mouse oocytes do not have centrosomes to produce the meiotic spindle. In mice, approximately 80 MicroTubule Organizing Centers (MTOCs) form a sphere in the ooplasm and begin to nucleate microtubules that reach out towards chromosomes, attaching to the chromosomes at the kinetochore. Over time, the MTOCs merge until two poles have formed, generating a barrel shaped spindle. In human oocytes spindle microtubule nucleation begins on the chromosomes, forming an aster that eventually expands to surround the chromosomes. Chromosomes then slide along the microtubules towards the equator of the spindle, at which point the chromosome kinetochores form end-on attachments to microtubules.
Metaphase I
Homologous pairs move together along the metaphase plate: As kinetochore microtubules from both spindle poles attach to their respective kinetochores, the paired homologous chromosomes align along an equatorial plane that bisects the spindle, due to continuous counterbalancing forces exerted on the bivalents by the microtubules emanating from the two kinetochores of homologous chromosomes. This attachment is referred to as a bipolar attachment. The physical basis of the independent assortment of chromosomes is the random orientation of each bivalent along with the metaphase plate, with respect to the orientation of the other bivalents along the same equatorial line. The protein complex cohesin holds sister chromatids together from the time of their replication until anaphase. In mitosis, the force of kinetochore microtubules pulling in opposite directions creates tension. The cell senses this tension and does not progress with anaphase until all the chromosomes are properly bi-oriented. In meiosis, establishing tension ordinarily requires at least one crossover per chromosome pair in addition to cohesin between sister chromatids (see Chromosome segregation).
Anaphase I
Kinetochore microtubules shorten, pulling homologous chromosomes (which each consist of a pair of sister chromatids) to opposite poles. Nonkinetochore microtubules lengthen, pushing the centrosomes farther apart. The cell elongates in preparation for division down the center. Unlike in mitosis, only the cohesin from the chromosome arms is degraded, while the cohesin surrounding the centromere remains protected by a protein named Shugoshin (Japanese for "guardian spirit"), which prevents the sister chromatids from separating. This allows the sister chromatids to remain together while homologs are segregated.
Telophase I
The first meiotic division effectively ends when the chromosomes arrive at the poles. Each daughter cell now has half the number of chromosomes but each chromosome consists of a pair of chromatids. The microtubules that make up the spindle network disappear, and a new nuclear membrane surrounds each haploid set. Cytokinesis, the pinching of the cell membrane in animal cells or the formation of the cell wall in plant cells, occurs, completing the creation of two daughter cells. However, cytokinesis does not fully complete, resulting in "cytoplasmic bridges" which enable the cytoplasm to be shared between daughter cells until the end of meiosis II. Sister chromatids remain attached during telophase I.
Cells may enter a period of rest known as interkinesis or interphase II. No DNA replication occurs during this stage.
Meiosis II
Meiosis II is the second meiotic division, and usually involves equational segregation, or separation of sister chromatids. Mechanically, the process is similar to mitosis, though its genetic results are fundamentally different. The result is the production of four haploid cells (n chromosomes; 23 in humans) from the two haploid cells (with n chromosomes, each consisting of two sister chromatids) produced in meiosis I. The four main steps of meiosis II are: prophase II, metaphase II, anaphase II, and telophase II.
In prophase II, the nucleoli and the nuclear envelope disappear again, and the chromatids shorten and thicken. Centrosomes move to the polar regions and arrange spindle fibers for the second meiotic division.
In metaphase II, the centromeres contain two kinetochores that attach to spindle fibers from the centrosomes at opposite poles. The new equatorial metaphase plate is rotated by 90 degrees when compared to meiosis I, perpendicular to the previous plate.
This is followed by anaphase II, in which the remaining centromeric cohesin, not protected by Shugoshin anymore, is cleaved, allowing the sister chromatids to segregate. The sister chromatids by convention are now called sister chromosomes as they move toward opposing poles.
The process ends with telophase II, which is similar to telophase I, and is marked by decondensation and lengthening of the chromosomes and the disassembly of the spindle. Nuclear envelopes re-form and cleavage or cell plate formation eventually produces a total of four daughter cells, each with a haploid set of chromosomes.
Meiosis is now complete and ends up with four new daughter cells.
Origin and function
Origin of meiosis
Meiosis appears to be a fundamental characteristic of eukaryotic organisms and to have been present early in eukaryotic evolution. Eukaryotes that were once thought to lack meiotic sex have recently been shown to likely have, or once have had, this capability. As one example, Giardia intestinalis, a common intestinal parasite, was previously considered to have descended from a lineage that predated the emergence of meiosis and sex. However, G. intestinalis has now been found to possess a core set of meiotic genes, including five meiosis-specific genes. Evidence for meiotic recombination, indicative of sexual reproduction, has also been found in G. intestinalis. Another example of organisms previously thought to be asexual is the parasitic protozoa of the genus Leishmania, which cause human disease; these organisms have been shown to have a sexual cycle consistent with a meiotic process. Although amoebae were once generally regarded as asexual, evidence has been presented that most lineages are anciently sexual and that the majority of asexual groups probably arose recently and independently. Dacks and Rogers proposed, based on a phylogenetic analysis, that facultative sex was likely present in the common ancestor of eukaryotes.
Genetic variation
The new combinations of DNA created during meiosis are a significant source of genetic variation alongside mutation, resulting in new combinations of alleles, which may be beneficial. Meiosis generates gamete genetic diversity in two ways: (1) Law of Independent Assortment: the random orientation of homologous chromosome pairs along the metaphase plate during metaphase I, and of sister chromatids during metaphase II, followed by their separation during anaphase I and II, allows a random and independent distribution of chromosomes to each daughter cell (and ultimately to gametes); and (2) Crossing Over: the physical exchange of homologous chromosomal regions by homologous recombination during prophase I results in new combinations of genetic information within chromosomes. However, such physical exchange does not always occur during meiosis. In the oocytes of the silkworm Bombyx mori, meiosis is completely achiasmate (lacking crossovers). Although synaptonemal complexes are present during the pachytene stage of meiosis in B. mori, crossover-forming homologous recombination is absent between the paired chromosomes.
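To give a sense of the scale of variation available from independent assortment alone, the following sketch (illustrative only; the figure of 23 chromosome pairs matches the human count cited elsewhere in this article) counts the chromosome combinations possible in a single gamete before any crossing over:

    # Illustrative sketch: combinations available from independent assortment alone.
    # Each bivalent orients independently at metaphase I, so each homologous pair
    # contributes a factor of 2 to the number of possible gametes.
    n_pairs = 23  # homologous chromosome pairs in humans

    combinations = 2 ** n_pairs
    print(combinations)  # 8388608 distinct chromosome combinations, before crossing over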
Prophase I arrest
Female mammals and birds are born possessing all the oocytes needed for future ovulations, and these oocytes are arrested at the prophase I stage of meiosis. In humans, as an example, oocytes are formed between three and four months of gestation within the fetus and are therefore present at birth. During this prophase I arrested stage (dictyate), which may last for decades, four copies of the genome are present in the oocytes. The arrest of oocytes at the four genome copy stage was proposed to provide the informational redundancy needed to repair damage in the DNA of the germline. The repair process used appears to involve homologous recombinational repair. Prophase I-arrested oocytes have a high capability for efficient repair of DNA damage, particularly exogenously induced double-strand breaks. DNA repair capability appears to be a key quality control mechanism in the female germ line and a critical determinant of fertility.
Meiosis as an adaptation for repairing germline DNA
Genetic recombination can be viewed as fundamentally a DNA repair process; when it occurs during meiosis, it is an adaptation for repairing the genomic DNA that is passed on to progeny. Experimental findings indicate that a substantial benefit of meiosis is recombinational repair of DNA damage in the germline, as indicated by the following examples. Hydrogen peroxide is an agent that causes oxidative stress leading to oxidative DNA damage. Treatment of the yeast Schizosaccharomyces pombe with hydrogen peroxide increased the frequency of mating and the formation of meiotic spores by 4- to 18-fold. Volvox carteri, a haploid multicellular, facultatively sexual green alga, can be induced by heat shock to reproduce by meiotic sex. This induction can be inhibited by antioxidants, indicating that the induction of meiotic sex by heat shock is likely mediated by oxidative stress leading to increased DNA damage.
Occurrence
In life cycles
Meiosis occurs in eukaryotic life cycles involving sexual reproduction, consisting of the cyclical process of growth and development by mitotic cell division, production of gametes by meiosis and fertilization. At certain stages of the life cycle, germ cells produce gametes. Somatic cells make up the body of the organism and are not involved in gamete production.
Cycles of meiosis and fertilization result in alternation between haploid and diploid states. The organism phase of the life cycle can occur either during the diploid state (diplontic life cycle), during the haploid state (haplontic life cycle), or both (haplodiplontic life cycle), in which there are two distinct organism phases, one with haploid cells and the other with diploid cells.
In the diplontic life cycle (with pre-gametic meiosis), as in humans, the organism is multicellular and diploid, grown by mitosis from a diploid cell called the zygote. The organism's diploid germ-line stem cells undergo meiosis to make haploid gametes (the spermatozoa in males and ova in females), which fertilize to form the zygote. The diploid zygote undergoes repeated cellular division by mitosis to grow into the organism.
In the haplontic life cycle (with post-zygotic meiosis), the organism is haploid, formed by the proliferation and differentiation of a single haploid cell called the gamete. Two organisms of opposing sex contribute their haploid gametes to form a diploid zygote. The zygote undergoes meiosis immediately, creating four haploid cells. These cells undergo mitosis to create the organism. Many fungi and many protozoa utilize the haplontic life cycle.
In the haplodiplontic life cycle (with sporic or intermediate meiosis), the living organism alternates between haploid and diploid states. Consequently, this cycle is also known as the alternation of generations. The diploid organism's germ-line cells undergo meiosis to produce spores. The spores proliferate by mitosis, growing into a haploid organism. The haploid organism's gamete then combines with another haploid organism's gamete, creating the zygote. The zygote undergoes repeated mitosis and differentiation to produce a new diploid organism. The haplodiplontic life cycle can be considered a fusion of the diplontic and haplontic life cycles.
In plants and animals
Meiosis occurs in all animals and plants. The result, the production of gametes with half the number of chromosomes as the parent cell, is the same, but the detailed process is different. In animals, meiosis produces gametes directly. In land plants and some algae, there is an alternation of generations such that meiosis in the diploid sporophyte generation produces haploid spores instead of gametes. When they germinate, these spores undergo repeated cell division by mitosis, developing into a multicellular haploid gametophyte generation, which then produces gametes directly (i.e. without further meiosis).
In both animals and plants, the final stage is for the gametes to fuse to form a zygote in which the original number of chromosomes is restored.
In mammals
In females, meiosis occurs in cells known as oocytes (singular: oocyte). Each primary oocyte divides twice in meiosis, unequally in each case. The first division produces a daughter cell, and a much smaller polar body which may or may not undergo a second division. In meiosis II, division of the daughter cell produces a second polar body, and a single haploid cell, which enlarges to become an ovum. Therefore, in females each primary oocyte that undergoes meiosis results in one mature ovum and two or three polar bodies.
There are pauses during meiosis in females. Maturing oocytes are arrested in prophase I of meiosis I and lie dormant within a protective shell of somatic cells called the follicle. At this stage, the oocyte nucleus is called the germinal vesicle. At the beginning of each menstrual cycle, FSH secretion from the anterior pituitary stimulates a few follicles to mature in a process known as folliculogenesis. During this process, the maturing oocytes resume meiosis and continue until metaphase II of meiosis II, where they are again arrested just before ovulation. The breakdown of the germinal vesicle, condensation of chromosomes, and assembly of the bipolar metaphase I spindle are all clear indications that meiosis has resumed. If these oocytes are fertilized by sperm, they will resume and complete meiosis. During folliculogenesis in humans, usually one follicle becomes dominant while the others undergo atresia. The process of meiosis in females occurs during oogenesis, and differs from the typical meiosis in that it features a long period of meiotic arrest known as the dictyate stage and lacks the assistance of centrosomes.
In males, meiosis occurs during spermatogenesis in the seminiferous tubules of the testicles. Meiosis during spermatogenesis is specific to a type of cell called spermatocytes, which will later mature to become spermatozoa. Meiosis of primordial germ cells happens at the time of puberty, much later than in females. Tissues of the male testis suppress meiosis by degrading retinoic acid, proposed to be a stimulator of meiosis. This is overcome at puberty when cells within seminiferous tubules called Sertoli cells start making their own retinoic acid. Sensitivity to retinoic acid is also adjusted by proteins called nanos and DAZL. Genetic loss-of-function studies on retinoic acid-generating enzymes have shown that retinoic acid is required postnatally to stimulate spermatogonia differentiation, which results several days later in spermatocytes undergoing meiosis; however, retinoic acid is not required at the time when meiosis initiates.
In female mammals, meiosis begins immediately after primordial germ cells migrate to the ovary in the embryo. Some studies suggest that retinoic acid derived from the primitive kidney (mesonephros) stimulates meiosis in embryonic ovarian oogonia and that tissues of the embryonic male testis suppress meiosis by degrading retinoic acid. However, genetic loss-of-function studies on retinoic acid-generating enzymes have shown that retinoic acid is not required for initiation of either female meiosis which occurs during embryogenesis or male meiosis which initiates postnatally.
Flagellates
While the majority of eukaryotes have a two-divisional meiosis (though sometimes achiasmatic), a very rare form, one-divisional meiosis, occurs in some flagellates (parabasalids and oxymonads) from the gut of the wood-feeding cockroach Cryptocercus.
Role in human genetics and disease
Recombination among the 23 pairs of human chromosomes is responsible for redistributing not just the actual chromosomes, but also pieces of each of them. Recombination is also estimated to be about 1.6-fold more frequent in females than in males. In addition, on average, female recombination is higher at the centromeres and male recombination is higher at the telomeres. On average, 1 million bp (1 Mb) correspond to 1 centimorgan (cM = 1% recombination frequency). The frequency of cross-overs remains uncertain. In yeast, mouse and human, it has been estimated that ≥200 double-strand breaks (DSBs) are formed per meiotic cell. However, only a subset of DSBs (~5–30%, depending on the organism) go on to produce crossovers, which would result in only 1–2 cross-overs per human chromosome.
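The figures quoted above can be combined into a rough back-of-the-envelope estimate. The sketch below is illustrative only, using the lower-bound DSB count and the ~5–30% crossover fraction from the preceding paragraph:

    # Rough arithmetic sketch of the figures quoted above (illustrative only).
    dsbs_per_cell = 200                 # lower-bound estimate of DSBs per meiotic cell
    crossover_fractions = (0.05, 0.30)  # ~5-30% of DSBs mature into crossovers
    chromosome_pairs = 23               # human

    for fraction in crossover_fractions:
        crossovers = dsbs_per_cell * fraction
        per_pair = crossovers / chromosome_pairs
        print(f"{crossovers:.0f} crossovers per cell, about {per_pair:.1f} per chromosome pair")
    # The output spans roughly 0.4 to 2.6 per pair; the middle of that range is
    # consistent with the ~1-2 crossovers per human chromosome cited above.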
Nondisjunction
The normal separation of chromosomes in meiosis I or sister chromatids in meiosis II is termed disjunction. When the segregation is not normal, it is called nondisjunction. This results in the production of gametes which have either too many or too few of a particular chromosome, and is a common mechanism for trisomy or monosomy. Nondisjunction can occur in the meiosis I or meiosis II phases of cellular reproduction, or during mitosis.
Most monosomic and trisomic human embryos are not viable, but some aneuploidies can be tolerated, such as trisomy for the smallest chromosome, chromosome 21. Phenotypes of these aneuploidies range from severe developmental disorders to asymptomatic. Medical conditions include but are not limited to:
Down syndrome – trisomy of chromosome 21
Patau syndrome – trisomy of chromosome 13
Edwards syndrome – trisomy of chromosome 18
Klinefelter syndrome – extra X chromosomes in males – i.e. XXY, XXXY, XXXXY, etc.
Turner syndrome – absence of one X chromosome in females – i.e. X0
Triple X syndrome – an extra X chromosome in females
Jacobs syndrome – an extra Y chromosome in males.
The probability of nondisjunction in human oocytes increases with increasing maternal age, presumably due to loss of cohesin over time.
Comparison to mitosis
In order to understand meiosis, a comparison to mitosis is helpful. Meiosis comprises two successive divisions and yields four daughter cells, each with half the chromosome number of the parent cell and genetically distinct from one another, whereas mitosis comprises a single division and yields two daughter cells genetically identical to the parent cell. Pairing of homologous chromosomes and crossing over occur in meiosis but not normally in mitosis.
Molecular regulation
Maturation promoting factor (MPF) seems to have a role in meiosis based on experiments with Xenopus laevis oocytes. Mammalian oocyte MPF induced germinal vesicle breakdown (GVB) in starfish and Xenopus laevis oocytes. MPF is active prior to GVB but falls off toward the end of meiosis I. CDK1 and cyclin B levels are correlated with oocyte GVB competence and are likely under translational rather than transcriptional control. In meiosis II, MPF reappears ahead of metaphase II, and its activity remains high up to fertilization.
In mammals, meiotic arrest begins with natriuretic peptide type C (NPPC) from mural granulosa cells, which activates production of cyclic guanosine 3′,5′-monophosphate (cGMP) in concert with natriuretic peptide receptor 2 (NPR2) on cumulus cells. cGMP diffuses into oocytes and halts meiosis by inhibiting phosphodiesterase 3A (PDE3A) and cyclic adenosine 3′,5′-monophosphate (cAMP) hydrolysis. In the oocyte, G-protein-coupled receptor GPR3/12 activates adenylyl cyclase to generate cAMP. cAMP stimulates protein kinase A (PKA) to activate the nuclear kinase WEE2 by phosphorylation. PKA also assists in phosphorylation of the CDK1 phosphatase CDC25B to keep it in the cytoplasm; in its unphosphorylated form, CDC25B migrates to the nucleus. Protein kinase C (PKC) may also have a role in inhibiting meiotic progression to metaphase II. Overall, CDK1 activity is suppressed to prevent resumption of meiosis. Oocytes further promote expression of NPR2 and inosine monophosphate dehydrogenase (and thereby the production of cGMP) in cumulus cells. Follicle-stimulating hormone and estradiol likewise promote expression of NPPC and NPR2. Hypoxanthine, a purine apparently originating in the follicle, also inhibits in vitro oocyte meiosis.

A spike in luteinizing hormone (LH) spurs oocyte maturation, in which oocytes are released from meiotic arrest and progress from prophase I through metaphase II. LH-induced epidermal growth factor-like factors like amphiregulin and epiregulin synthesized in mural granulosa cells reduce levels of cGMP in oocytes by restricting cGMP transport through cumulus cell-oocyte gap junctions and lowering NPPC levels and NPR2 activity. In fact, LH-induced epidermal growth factor-like factors may cause the destabilization and breakdown of gap junctions altogether. LH-induced epidermal growth factor-like factors may trigger production of additional oocyte maturation factors like steroids and follicular fluid-derived meiosis-activating sterol (FF-MAS) in cumulus cells. FF-MAS promotes progression from metaphase I to metaphase II, and it may help stabilize metaphase II arrest. Meiosis resumption is reinforced by the exit of WEE2 from the nucleus due to CDK1 activation. Phosphodiesterases (PDEs) metabolize cAMP and may be temporarily activated by PKA-mediated phosphorylation. Longer-term regulation of phosphodiesterases may require modulation of protein expression. For example, hypoxanthine is a PDE inhibitor that may stymie cAMP metabolism. Kinases like protein kinase B, Aurora kinase A, and polo-like kinase 1 contribute to the resumption of meiosis.

There are similarities between the mechanisms of meiotic prophase I arrest and resumption and the mitotic G2 DNA damage checkpoint: CDC14B-based activation of APC-CDH1 in arrest and CDC25B-based resumption. Meiotic arrest requires inhibitory phosphorylation of CDK1 at amino acid residues Thr-14 and Tyr-15 by MYT1 and WEE1 as well as regulation of cyclin B levels facilitated by the anaphase-promoting complex (APC). CDK1 is regulated by cyclin B, whose synthesis peaks at the end of meiosis I. At anaphase I, cyclin B is degraded by a ubiquitin-dependent pathway. Cyclin B synthesis and CDK1 activation prompt oocytes to enter metaphase, while entry into anaphase follows ubiquitin-mediated cyclin B degradation, which brings down CDK1 activity. Proteolysis of adhesion proteins between homologous chromosomes is involved in anaphase I, while proteolysis of adhesion proteins between sister chromatids is involved in anaphase II.
Meiosis II arrest is effected by cytostatic factor (CSF), whose elements include the MOS protein, mitogen-activated protein kinase kinase (MAPKK/MEK1), and MAPK. The protein kinase p90 (RSK) is one critical target of MAPK and may help block entry into S-phase between meiosis I and II by reactivating CDK1. There is evidence that RSK aids entry into meiosis I by inhibiting MYT1, thereby activating CDK1. CSF arrest might take place through regulation of the APC as part of the spindle assembly checkpoint.
In the budding yeast S. cerevisiae, Clb1 is the main meiotic regulatory cyclin, though Clb3 and Clb4 are also expressed during meiosis and activate a p34cdc28-associated kinase immediately prior to the first meiotic division. The IME1 transcription factor drives entry into meiotic S-phase and is regulated according to inputs like nutrition. a1/α2 represses a repressor of IME1, initiating meiosis. Numerous S. cerevisiae meiotic regulatory genes have been identified. A few are presented here. IME1 enables sporulation of non-a/α diploids. IME2/SME1 enables sporulation when nitrogen is present, supports recombination in a/α cells expressing RME1, an inhibitor of meiosis, and encodes a protein kinase homolog. MCK1 (meiosis and centromere regulatory kinase) also supports recombination in a/α cells expressing RME1 and encodes a protein kinase homolog. SME2 enables sporulation when ammonia or glucose are present. UME1-5 enable expression of certain early meiotic genes in vegetative, non-a/α cells.
In the fission yeast S. pombe, the Cdc2 kinase and Cig2 cyclin together initiate the premeiotic S phase, while cyclin Cdc13 and the CDK activator Cdc25 are necessary for both meiotic divisions. However, the Pat1-Mei2 system is at the heart of S. pombe meiotic regulation. Mei2 is the major meiotic regulator. It moves between the nucleus and cytoplasm and works with meiRNA to promote meiosis I. Moreover, Mei2 is implicated in exit from mitosis and induction of premeiotic S phase. Mei2 may inactivate the DSR-Mmi1 system through sequestration of Mmi1 to stabilize meiosis-specific transcript expression. Mei2 may stall growth and bring about G1 arrest. Pat1 is a Ser/Thr protein kinase that phosphorylates Mei2, an RNA-binding protein, on residues Ser438 and Thr527. This phosphorylation may decrease the half-life of Mei2 by making it more likely to be destroyed by a proteasome working with E2 Ubc2 and E3 Ubr1. The Mei4 transcription factor is necessary to transcriptionally activate cdc25 in meiosis, and the mei4 mutant experiences cell cycle arrest. Mes1 inhibits the APC/C activator Slp1 such that the Cdc2-Cdc13 MPF activity can drive the second meiotic division.
It has been suggested that the yeast CEP1 gene product, which binds the centromeric region CDE1, may play a role in chromosome pairing during meiosis I.
Meiotic recombination is initiated by double-strand breaks, which are catalyzed by the Spo11 protein; Mre11, Sae2 and Exo1 also play roles in breakage and recombination. After breakage occurs, recombination takes place, typically using a homologous chromosome as a template. Recombination may proceed through either a double Holliday junction (dHJ) pathway or synthesis-dependent strand annealing (SDSA); the latter yields noncrossover products.
There also appear to be checkpoints for meiotic cell division. In S. pombe, the Rad proteins, Mek1 (with an FHA kinase domain), Cdc25, Cdc2 and an as yet unidentified factor are thought to form a checkpoint.
In vertebrate oogenesis, the arrest maintained by cytostatic factor (CSF) has a role in the switch into meiosis II.
| Biology and health sciences | Cellular division | null |
19011 | https://en.wikipedia.org/wiki/Miocene | Miocene | The Miocene is the first geological epoch of the Neogene Period and extends from about 23 to 5.3 million years ago (Ma). The Miocene was named by Scottish geologist Charles Lyell; the name comes from the Greek words μείων (meíōn, "less") and καινός (kainós, "new") and means "less recent" because it has 18% fewer modern marine invertebrates than the Pliocene has. The Miocene followed the Oligocene and preceded the Pliocene.
As Earth went from the Oligocene through the Miocene and into the Pliocene, the climate slowly cooled towards a series of ice ages. The Miocene boundaries are not marked by distinct global events but by regionally defined transitions from the warmer Oligocene to the cooler Pliocene Epoch.
During the Early Miocene, Afro-Arabia collided with Eurasia, severing the connection between the Mediterranean and Indian Oceans, and allowing the interchange of fauna between Eurasia and Africa, including the dispersal of proboscideans and hominoids into Eurasia. During the late Miocene, the connections between the Atlantic and Mediterranean closed, causing the Mediterranean Sea to almost completely evaporate. This event is referred to as the "Messinian salinity crisis". Then, at the Miocene–Pliocene boundary, the Strait of Gibraltar opened, and the Mediterranean refilled. That event is referred to as the "Zanclean flood".
Also during the early Miocene (specifically the Aquitanian and Burdigalian Stages), the apes first evolved, began diversifying, and became widespread throughout the Old World. Around the end of this epoch, the ancestors of humans had split away from the ancestors of the chimpanzees and had begun following their own evolutionary path during the final Messinian Stage (7.5–5.3 Ma) of the Miocene. As in the Oligocene before it, grasslands continued to expand, and forests to dwindle. In the seas of the Miocene, kelp forests made their first appearance and soon became one of Earth's most productive ecosystems.
The plants and animals of the Miocene were recognizably modern. Mammals and birds were well established. Whales, pinnipeds, and kelp spread.
The Miocene is of particular interest to geologists and palaeoclimatologists because major phases of the geology of the Himalaya occurred during that epoch, affecting monsoonal patterns in Asia, which were interlinked with glacial periods in the northern hemisphere.
Subdivisions
The Miocene faunal stages from youngest to oldest are typically named according to the International Commission on Stratigraphy: the Messinian, Tortonian, Serravallian, Langhian, Burdigalian, and Aquitanian.
Regionally, other systems are used, based on characteristic land mammals; some of them overlap with the preceding Oligocene and following Pliocene Epochs.
Paleogeography
Continents continued to drift toward their present positions. Of the modern geologic features, only the land bridge between South America and North America was absent, although South America was approaching the western subduction zone in the Pacific Ocean, causing both the rise of the Andes and a southward extension of the Meso-American peninsula.
Mountain building took place in western North America, Europe, and East Asia. Both continental and marine Miocene deposits are common worldwide with marine outcrops common near modern shorelines. Well studied continental exposures occur in the North American Great Plains and in Argentina.
The global trend was towards increasing aridity caused primarily by global cooling reducing the ability of the atmosphere to absorb moisture, particularly after 7 to 8 million years ago. Uplift of East Africa in the late Miocene was partly responsible for the shrinking of tropical rain forests in that region, and Australia got drier as it entered a zone of low rainfall in the Late Miocene.
Eurasia
The Indian Plate continued to collide with the Eurasian Plate, creating new mountain ranges and uplifting the Tibetan Plateau, resulting in the rain shadowing and aridification of the Asian interior. The Tian Shan experienced significant uplift in the Late Miocene, blocking westerlies from coming into the Tarim Basin and drying it as a result.
At the beginning of the Miocene, the northern margin of the Arabian plate, then part of the African landmass, collided with Eurasia; as a result, the Tethys seaway continued to shrink and then disappeared as Africa collided with Eurasia in the Turkish–Arabian region. The first step of this closure occurred 20 Ma, reducing water mass exchange by 90%, while the second step occurred around 13.8 Ma, coincident with a major expansion of Antarctic glaciers. This severed the connection between the Indian Ocean and the Mediterranean Sea and formed the present land connection between Afro-Arabia and Eurasia. The subsequent uplift of mountains in the western Mediterranean region and a global fall in sea levels combined to cause a temporary drying up of the Mediterranean Sea (known as the Messinian salinity crisis) near the end of the Miocene.
The Paratethys underwent a significant transgression during the early Middle Miocene. Around 13.8 Ma, during a global sea level drop, the Eastern Paratethys was cut off from the global ocean by the closure of the Bârlad Strait, effectively turning it into a saltwater lake. From 13.8 to 13.36 Ma, an evaporite period similar to the later Messinian salinity crisis in the Mediterranean ensued in the Central Paratethys, cut off from sources of freshwater input by its separation from the Eastern Paratethys. From 13.36 to 12.65 Ma, the Central Paratethys was characterised by open marine conditions, before the reopening of the Bârlad Strait resulted in a shift to brackish-marine conditions in the Central Paratethys, causing the Badenian-Sarmatian Extinction Event. As a result of the Bârlad Strait's reopening, the lake levels of the Eastern Paratethys dropped as it once again became a sea.
The Fram Strait opened during the Miocene and acted as the only throughflow for Atlantic Water into the Arctic Ocean until the Quaternary period. Due to regional uplift of the continental shelf, this water could not move through the Barents Seaway in the Miocene.
The modern day Mekong Delta took shape after 8 Ma. Geochemistry of the Qiongdongnan Basin in the northern South China Sea indicates the Pearl River was a major source of sediment flux into the sea during the Early Miocene and was a major fluvial system as in the present.
South America
During the Oligocene and Early Miocene, the coast of northern Brazil, Colombia, south-central Peru, central Chile and large swathes of inland Patagonia were subject to a marine transgression. The transgressions in the west coast of South America are thought to be caused by a regional phenomenon while the steadily rising central segment of the Andes represents an exception. While there are numerous registers of Oligocene–Miocene transgressions around the world it is doubtful that these correlate.
It is thought that the Oligocene–Miocene transgression in Patagonia could have temporarily linked the Pacific and Atlantic Oceans, as inferred from the findings of marine invertebrate fossils of both Atlantic and Pacific affinity in La Cascada Formation. Connection would have occurred through narrow epicontinental seaways that formed channels in a dissected topography.
The Antarctic Plate started to subduct beneath South America 14 million years ago in the Miocene, forming the Chile Triple Junction. At first the Antarctic Plate subducted only in the southernmost tip of Patagonia, meaning that the Chile Triple Junction lay near the Strait of Magellan. As the southern part of the Nazca Plate and the Chile Rise became consumed by subduction, the more northerly regions of the Antarctic Plate began to subduct beneath Patagonia, so that the Chile Triple Junction advanced to the north over time. The asthenospheric window associated with the triple junction disturbed previous patterns of mantle convection beneath Patagonia, inducing an uplift of ca. 1 km that reversed the Oligocene–Miocene transgression.
As the southern Andes rose in the Middle Miocene (14–12 million years ago), the resulting rain shadow gave rise to the Patagonian Desert to the east.
Australia
Far northern Australia was monsoonal during the Miocene. Although northern Australia is often believed to have been much wetter during the Miocene, this interpretation may be an artefact of preservation bias of riparian and lacustrine plants; this finding has itself been challenged by other papers. Western Australia, like today, was arid, particularly so during the Middle Miocene.
Climate
Climates remained moderately warm, although the slow global cooling that eventually led to the Pleistocene glaciations continued. Although a long-term cooling trend was well underway, there is evidence of a warm period during the Miocene when the global climate rivalled that of the Oligocene. The climate of the Miocene has been suggested as a good analogue for future warmer climates caused by anthropogenic global warming, with this being especially true of the global climate during the Middle Miocene Climatic Optimum (MMCO), because the last time carbon dioxide levels were comparable to projected future atmospheric carbon dioxide levels resulting from anthropogenic climate change was during the MMCO. The Ross Sea margin of the East Antarctic Ice Sheet (EAIS) was highly dynamic during the Early Miocene.
The Miocene began with the Early Miocene Cool Event (Mi-1) around 23 million years ago, which marked the start of the Early Miocene Cool Interval (EMCI). This cool event occurred immediately after the Oligocene-Miocene Transition (OMT) during a major expansion of Antarctica's ice sheets, but was not associated with a significant drop in atmospheric carbon dioxide levels. Both continental and oceanic thermal gradients in mid-latitudes during the Early Miocene were very similar to those in the present. Global cooling caused the East Asian Summer Monsoon (EASM) to begin to take on its modern form during the Early Miocene. From 22.1 to 19.7 Ma, the Xining Basin experienced relative warmth and humidity amidst a broader aridification trend.
The EMCI ended 18 million years ago, giving way to the Middle Miocene Warm Interval (MMWI), the warmest part of which was the MMCO that began 16 million years ago. As the world transitioned into the MMCO, carbon dioxide concentrations varied between 300 and 500 ppm. Global annual mean surface temperature during the MMCO was about 18.4 °C. MMCO warmth was driven by the activity of the Columbia River Basalts and enhanced by decreased albedo from the reduction of deserts and expansion of forests. Climate modelling suggests additional, currently unknown, factors also worked to create the warm conditions of the MMCO. The MMCO saw the expansion of the tropical climatic zone to much larger than its current size. The July ITCZ, the zone of maximal monsoonal rainfall, moved to the north, increasing precipitation over southern China whilst simultaneously decreasing it over Indochina during the EASM. Western Australia was at this time characterised by exceptional aridity. In Antarctica, average summer temperatures on land reached 10 °C. In the oceans, the lysocline shoaled by approximately half of a kilometre during warm phases that corresponded to orbital eccentricity maxima. The MMCO ended around 14 million years ago, when global temperatures fell in the Middle Miocene Climate Transition (MMCT). Abrupt increases in opal deposition indicate this cooling was driven by enhanced drawdown of carbon dioxide via silicate weathering. The MMCT caused a sea surface temperature (SST) drop of approximately 6 °C in the North Atlantic. The drop in benthic foraminiferal δ18O values was most noticeable in the waters around Antarctica, suggesting cooling was most intense there. Around this time the Mi3b glacial event (a massive expansion of Antarctic glaciers) occurred. The East Antarctic Ice Sheet (EAIS) markedly stabilised following the MMCT. The intensification of glaciation caused a decoherence of sediment deposition from the 405 kyr eccentricity cycle.
The MMWI ended about 11 Ma, when the Late Miocene Cool Interval (LMCI) started. A major but transient warming occurred around 10.8-10.7 Ma. During the Late Miocene, the Earth's climate began to display a high degree of similarity to that of the present day. The 173 kyr obliquity modulation cycle governed by Earth's interactions with Saturn became detectable in the Late Miocene. By 12 Ma, Oregon was a savanna akin to that of the western margins of the Sierra Nevada of northern California. Central Australia became progressively drier, although southwestern Australia experienced significant wettening from around 12 to 8 Ma. The South Asian Winter Monsoon (SAWM) underwent strengthening ~9.2–8.5 Ma. From 7.9 to 5.8 Ma, the East Asian Winter Monsoon (EAWM) became stronger synchronously with a southward shift of the subarctic front. Greenland may have begun to have large glaciers as early as 8 to 7 Ma, although the climate for the most part remained warm enough to support forests there well into the Pliocene. Zhejiang, China was noticeably more humid than today. In the Great Rift Valley of Kenya, there was a gradual and progressive trend of increasing aridification, though it was not unidirectional, and wet humid episodes continued to occur. Between 7 and 5.3 Ma, temperatures dropped sharply again in the Late Miocene Cooling (LMC), most likely as a result of a decline in atmospheric carbon dioxide and a drop in the amplitude of Earth's obliquity, and the Antarctic ice sheet was approaching its present-day size and thickness. Ocean temperatures plummeted to near-modern values during the LMC; extratropical sea surface temperatures dropped substantially by approximately 7–9 °C. 41 kyr obliquity cycles became the dominant orbital climatic control 7.7 Ma and this dominance strengthened 6.4 Ma. Benthic δ18O values show significant glaciation occurred from 6.26 to 5.50 Ma, during which glacial-interglacial cycles were governed by the 41 kyr obliquity cycle. A major reorganisation of the carbon cycle occurred approximately 6 Ma, causing continental carbon reservoirs to no longer expand during cold spells, as they had done during cold periods in the Oligocene and most of the Miocene. At the end of the Miocene, global temperatures rose again as the amplitude of Earth's obliquity increased, which caused increased aridity in Central Asia. Around 5.5 Ma, the EAWM underwent a period of rapid intensification.
Life
Life during the Miocene Epoch was mostly supported by the two newly formed biomes, kelp forests and grasslands. Grasslands allow for more grazers, such as horses, rhinoceroses, and hippos. Ninety-five percent of modern plants existed by the end of this epoch. Modern bony fish genera were established. A modern-style latitudinal biodiversity gradient appeared ~15 Ma.
Flora
The coevolution of gritty, fibrous, fire-tolerant grasses and long-legged gregarious ungulates with high-crowned teeth led to a major expansion of grass-grazer ecosystems. Herds of large, swift grazers were hunted by predators across broad sweeps of open grasslands, displacing desert, woodland, and browsers.
The higher organic content and water retention of the deeper and richer grassland soils, with long-term burial of carbon in sediments, produced a carbon and water vapor sink. This, combined with higher surface albedo and lower evapotranspiration of grassland, contributed to a cooler, drier climate. C4 grasses, which are able to assimilate carbon dioxide and water more efficiently than C3 grasses, expanded to become ecologically significant near the end of the Miocene between 6 and 7 million years ago, although they did not expand northward during the Late Miocene. The expansion of grasslands and radiations among terrestrial herbivores correlates to fluctuations in CO2. One study, however, has attributed the expansion of grasslands not to a CO2 drop but to the increasing seasonality and aridity, coupled with a monsoon climate, which made wildfires highly prevalent compared to before. The Late Miocene expansion of grasslands had cascading effects on the global carbon cycle, evidenced by the imprint it left in carbon isotope records.
Cycads between 11.5 and 5 million years ago began to rediversify after previous declines in variety due to climatic changes, and thus modern cycads are not a good model for a "living fossil". Eucalyptus fossil leaves occur in the Miocene of New Zealand, where the genus is not native today but has since been introduced from Australia.
Fauna
Both marine and continental fauna were fairly modern, although marine mammals were less numerous. Only in isolated South America and Australia did widely divergent fauna exist.
In Eurasia, genus richness shifted southward to lower latitudes from the Early to the Middle Miocene. Europe's large mammal diversity significantly declined during the Late Miocene.
In the Early Miocene, several Oligocene groups were still diverse, including nimravids, entelodonts, and three-toed equids. As in the previous Oligocene Epoch, oreodonts were still diverse, only to disappear in the earliest Pliocene. During the later Miocene mammals were more modern, with easily recognizable canids, bears, red pandas, procyonids, equids, beavers, deer, camelids, and whales, along with now-extinct groups like borophagine canids, certain gomphotheres, three-toed horses, and hornless rhinos like Teleoceras and Aphelops. The late Miocene also marks the extinction of the last-surviving members of the hyaenodonts. Islands began to form between South and North America in the Late Miocene, allowing ground sloths like Thinobadistes to island-hop to North America. The expansion of silica-rich C4 grasses led to worldwide extinctions of herbivorous species without high-crowned teeth. Mustelids diversified into their largest forms as terrestrial predators like Ekorus, Eomellivora, and Megalictis and bunodont otters like Enhydriodon and Sivaonyx appeared. Eulipotyphlans were widespread in Europe, being less diverse in Southern Europe than farther north due to the aridity of the former.
Unequivocally-recognizable dabbling ducks, plovers, typical owls, cockatoos and crows appear during the Miocene. By the epoch's end, all or almost all modern bird groups are believed to have been present; the few post-Miocene bird fossils which cannot be placed in the evolutionary tree with full confidence are simply too badly preserved, rather than too equivocal in character. Marine birds reached their highest diversity ever in the course of this epoch.
The youngest representatives of Choristodera, an extinct order of aquatic reptiles that first appeared in the Middle Jurassic, are known from the Miocene of Europe, belonging to the genus Lazarussuchus, which had been the only known surviving genus of the group since the beginning of the Eocene.
The last known representatives of the archaic primitive mammal order Meridiolestida, which dominated South America during the Late Cretaceous, are known from the Miocene of Patagonia, represented by the mole-like Necrolestes.
The youngest known representatives of metatherians (the broader grouping to which marsupials belong) in Europe, Asia and Africa are known from the Miocene, including the European herpetotheriid Amphiperatherium, the peradectids Siamoperadectes and Sinoperadectes from Asia, and the possible herpetotheriid Morotodon from the late Early Miocene of Uganda.
Approximately 100 species of apes lived during this time, ranging throughout Africa, Asia and Europe and varying widely in size, diet, and anatomy. Due to scanty fossil evidence it is unclear which ape or apes contributed to the modern hominid clade, but molecular evidence indicates this ape lived between 18 and 13 million years ago. The first hominins (bipedal apes of the human lineage) appeared in Africa at the very end of the Miocene, including Sahelanthropus, Orrorin, and an early form of Ardipithecus (A. kadabba). The chimpanzee–human divergence is thought to have occurred at this time. The evolution of bipedalism in apes at the end of the Miocene instigated an increased rate of faunal turnover in Africa. In contrast, European apes met their end at the end of the Miocene due to increased habitat uniformity.
The expansion of grasslands in North America also led to an explosive radiation among snakes. Previously, snakes were a minor component of the North American fauna, but during the Miocene, the number of species and their prevalence increased dramatically with the first appearances of vipers and elapids in North America and the significant diversification of Colubridae (including the origin of many modern genera such as Nerodia, Lampropeltis, Pituophis and Pantherophis).
Arthropods were abundant, including in areas such as Tibet where they have traditionally been thought to be undiverse. Neoisopterans diversified and expanded into areas they previously were absent from, such as Madagascar and Australia.
Oceanic
In the oceans, brown algae, called kelp, proliferated, supporting new species of sea life, including otters, fish and various invertebrates.
Corals suffered a significant local decline along the northeastern coast of Australia during the Tortonian, most likely due to warming seawater.
Cetaceans attained their greatest diversity during the Miocene, with over 20 recognized genera of baleen whales in comparison to only six living genera. This diversification correlates with the emergence of gigantic macro-predators such as megatoothed sharks and raptorial sperm whales. Prominent examples are Otodus megalodon and Livyatan melvillei. Other notable large sharks were Otodus chubutensis, Isurus hastalis, and Hemipristis serra.
Crocodilians also showed signs of diversification during the Miocene. The largest form among them was the gigantic caiman Purussaurus, which inhabited South America. Another gigantic form was the false gharial Rhamphosuchus, which inhabited what is now India. A strange form, Mourasuchus, also thrived alongside Purussaurus; it developed a specialized filter-feeding mechanism and likely preyed upon small fauna despite its gigantic size.
The youngest members of Sebecidae, a clade of large terrestrial predatory crocodyliforms distantly related to modern crocodilians, from which they likely diverged over 180 million years ago, are known from the Miocene of South America.
The last Desmostylians thrived during this period before becoming the only extinct marine mammal order.
The pinnipeds, which appeared near the end of the Oligocene, became more aquatic. A prominent genus was Allodesmus. A ferocious walrus, Pelagiarctos, may have preyed upon other species of pinnipeds, including Allodesmus.
Furthermore, South American waters witnessed the arrival of Megapiranha paranensis, which was considerably larger than modern piranhas.
New Zealand's Miocene fossil record is particularly rich. Marine deposits showcase a variety of cetaceans and penguins, illustrating the evolution of both groups into modern representatives. The early Miocene Saint Bathans Fauna is the only Cenozoic terrestrial fossil record of the landmass, showcasing a wide variety of not only bird species, including early representatives of clades such as moa, kiwi and adzebills, but also a diverse herpetofauna of sphenodontians, crocodiles and turtles as well as a rich terrestrial mammal fauna composed of various species of bats and the enigmatic Saint Bathans Mammal.
Microbiota
Microbial life in the igneous crust of the Fennoscandian Shield shifted from being dominated by methanogens to being primarily composed of sulphate-reducing prokaryotes. The change resulted from fracture reactivation during the Pyrenean-Alpine orogeny, enabling sulphate-reducing microbes to permeate into the Fennoscandian Shield via descending surficial waters.
Diatom diversity was inversely correlated with carbon dioxide levels and global temperatures during the Miocene. Most modern lineages of diatoms appeared by the Late Miocene.
Oceans
There is evidence from oxygen isotopes at Deep Sea Drilling Program sites that ice began to build up in Antarctica about 36 Ma during the Eocene. Further marked decreases in temperature during the Middle Miocene at 15 Ma probably reflect increased ice growth in Antarctica. It can therefore be assumed that East Antarctica had some glaciers during the early to mid Miocene (23–15 Ma). Oceans cooled partly due to the formation of the Antarctic Circumpolar Current, and about 15 million years ago the ice cap in the southern hemisphere started to grow to its present form. The Greenland ice cap developed later, in the Middle Pliocene time, about 3 million years ago.
Middle Miocene disruption
The "Middle Miocene disruption" refers to a wave of extinctions of terrestrial and aquatic life forms that occurred following the Miocene Climatic Optimum (18 to 16 Ma), around 14.8 to 14.5 million years ago, during the Langhian Stage of the mid-Miocene. A major and permanent cooling step occurred between 14.8 and 14.1 Ma, associated with increased production of cold Antarctic deep waters and a major expansion of the East Antarctic ice sheet. The closure of the Indonesian Throughflow, which caused an accumulation of warm water in the western Pacific that then spread eastward and reduced upwelling in the eastern Pacific, may also have been responsible. A Middle Miocene δ18O increase, that is, a relative increase in the heavier isotope of oxygen, has been noted in the Pacific, the Southern Ocean and the South Atlantic. Barium and uranium became enriched in seafloor sediments.
Impact event
A large impact event occurred either during the Miocene (23–5.3 Ma) or the Pliocene (5.3–2.6 Ma). The event formed the Karakul crater (52 km diameter) in Tajikistan, which is estimated to have an age of less than 23 Ma or less than 5 Ma.
| Physical sciences | Geological timescale | Earth science |
19022 | https://en.wikipedia.org/wiki/Measurement | Measurement | Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events.
In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind.
The scope and application of measurement are dependent on the context and discipline. In natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales.
Measurement is a cornerstone of trade, science, technology and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology.
Measurement is defined as the process of comparison of an unknown quantity with a known or standard quantity.
History
Methodology
The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. They enable unambiguous comparisons between measurements.
The level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference. The type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure.
The magnitude is the numerical value of the characterization, usually obtained with a suitably chosen measuring instrument.
A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as a standard or to a natural physical quantity.
An uncertainty represents the random and systematic errors of the measurement procedure; it indicates a confidence level in the measurement. Errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
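As a simple illustration of how repeated readings yield a magnitude and an uncertainty, consider the sketch below. The readings are hypothetical, not drawn from this article, and the calculation captures only the random component of error, not systematic bias:

    # Hypothetical repeated readings of one quantity (here, an acceleration in m/s^2).
    import statistics

    readings = [9.79, 9.81, 9.83, 9.80, 9.82]

    magnitude = statistics.mean(readings)            # best estimate of the value
    spread = statistics.stdev(readings)              # scatter caused by random error
    uncertainty = spread / len(readings) ** 0.5      # standard uncertainty of the mean

    print(f"{magnitude:.3f} +/- {uncertainty:.3f} m/s^2")   # 9.810 +/- 0.007 m/s^2
    # Note: systematic errors (e.g. a miscalibrated instrument) are not revealed
    # by repetition and must be assessed separately.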
Standardization of measurement units
Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. All of these units are defined without reference to a particular physical object which serves as a standard. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomena in nature, in contrast to standard artifacts which are subject to deterioration or destruction. Instead, the measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to.
The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce (1839–1914), who proposed to define the metre in terms of the wavelength of a spectral line. This directly influenced the Michelson–Morley experiment; Michelson and Morley cite Peirce, and improve on his method.
Standards
With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce.
Units of measurement are generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units (SI). For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light, the kilogram was redefined in 2019 in terms of the Planck constant and the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as being exactly 0.9144 metres.
In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL), in Australia by the National Measurement Institute, in South Africa by the Council for Scientific and Industrial Research, and in India by the National Physical Laboratory of India.
Units and systems
A unit is a known or standard quantity in terms of which other physical quantities are measured.
Imperial and US customary systems
Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems after the Imperial units for length, weight and time even though the tons, hundredweights, gallons, and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain, which has officially switched to the SI system—with a few exceptions such as road signs, which are still in miles. Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples. Imperial units are used in many other places, for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.
Metric system
The metric system is a decimal system of measurement based on its units for length, the metre and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. Since the 1960s, the International System of Units (SI) is the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes.
International System of Units
The International System of Units (abbreviated as SI from the French language name Système International d'Unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which, in turn, had many variants. The SI units for the seven base physical quantities are the second (time), metre (length), kilogram (mass), ampere (electric current), kelvin (thermodynamic temperature), mole (amount of substance), and candela (luminous intensity).
In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units; for example, the watt, the unit for power, is defined from the base units as m²·kg·s⁻³. Other physical properties may be measured in compound units, such as material density, measured in kg/m³.
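As a minimal sketch, a derived unit can be represented as a set of base-unit exponents that combine by addition when quantities are multiplied; the dictionary representation and helper name below are illustrative, not a standard library.

```python
# Minimal sketch: a derived SI unit as a dictionary of base-unit exponents.
def combine(*factors):
    """Multiply dimensioned quantities by adding their base-unit exponents."""
    result = {}
    for f in factors:
        for unit, power in f.items():
            result[unit] = result.get(unit, 0) + power
    return {u: p for u, p in result.items() if p != 0}

kilogram = {"kg": 1}
metre    = {"m": 1}
per_sec  = {"s": -1}

# energy (joule) = kg·m²·s⁻²; power (watt) = joule per second = m²·kg·s⁻³
joule = combine(kilogram, metre, metre, per_sec, per_sec)
watt  = combine(joule, per_sec)
print(watt)   # {'kg': 1, 'm': 2, 's': -3}
```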
Converting prefixes
The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01 or divides the number of centimetres by 100.
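The arithmetic can be sketched in a few lines of Python; the prefix table and function name below are illustrative only, covering just a handful of prefixes.

```python
# Minimal sketch of SI prefix conversion by powers of ten (illustrative names only).
PREFIX_FACTORS = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same base unit."""
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(2.5, "", "c"))   # 2.5 m  -> 250.0 cm
print(convert(250, "c", ""))   # 250 cm -> 2.5 m
```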
Length
A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines and the calibrated instrument used for determining length is called a measure; however, common usage calls both instruments rulers, and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, only survives in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. For example, a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing.
Time
Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non-spatial continuum. It is also used to denote an interval between two relative points on this continuum.
Mass
Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall, where no supporting force acts, objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass.
One device for measuring weight or mass is called a weighing scale or, often, simply a scale. A spring scale measures force but not mass, and a balance compares weight; both require a gravitational field to operate. Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but these also require a gravitational field to function and would not work in free fall.
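A minimal sketch of the mass–weight distinction, using the relation weight = mass × gravitational acceleration; the values of g used are approximate and the function name is illustrative.

```python
# Minimal sketch: weight W = m * g depends on the local field, mass m does not.
def weight_newtons(mass_kg, g=9.81):      # g in m/s^2; 9.81 is an approximate Earth value
    return mass_kg * g

m = 70.0                                   # kilograms
print(weight_newtons(m))                   # on Earth: ~686.7 N
print(weight_newtons(m, g=1.62))           # on the Moon: ~113.4 N, same mass
print(weight_newtons(m, g=0.0))            # in free fall: 0 N, mass unchanged
```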
Economics
The measures used in economics are physical measures, nominal price value measures and real price measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements.
Survey research
In the field of survey research, measures are taken from individual attitudes, values, and behavior using questionnaires as a measurement instrument. As with all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure of the value provided by the measurement instrument from the true value. In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. To obtain accurate results when measurement errors appear, the results need to be corrected for them.
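One common correction of this kind is the classical correction for attenuation of a correlation. The sketch below assumes that reliability estimates for both variables are available; the function name and the numbers are purely illustrative, and this is only one of several possible correction methods.

```python
from math import sqrt

# Minimal sketch: correcting an observed correlation for measurement error
# ("correction for attenuation"), assuming reliability estimates are available.
def disattenuate(r_observed, reliability_x, reliability_y):
    """Estimate the correlation between true scores from the observed correlation."""
    return r_observed / sqrt(reliability_x * reliability_y)

# Hypothetical numbers: observed r = 0.30, reliabilities 0.8 and 0.7
print(round(disattenuate(0.30, 0.8, 0.7), 3))   # ~0.401
```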
Exactness designation
The following rules generally apply for displaying the exactness of measurements:
All non-0 digits and any 0s appearing between them are significant for the exactness of any number. For example, the number 12000 has two significant digits, and has implied limits of 11500 and 12500.
Additional 0s may be added after a decimal separator to denote a greater exactness, increasing the number of decimals. For example, 1 has implied limits of 0.5 and 1.5 whereas 1.0 has implied limits 0.95 and 1.05.
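A minimal sketch of the decimal-places rule above; it handles only values written with an explicit decimal part and does not cover trailing-zero cases such as 12000. The helper name is illustrative.

```python
# Minimal sketch: implied limits of a value written with a given number of decimals.
def implied_limits(text_value):
    """Return (lower, upper) implied by half a unit in the last shown decimal place."""
    decimals = len(text_value.split(".")[1]) if "." in text_value else 0
    half_step = 0.5 * 10 ** (-decimals)
    x = float(text_value)
    return x - half_step, x + half_step

print(implied_limits("1"))     # (0.5, 1.5)
print(implied_limits("1.0"))   # (0.95, 1.05)
```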
Difficulties
Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of effort must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 second to fall one metre. However, the following are just some of the sources of error that arise:
This computation used 9.8 metres per second squared for the acceleration of gravity. But this value is not exact, only precise to two significant digits.
The Earth's gravitational field varies slightly depending on height above sea level and other factors.
The computation of 0.45 seconds involved extracting a square root, a mathematical operation that required rounding off to some number of significant digits, in this case two significant digits.
Additionally, other sources of experimental error include:
carelessness,
determining the exact time at which the object is released and the exact time it hits the ground,
measurement of the height and the measurement of the time both involve some error,
air resistance,
posture of human participants.
Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic.
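The computation discussed above can be sketched as follows, using the free-fall relation t = √(2h/g) with the approximate value g = 9.8 m/s²; the rounding step shows where the two-significant-digit figure of 0.45 s comes from.

```python
from math import sqrt

# Minimal sketch of the computation discussed above: time to fall one metre,
# using g = 9.8 m/s^2 (itself only known here to two significant digits).
g = 9.8                      # m/s^2, approximate
height = 1.0                 # metres
t = sqrt(2 * height / g)     # from h = (1/2) g t^2
print(t)                     # ~0.4518 s
print(round(t, 2))           # 0.45 s after rounding to two significant digits
```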
Definitions and theories
Classical definition
In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities. Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's Elements.
Representational theory
In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers". The most technically elaborated form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens, numbers need only be assigned according to a rule.
The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria.
Three types of representational theory
Empirical relation
In science, an empirical relationship is a relationship or correlation based solely on observation rather than theory. An empirical relationship requires only confirmatory data irrespective of theoretical basis.
The rule of mapping
The real world is the domain of the mapping, and the mathematical world is the range. When we map an attribute to a mathematical system, we have many choices for both the mapping and the range.
The representation condition of measurement
Theory
All data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity." This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. In this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement.
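A minimal sketch of this view, reporting a mean together with an uncertainty estimate (here the standard error of the mean) for a small set of hypothetical repeated readings; the data values are illustrative only.

```python
from statistics import mean, stdev
from math import sqrt

# Minimal sketch: reporting a measurement as a mean with an uncertainty estimate,
# here the standard error of the mean of repeated (hypothetical) readings.
readings = [0.46, 0.44, 0.45, 0.47, 0.45]          # seconds, illustrative data
m = mean(readings)
sem = stdev(readings) / sqrt(len(readings))
print(f"{m:.3f} ± {sem:.3f} s")                    # e.g. 0.454 ± 0.005 s
```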
Quantum mechanics
In quantum mechanics, a measurement is an action that determines a particular property (such as position, momentum, or energy) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete, not continuous. Quantum measurements alter quantum states and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value. The unambiguous meaning of the quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value.
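As a rough sketch of the statistical character of quantum measurement, outcomes for a single qubit can be simulated as samples from the Born-rule probabilities; the amplitudes below are illustrative, and real measurements involve the state change described above, which this toy sampler does not model.

```python
import random

# Minimal sketch: quantum measurement outcomes as samples from a discrete
# probability distribution (the Born rule), for a single qubit state
# a|0> + b|1> with |a|^2 + |b|^2 = 1.  Purely illustrative.
a, b = 0.6, 0.8                       # amplitudes, so P(0) = 0.36 and P(1) = 0.64
p0 = a * a

outcomes = [0 if random.random() < p0 else 1 for _ in range(10000)]
print(outcomes.count(0) / len(outcomes))   # close to 0.36
```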
Biology
In biology, there is generally no well-established theory of measurement. However, the importance of the theoretical context is emphasized. Moreover, the theoretical context stemming from the theory of evolution leads to the articulation of measurement theory together with historicity as a fundamental notion.
Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity.
Metal
A metal is a material that, when polished or fractured, shows a lustrous appearance, and conducts electricity and heat relatively well. These properties are all associated with having electrons available at the Fermi level, as against nonmetallic materials which do not. Metals are typically ductile (can be drawn into wires) and malleable (they can be hammered into thin sheets).
A metal may be a chemical element such as iron; an alloy such as stainless steel; or a molecular compound such as polymeric sulfur nitride. The general science of metals is called metallurgy, a subtopic of materials science; aspects of the electronic and thermal properties are also within the scope of condensed matter physics and solid-state chemistry, making it a multidisciplinary topic. In colloquial use materials such as steel alloys are referred to as metals, while others such as polymers, wood or ceramics are nonmetallic materials.
A metal conducts electricity at a temperature of absolute zero, which is a consequence of delocalized states at the Fermi energy. Many elements and compounds become metallic under high pressures; for example, iodine gradually becomes a metal at a pressure of between 40 and 170 thousand times atmospheric pressure. Sodium becomes a nonmetal at a pressure of just under two million times atmospheric pressure, and at even higher pressures it is expected to become a metal again.
When discussing the periodic table and some chemical properties the term metal is often used to denote those elements which in pure form and at standard conditions are metals in the sense of electrical conduction mentioned above. The related term metallic may also be used for types of dopant atoms or alloying elements.
In astronomy metal refers to all chemical elements in a star that are heavier than helium. In this sense the first four "metals" collecting in stellar cores through nucleosynthesis are carbon, nitrogen, oxygen, and neon. A star fuses lighter atoms, mostly hydrogen and helium, into heavier atoms over its lifetime. The metallicity of an astronomical object is the proportion of its matter made up of the heavier chemical elements.
The strength and resilience of some metals has led to their frequent use in, for example, high-rise building and bridge construction, as well as most vehicles, many home appliances, tools, pipes, and railroad tracks. Precious metals were historically used as coinage, but in the modern era, coinage metals have extended to at least 23 of the chemical elements. There is also extensive use of multi-element metals such as titanium nitride or degenerate semiconductors in the semiconductor industry.
The history of refined metals is thought to begin with the use of copper about 11,000 years ago. Gold, silver, iron (as meteoric iron), lead, and brass were likewise in use before the first known appearance of bronze in the fifth millennium BCE. Subsequent developments include the production of early forms of steel; the discovery of sodium—the first light metal—in 1809; the rise of modern alloy steels; and, since the end of World War II, the development of more sophisticated alloys.
Properties
Form and structure
Most metals are shiny and lustrous, at least when polished or fractured. Sheets of metal thicker than a few micrometres appear opaque, but gold leaf transmits green light. This is due to the freely moving electrons which reflect light.
Although most elemental metals have higher densities than nonmetals, there is a wide variation in their densities, lithium being the least dense (0.534 g/cm3) and osmium (22.59 g/cm3) the most dense. Some of the 6d transition metals are expected to be denser than osmium, but their known isotopes are too unstable for bulk production to be possible. Magnesium, aluminium and titanium are light metals of significant commercial importance. Their respective densities of 1.7, 2.7, and 4.5 g/cm3 can be compared to those of the older structural metals, like iron at 7.9 and copper at 8.9 g/cm3. The most common lightweight metals are aluminium and magnesium alloys.
Metals are typically malleable and ductile, deforming under stress without cleaving. The nondirectional nature of metallic bonding contributes to the ductility of most metallic solids, where the Peierls stress is relatively low allowing for dislocation motion, and there are also many combinations of planes and directions for plastic deformation. Due to their having close-packed arrangements of atoms, the Burgers vectors of the dislocations are fairly small, which also means that the energy needed to produce one is small. In contrast, in an ionic compound like table salt the Burgers vectors are much larger and the energy to move a dislocation is far higher. Reversible elastic deformation in metals can be described well by Hooke's Law for the restoring forces, where the stress is linearly proportional to the strain.
A temperature change may lead to the movement of structural defects in the metal such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline metals. Internal slip, creep, and metal fatigue may also ensue.
The atoms of simple metallic substances are often in one of three common crystal structures, namely body-centered cubic (bcc), face-centered cubic (fcc), and hexagonal close-packed (hcp). In bcc, each atom is positioned at the center of a cube of eight others. In fcc and hcp, each atom is surrounded by twelve others, but the stacking of the layers differs. Some metals adopt different structures depending on the temperature.
Many other metals with different elements have more complicated structures, such as rock-salt structure in titanium nitride or perovskite (structure) in some nickelates.
Electrical and thermal
The electronic structure of metals means they are relatively good conductors of electricity. The electrons all have different momenta, which average to zero when there is no external voltage. When a voltage is applied some move a little faster in a given direction, some a little slower, so there is a net drift velocity which leads to an electric current. This involves small changes in which wavefunctions the electrons are in, changing to those with the higher momenta. Quantum mechanics dictates that one can only have one electron in a given state (the Pauli exclusion principle). Therefore there have to be empty delocalized electron states (with the higher momenta) available at the highest occupied energies. In a semiconductor like silicon or a nonmetal like strontium titanate there is an energy gap between the highest filled states of the electrons and the lowest unfilled, so there are no accessible states with slightly higher momenta. Consequently, semiconductors and nonmetals are poor conductors, although they can carry some current when doped with elements that introduce additional partially occupied energy states at higher temperatures.
The elemental metals have electrical conductivity values ranging from 6.9 × 10³ S/cm for manganese to 6.3 × 10⁵ S/cm for silver. In contrast, a semiconducting metalloid such as boron has an electrical conductivity of 1.5 × 10⁻⁶ S/cm. With one exception, metallic elements reduce their electrical conductivity when heated. Plutonium increases its electrical conductivity when heated in the temperature range of around −175 to +125 °C, with an anomalously large thermal expansion coefficient and a phase change from monoclinic to face-centered cubic near 100 °C. There is evidence that this and comparable behavior in transuranic elements is due to more complex relativistic and spin interactions which are not captured in simple models.
All of the metallic alloys as well as conducting ceramics and polymers are metals by the same definition; for instance titanium nitride has delocalized states at the Fermi level. They have electrical conductivities similar to those of elemental metals. Liquid forms are also metallic conductors of electricity, for instance mercury. In normal conditions no gases are metallic conductors. However, a plasma is a metallic conductor, and the charged particles in a plasma have many properties in common with those of electrons in elemental metals, particularly for white dwarf stars.
Metals are relatively good conductors of heat, which in metals is transported mainly by the conduction electrons. At higher temperatures the electrons can occupy slightly higher energy levels given by Fermi–Dirac statistics. These have slightly higher momenta (kinetic energy) and can pass on thermal energy. The empirical Wiedemann–Franz law states that in many metals the ratio between thermal and electrical conductivities is proportional to temperature, with a proportionality constant that is roughly the same for all metals.
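A minimal numerical sketch of this relation, using an approximate Lorenz number and a typical room-temperature electrical conductivity of copper; the figures are rough and serve only to show the order-of-magnitude agreement with measured thermal conductivities.

```python
# Minimal sketch of the Wiedemann–Franz relation: thermal conductivity kappa
# is roughly L * sigma * T, with the Lorenz number L ~ 2.44e-8 W·Ω/K².
L = 2.44e-8            # W·Ω/K², roughly the same for many metals
sigma_copper = 5.96e7  # S/m, electrical conductivity of copper near room temperature
T = 300.0              # K

kappa = L * sigma_copper * T
print(kappa)           # ~436 W/(m·K); measured values for copper are ~400 W/(m·K)
```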
The contribution of a metal's electrons to its heat capacity and thermal conductivity, and the electrical conductivity of the metal itself can be approximately calculated from the free electron model. However, this does not take into account the detailed structure of the metal's ion lattice. Taking into account the positive potential caused by the arrangement of the ion cores enables consideration of the electronic band structure and binding energy of a metal. Various models are applicable, the simplest being the nearly free electron model. Modern methods such as density functional theory are typically used.
Chemical
The elements which form metals usually form cations through electron loss. Most will react with oxygen in the air to form oxides over various timescales (potassium burns in seconds while iron rusts over years) which depend upon whether the native oxide forms a passivation layer that acts as a diffusion barrier. Some others, like palladium, platinum, and gold, do not react with the atmosphere at all; gold can form compounds where it gains an electron (aurides, e.g. caesium auride). The oxides of elemental metals are often basic. However, oxides with very high oxidation states such as CrO3, Mn2O7, and OsO4 often have strictly acidic reactions; and oxides of the less electropositive metals such as BeO, Al2O3, and PbO, can display both basic and acidic properties. The latter are termed amphoteric oxides.
Periodic table distribution of elemental metals
The elements that form exclusively metallic structures under ordinary conditions can be distinguished from the remaining elements, which either form covalent network structures, molecular covalent structures, or remain as single atoms. Astatine (At), francium (Fr), and the elements from fermium (Fm) onwards are extremely radioactive and have never been produced in bulk. Theoretical and experimental evidence suggests that these uninvestigated elements should be metals, except for oganesson (Og), which DFT calculations indicate would be a semiconductor.
The situation changes with pressure: at extremely high pressures, all elements (and indeed all substances) are expected to metallize. Arsenic (As) has both a stable metallic allotrope and a metastable semiconducting allotrope at standard conditions. A similar situation affects carbon (C): graphite is metallic, but diamond is not.
Alloys
In the context of metals, an alloy is a substance having metallic properties which is composed of two or more elements. Often at least one of these is a metallic element; the term "alloy" is sometimes used more generally as in silicon–germanium alloys. An alloy may have a variable or fixed composition. For example, gold and silver form an alloy in which the proportions of gold or silver can be varied; titanium and silicon form an alloy TiSi2 in which the ratio of the two components is fixed (also known as an intermetallic compound).
Most pure metals are either too soft, brittle, or chemically reactive for practical use. Combining different ratios of metals and other elements in alloys modifies the properties to produce desirable characteristics, for instance making them more ductile, harder, more resistant to corrosion, or giving them a more desirable color and luster. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steel) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid-, and high-carbon steels, with increasing carbon levels reducing ductility and toughness. The addition of silicon will produce cast irons, while the addition of chromium, nickel, and molybdenum to carbon steels (more than 10%) results in stainless steels with enhanced corrosion resistance.
Other significant metallic alloys are those of aluminum, titanium, copper, and magnesium. Copper alloys have been known since prehistory—bronze gave the Bronze Age its name—and have many applications today, most importantly in electrical wiring. The alloys of the other three metals have been developed relatively recently; due to their chemical reactivity they need electrolytic extraction processes. The alloys of aluminum, titanium, and magnesium are valued for their high strength-to-weight ratios; magnesium can also provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratio is more important than material cost, such as in aerospace and some automotive applications.
Alloys specially designed for highly demanding applications, such as jet engines, may contain more than ten elements.
Categories
Metals can be categorised by their composition, physical or chemical properties. Categories described in the subsections below include ferrous and non-ferrous metals; brittle metals and refractory metals; white metals; heavy and light metals; base, noble, and precious metals as well as both metallic ceramics and polymers.
Ferrous and non-ferrous metals
The term "ferrous" is derived from the Latin word meaning "containing iron". This can include pure iron, such as wrought iron, or an alloy such as steel. Ferrous metals are often magnetic, but not exclusively. Non-ferrous metals and alloys lack appreciable amounts of iron.
Brittle elemental metal
While nearly all elemental metals are malleable or ductile, a few—beryllium, chromium, manganese, gallium, and bismuth—are brittle. Arsenic and antimony, if admitted as metals, are brittle. Low values of the ratio of bulk elastic modulus to shear modulus (Pugh's criterion) are indicative of intrinsic brittleness. A material is brittle if it is hard for dislocations to move, which is often associated with large Burgers vectors and only a limited number of slip planes.
Refractory metal
A refractory metal is a metal that is very resistant to heat and wear. Which metals belong to this category varies; the most common definition includes niobium, molybdenum, tantalum, tungsten, and rhenium as well as their alloys. They all have melting points above 2000 °C, and a high hardness at room temperature. Several compounds such as titanium nitride are also described as refractory metals.
White metal
A white metal is any of a range of white-colored alloys with relatively low melting points used mainly for decorative purposes. In Britain, the fine art trade uses the term "white metal" in auction catalogues to describe foreign silver items which do not carry British Assay Office marks, but which are nonetheless understood to be silver and are priced accordingly.
Heavy and light metals
A heavy metal is any relatively dense metal, either single element or multielement. Magnesium, aluminium and titanium alloys are light metals of significant commercial importance. Their densities of 1.7, 2.7 and 4.5 g/cm3 range from 19 to 56% of the densities of other structural metals, such as iron (7.9) and copper (8.9) and their alloys.
Base, noble, and precious metals
The term base metal refers to a metal that is easily oxidized or corroded, such as reacting easily with dilute hydrochloric acid (HCl) to form a metal chloride and hydrogen. The term is normally used for the elements, and examples include iron, nickel, lead, and zinc. Copper is considered a base metal as it is oxidized relatively easily, although it does not react with HCl.
The term noble metal (also for elements) is commonly used in opposition to base metal. Noble metals are less reactive and resistant to corrosion or oxidation, unlike most base metals. They tend to be precious metals, often due to perceived rarity. Examples include gold, platinum, silver, rhodium, iridium, and palladium.
In alchemy and numismatics, the term base metal is contrasted with precious metal, that is, those of high economic value. Most coins today are made of base metals with low intrinsic value; in the past, coins frequently derived their value primarily from their precious metal content; gold, silver, platinum, and palladium each have an ISO 4217 currency code. Currently, precious metals have industrial uses (for example, platinum and palladium in catalytic converters), are used in jewellery, and also serve as investments and a store of value. Palladium and platinum, as of summer 2024, were valued at slightly less than half the price of gold, while silver is substantially less expensive.
Valve metals
In electrochemistry, a valve metal is a metal which passes current in only one direction due to the formation of an insulating oxide layer.
Metallic ceramics
There are many ceramic compounds which have metallic electrical conduction, but are not simple combinations of metallic elements. (They are not the same as cermets which are composites of a non-conducting ceramic and a conducting metal.) One set, the transition metal nitrides has significant ionic character to the bonding, so can be classified as both ceramics and metals. They have partially filled states at the Fermi level so are good thermal and electrical conductors, and there is often significant charge transfer from the transition metal atoms to the nitrogen. However, unlike most elemental metals, ceramic metals are often not particularly ductile. Their uses are widespread, for instance titanium nitride finds use in orthopedic devices and as a wear resistant coating. In many cases their utility depends upon there being effective deposition methods so they can be used as thin film coatings.
Metallic polymers
There are many polymers which have metallic electrical conduction, typically associated with extended aromatic components. The conduction of the aromatic regions is similar to that of graphite, so it is highly directional.
Half metal
A half-metal is any substance that acts as a conductor to electrons of one spin orientation, but as an insulator or semiconductor to those of the opposite spin. They were first described in 1983, as an explanation for the electrical properties of manganese-based Heusler alloys. Although all half-metals are ferromagnetic (or ferrimagnetic), most ferromagnets are not half-metals. Many of the known examples of half-metals are oxides, sulfides, or Heusler alloys.
Semimetal
A semimetal is a material with a small energy overlap between the bottom of the conduction band and the top of the valence band, but they do not overlap in momentum space. Unlike a regular metal, semimetals have charge carriers of both types (holes and electrons), although the charge carriers typically occur in much smaller numbers than in a real metal. In this respect they resemble degenerate semiconductors. This explains why the electrical properties of semimetals are partway between those of metals and semiconductors. There are additional types, in particular Weyl and Dirac semimetals.
The classic elemental semimetallic elements are arsenic, antimony, bismuth, α-tin (gray tin) and graphite. There are also chemical compounds, such as mercury telluride (HgTe), and some conductive polymers.
Lifecycle
Formation
Metallic elements up to the vicinity of iron (in the periodic table) are largely made via stellar nucleosynthesis. In this process, lighter elements from hydrogen to silicon undergo successive fusion reactions inside stars, releasing light and heat and forming heavier elements with higher atomic numbers.
Heavier elements are not usually formed this way since fusion reactions involving such nuclei would consume rather than release energy. Rather, they are largely synthesised (from elements with a lower atomic number) by neutron capture, with the two main modes of this repetitive capture being the s-process and the r-process. In the s-process ("s" stands for "slow"), singular captures are separated by years or decades, allowing the less stable nuclei to beta decay, while in the r-process ("rapid"), captures happen faster than nuclei can decay. Therefore, the s-process takes a more-or-less clear path: for example, stable cadmium-110 nuclei are successively bombarded by free neutrons inside a star until they form cadmium-115 nuclei which are unstable and decay to form indium-115 (which is nearly stable, with a half-life about 30,000 times the age of the universe). These nuclei capture neutrons and form indium-116, which is unstable, and decays to form tin-116, and so on. In contrast, there is no such path in the r-process. The s-process stops at bismuth due to the short half-lives of the next two elements, polonium and astatine, which decay to bismuth or lead. The r-process is so fast it can skip this zone of instability and go on to create heavier elements such as thorium and uranium.
Metals condense in planets as a result of stellar evolution and destruction processes. Stars lose much of their mass when it is ejected late in their lifetimes, and sometimes thereafter as a result of a neutron star merger, thereby increasing the abundance of elements heavier than helium in the interstellar medium. When gravitational attraction causes this matter to coalesce and collapse new stars and planets are formed.
Abundance and occurrence
The Earth's crust is made of approximately 25% metallic elements by weight, of which 80% are light metals such as sodium, magnesium, and aluminium. Despite the overall scarcity of some heavier metals such as copper, they can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.
Metallic elements are primarily found as lithophiles (rock-loving) or chalcophiles (ore-loving). Lithophile elements are mainly the s-block elements, the more reactive of the d-block elements, and the f-block elements. They have a strong affinity for oxygen and mostly exist as relatively low-density silicate minerals. Chalcophile elements are mainly the less reactive d-block elements, and the period 4–6 p-block metals. They are usually found in (insoluble) sulfide minerals. Being denser than the lithophiles, hence sinking lower into the crust at the time of its solidification, the chalcophiles tend to be less abundant than the lithophiles.
On the other hand, gold is a siderophile, or iron-loving element. It does not readily form compounds with either oxygen or sulfur. At the time of the Earth's formation, and as the most noble (inert) of metallic elements, gold sank into the core due to its tendency to form high-density metallic alloys. Consequently, it is relatively rare. Some other (less) noble ones—molybdenum, rhenium, the platinum group metals (ruthenium, rhodium, palladium, osmium, iridium, and platinum), germanium, and tin—can be counted as siderophiles but only in terms of their primary occurrence in the Earth (core, mantle, and crust), rather than the crust. These otherwise occur in the crust, in small quantities, chiefly as chalcophiles (less so in their native form).
The rotating fluid outer core of the Earth's interior, which is composed mostly of iron, is thought to be the source of Earth's protective magnetic field. The core lies above Earth's solid inner core and below its mantle. If it could be rearranged into a column having a footprint it would have a height of nearly 700 light years. The magnetic field shields the Earth from the charged particles of the solar wind, and cosmic rays that would otherwise strip away the upper atmosphere (including the ozone layer that limits the transmission of ultraviolet radiation).
Extraction
Metallic elements are often extracted from the Earth by mining ores that are rich sources of the requisite elements, such as bauxite. Ores are located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. In some cases, the sale price of the metal(s) involved make it economically feasible to mine lower concentration sources.
Once the ore is mined, the elements must be extracted, usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose.
When a metallic ore is an ionic compound, the ore must usually be smelted—heated with a reducing agent—to extract the pure metal. Many common metals, such as iron, are smelted using carbon as a reducing agent. Some metals, such as aluminum and sodium, have no commercially practical reducing agent, and are extracted using electrolysis instead.
Sulfide ores are not reduced directly to the metal but are roasted in air to convert them to oxides.
Recycling
Demand for metals is closely linked to economic growth given their use in infrastructure, construction, manufacturing, and consumer goods. During the 20th century, the variety of metals used in society grew rapidly. Today, the development of major nations, such as China and India, and technological advances, are fueling ever more demand. The result is that mining activities are expanding, and more and more of the world's metal stocks are above ground in use, rather than below ground as unused reserves. An example is the in-use stock of copper. Between 1932 and 1999, copper in use in the U.S. rose from 73 g to 238 g per person.
Metals are inherently recyclable, so in principle, can be used over and over again, minimizing these negative environmental impacts and saving energy. For example, 95% of the energy used to make aluminum from bauxite ore is saved by using recycled material.
Globally, metal recycling is generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme published reports on metal stocks that exist within society and their recycling rates. The authors of the report observed that the metal stocks in society can serve as huge mines above ground. They warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells are so low that unless future end-of-life recycling rates are dramatically stepped up these critical metals will become unavailable for use in modern technology.
History
Prehistory
Copper, which occurs in native form, may have been the first metal discovered given its distinctive appearance, heaviness, and malleability. Gold, silver, iron (as meteoric iron), and lead were likewise discovered in prehistory. Forms of brass, an alloy of copper and zinc made by concurrently smelting the ores of these metals, originate from this period (although pure zinc was not isolated until the 13th century). The malleability of the solid metals led to the first attempts to craft metal ornaments, tools, and weapons. Meteoric iron containing nickel was discovered from time to time and, in some respects, this was superior to any industrial steel manufactured up to the 1880s, when alloy steels became prominent.
Antiquity
The discovery of bronze (an alloy of copper with arsenic or tin) enabled people to create metal objects which were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made of copper and arsenic (forming arsenic bronze) by smelting naturally or artificially mixed ores of copper and arsenic. The earliest artifacts so far known come from the Iranian plateau in the fifth millennium BCE. It was only later that tin was used, becoming the major non-copper ingredient of bronze in the late third millennium BCE. Pure tin itself was first isolated in 1800 BCE by Chinese and Japanese metalworkers.
Mercury was known to ancient Chinese and Indians before 2000 BCE, and found in Egyptian tombs dating from 1500 BCE.
The earliest known production of steel, an iron-carbon alloy, is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük) which are nearly 4,000 years old, dating from 1800 BCE.
From about 500 BCE sword-makers of Toledo, Spain, were making early forms of alloy steel by adding a mineral called wolframite, which contained tungsten and manganese, to iron ore (and carbon). The resulting Toledo steel came to the attention of Rome when used by Hannibal in the Punic Wars. It soon became the basis for the weaponry of Roman legions; such swords were, "stronger in composition than any existing sword and, because… [they] would not break, provided a psychological advantage to the Roman soldier."
In pre-Columbian America, objects made of tumbaga, an alloy of copper and gold, started being produced in Panama and Costa Rica between 300 and 500 CE. Small metal sculptures were common and an extensive range of tumbaga (and gold) ornaments comprised the usual regalia of persons of high status.
At around the same time indigenous Ecuadorians were combining gold with a naturally-occurring platinum alloy containing small amounts of palladium, rhodium, and iridium, to produce miniatures and masks of a white gold-platinum alloy. The metal workers involved heated gold with grains of the platinum alloy until the gold melted. After cooling, the resulting conglomeration was hammered and reheated repeatedly until it became homogenous, equivalent to melting all the metals (attaining the melting points of the platinum group metals concerned was beyond the technology of the day).
Middle Ages
Arabic and medieval alchemists believed that all metals and matter were composed of the principle of sulfur, the father of all metals and carrying the combustible property, and the principle of mercury, the mother of all metals and carrier of the liquidity, fusibility, and volatility properties. These principles were not necessarily the common substances sulfur and mercury found in most laboratories. This theory reinforced the belief that all metals were destined to become gold in the bowels of the earth through the proper combinations of heat, digestion, time, and elimination of contaminants, all of which could be developed and hastened through the knowledge and methods of alchemy.
Arsenic, zinc, antimony, and bismuth became known, although these were at first called semimetals or bastard metals on account of their immalleability. Albertus Magnus is believed to have been the first to isolate arsenic from a compound in 1250, by heating soap together with arsenic trisulfide. Metallic zinc, which is brittle if impure, was isolated in India by 1300 AD. The first description of a procedure for isolating antimony is in the 1540 book De la pirotechnia by Vannoccio Biringuccio. Bismuth was described by Agricola in De Natura Fossilium (c. 1546); it had been confused in early times with tin and lead because of its resemblance to those elements.
The Renaissance
The first systematic text on the arts of mining and metallurgy was De la Pirotechnia (1540) by Vannoccio Biringuccio, which treats the examination, fusion, and working of metals.
Sixteen years later, Georgius Agricola published De Re Metallica in 1556, an account of the profession of mining, metallurgy, and the accessory arts and sciences, an extensive treatise on the chemical industry through the sixteenth century.
He gave the following description of a metal in his De Natura Fossilium (1546):
Metal is a mineral body, by nature either liquid or somewhat hard. The latter may be melted by the heat of the fire, but when it has cooled down again and lost all heat, it becomes hard again and resumes its proper form. In this respect it differs from the stone which melts in the fire, for although the latter regain its hardness, yet it loses its pristine form and properties.
Traditionally there are six different kinds of metals, namely gold, silver, copper, iron, tin, and lead. There are really others, for quicksilver is a metal, although the Alchemists disagree with us on this subject, and bismuth is also. The ancient Greek writers seem to have been ignorant of bismuth, wherefore Ammonius rightly states that there are many species of metals, animals, and plants which are unknown to us. Stibium when smelted in the crucible and refined has as much right to be regarded as a proper metal as is accorded to lead by writers. If when smelted, a certain portion be added to tin, a bookseller's alloy is produced from which the type is made that is used by those who print books on paper.
Each metal has its own form which it preserves when separated from those metals which were mixed with it. Therefore neither electrum nor Stannum [not meaning our tin] is of itself a real metal, but rather an alloy of two metals. Electrum is an alloy of gold and silver, Stannum of lead and silver. And yet if silver be parted from the electrum, then gold remains and not electrum; if silver be taken away from Stannum, then lead remains and not Stannum.
Whether brass, however, is found as a native metal or not, cannot be ascertained with any surety. We only know of the artificial brass, which consists of copper tinted with the colour of the mineral calamine. And yet if any should be dug up, it would be a proper metal. Black and white copper seem to be different from the red kind.
Metal, therefore, is by nature either solid, as I have stated, or fluid, as in the unique case of quicksilver.
But enough now concerning the simple kinds.
Platinum, the third precious metal after gold and silver, was discovered in Ecuador during the period 1736 to 1744 by the Spanish astronomer Antonio de Ulloa and his colleague the mathematician Jorge Juan y Santacilia. Ulloa was the first person to write a scientific description of the metal, in 1748.
In 1789, the German chemist Martin Heinrich Klaproth isolated an oxide of uranium, which he thought was the metal itself. Klaproth was subsequently credited as the discoverer of uranium. It was not until 1841 that the French chemist Eugène-Melchior Péligot prepared the first sample of uranium metal. Henri Becquerel subsequently discovered radioactivity in 1896 using uranium.
In the 1790s, Joseph Priestley and the Dutch chemist Martinus van Marum observed the effect of metal surfaces on the dehydrogenation of alcohol, a development which subsequently led, in 1831, to the industrial scale synthesis of sulphuric acid using a platinum catalyst.
In 1803, cerium was the first of the lanthanide metals to be discovered, in Bastnäs, Sweden by Jöns Jakob Berzelius and Wilhelm Hisinger, and independently by Martin Heinrich Klaproth in Germany. The lanthanide metals were regarded as oddities until the 1960s when methods were developed to more efficiently separate them from one another. They have subsequently found uses in cell phones, magnets, lasers, lighting, batteries, catalytic converters, and in other applications enabling modern technologies.
Other metals discovered and prepared during this time were cobalt, nickel, manganese, molybdenum, tungsten, and chromium; and some of the platinum group metals, palladium, osmium, iridium, and rhodium.
Light metallic elements
All elemental metals discovered before 1809 had relatively high densities; their heaviness was regarded as a distinguishing criterion. From 1809 onward, light metals such as sodium, potassium, and strontium were isolated. Their low densities challenged conventional wisdom as to the nature of metals. They behaved chemically as metals however, and were subsequently recognized as such.
Aluminium was discovered in 1824 but it was not until 1886 that an industrial large-scale production method was developed. Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal many uses at the time. During World War I, major governments demanded large shipments of aluminium for light and strong airframes.
While pure metallic titanium (99.9%) was first prepared in 1910 it was not used outside the laboratory until 1932. In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications as part of programs related to the Cold War. Starting in the early 1950s, titanium came into use in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12 and SR-71.
Metallic scandium was produced for the first time in 1937. The first pound of 99% pure scandium metal was produced in 1960. Production of aluminium-scandium alloys began in 1971 following a U.S. patent. Aluminium-scandium alloys were also developed in the USSR.
The age of steel
The modern era in steelmaking began with the introduction of Henry Bessemer's Bessemer process in 1855, the raw material for which was pig iron. His method let him produce steel in large quantities cheaply, thus mild steel came to be used for most purposes for which wrought iron was formerly used. The Gilchrist-Thomas process (or basic Bessemer process) was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus.
Due to its high tensile strength and low cost, steel came to be a major component used in buildings, infrastructure, tools, ships, automobiles, machines, appliances, and weapons.
In 1872, the Englishmen Clark and Woods patented an alloy that would today be considered a stainless steel. The corrosion resistance of iron-chromium alloys had been recognized in 1821 by French metallurgist Pierre Berthier. He noted their resistance against attack by some acids and suggested their use in cutlery. Metallurgists of the 19th century were unable to produce the combination of low carbon and high chromium found in most modern stainless steels, and the high-chromium alloys they could produce were too brittle to be practical. It was not until 1912 that the industrialization of stainless steel alloys occurred in England, Germany, and the United States.
The last stable metallic elements
By 1900 three metals with atomic numbers less than lead (#82), the heaviest stable metal, remained to be discovered: elements 71, 72, and 75.
Von Welsbach, in 1906, proved that the old ytterbium also contained a new element (#71), which he named cassiopeium. Urbain proved this simultaneously, but his samples were very impure and only contained trace quantities of the new element. Despite this, his chosen name lutetium was adopted.
In 1908, Ogawa found element 75 in thorianite but assigned it as element 43 instead of 75 and named it nipponium. In 1925 Walter Noddack, Ida Eva Tacke, and Otto Berg announced its separation from gadolinite and gave it the present name, rhenium.
Georges Urbain claimed to have found element 72 in rare-earth residues, while Vladimir Vernadsky independently found it in orthite. Neither claim was confirmed due to World War I, and neither could be confirmed later, as the chemistry they reported does not match that now known for hafnium. After the war, in 1922, Coster and Hevesy found it by X-ray spectroscopic analysis in Norwegian zircon. Hafnium was thus the last stable element to be discovered, though rhenium was the last to be correctly recognized.
By the end of World War II scientists had synthesized four post-uranium elements, all of which are radioactive (unstable) metals: neptunium (in 1940), plutonium (1940–41), and curium and americium (1944), representing elements 93 to 96. The first two of these were eventually found in nature as well. Curium and americium were by-products of the Manhattan project, which produced the world's first atomic bomb in 1945. The bomb was based on the nuclear fission of uranium, a metal first thought to have been discovered nearly 150 years earlier.
Post-World War II developments
Superalloys
Superalloys composed of combinations of Fe, Ni, Co, and Cr, and lesser amounts of W, Mo, Ta, Nb, Ti, and Al were developed shortly after World War II for use in high performance engines, operating at elevated temperatures (above 650 °C (1,200 °F)). They retain most of their strength under these conditions, for prolonged periods, and combine good low-temperature ductility with resistance to corrosion or oxidation. Superalloys can now be found in a wide range of applications including land, maritime, and aerospace turbines, and chemical and petroleum plants.
Transcurium metals
The successful development of the atomic bomb at the end of World War II sparked further efforts to synthesize new elements, nearly all of which are, or are expected to be, metals, and all of which are radioactive. It was not until 1949 that element 97 (Berkelium), next after element 96 (Curium), was synthesized by firing alpha particles at an americium target. In 1952, element 100 (Fermium) was found in the debris of the first hydrogen bomb explosion; hydrogen, a nonmetal, had been identified as an element nearly 200 years earlier. Since 1952, elements 101 (Mendelevium) to 118 (Oganesson) have been synthesized.
Bulk metallic glasses
A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most pure and alloyed metals, in their solid state, have atoms arranged in a highly ordered crystalline structure. In contrast these have a non-crystalline glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Currently, the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers. Theft control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties.
Shape-memory alloys
A shape-memory alloy (SMA) is an alloy that "remembers" its original shape: when deformed, it returns to its pre-deformed shape when heated. While the shape memory effect was first observed in 1932, in an Au-Cd alloy, it was not until 1962, with the accidental discovery of the effect in a Ni-Ti alloy, that research began in earnest, and another ten years before commercial applications emerged. SMAs have applications in robotics and in the automotive, aerospace, and biomedical industries. There is another type of SMA, called a ferromagnetic shape-memory alloy (FSMA), that changes shape under strong magnetic fields. These materials are of interest as the magnetic response tends to be faster and more efficient than temperature-induced responses.
Quasicrystalline alloys
In 1984, Israeli metallurgist Dan Shechtman found an aluminum-manganese alloy having five-fold symmetry, in breach of crystallographic convention at the time which said that crystalline structures could only have two-, three-, four-, or six-fold symmetry. Due to reservations about the scientific community's reaction, it took him two years to publish the results, for which he was awarded the Nobel Prize in Chemistry in 2011. Since this time, hundreds of quasicrystals have been reported and confirmed. They exist in many metallic alloys (and some polymers). Quasicrystals are found most often in aluminum alloys (Al-Li-Cu, Al-Mn-Si, Al-Ni-Co, Al-Pd-Mn, Al-Cu-Fe, Al-Cu-V, etc.), but numerous other compositions are also known (Cd-Yb, Ti-Zr-Ni, Zn-Mg-Ho, Zn-Mg-Sc, In-Ag-Yb, Pd-U-Si, etc.). Quasicrystals effectively have infinitely large unit cells. Icosahedrite Al63Cu24Fe13, the first quasicrystal found in nature, was discovered in 2009. Most quasicrystals have ceramic-like properties including low electrical conductivity (approaching values seen in insulators) and low thermal conductivity, high hardness, brittleness, and resistance to corrosion, and non-stick properties. Quasicrystals have been used to develop heat insulation, LEDs, diesel engines, and new materials that convert heat to electricity. New applications may take advantage of the low coefficient of friction and the hardness of some quasicrystalline materials, for example embedding particles in plastic to make strong, hard-wearing, low-friction plastic gears. Other potential applications include selective solar absorbers for power conversion, broad-wavelength reflectors, and bone repair and prostheses applications where biocompatibility, low friction, and corrosion resistance are required.
Complex metallic alloys
Complex metallic alloys (CMAs) are intermetallic compounds characterized by large unit cells comprising some tens up to thousands of atoms; the presence of well-defined clusters of atoms (frequently with icosahedral symmetry); and partial disorder within their crystalline lattices. They are composed of two or more metallic elements, sometimes with metalloids or chalcogenides added. They include, for example, NaCd2, with 348 sodium atoms and 768 cadmium atoms in the unit cell. Linus Pauling attempted to describe the structure of NaCd2 in 1923, but did not succeed until 1955. At first called "giant unit cell crystals", interest in CMAs, as they came to be called, did not pick up until 2002, with the publication of a paper called "Structurally Complex Alloy Phases", given at the 8th International Conference on Quasicrystals. Potential applications of CMAs include as heat insulation; solar heating; magnetic refrigerators; using waste heat to generate electricity; and coatings for turbine blades in military engines.
High-entropy alloys
High entropy alloys (HEAs) such as AlLiMgScTi are composed of equal or nearly equal quantities of five or more metals. Compared to conventional alloys with only one or two base metals, HEAs have considerably better strength-to-weight ratios, higher tensile strength, and greater resistance to fracturing, corrosion, and oxidation. Although HEAs were described as early as 1981, significant interest did not develop until the 2010s; they continue to be a focus of research in materials science and engineering because of their desirable properties.
MAX phase
In a MAX phase, M is an early transition metal, A is an A-group element (mostly group IIIA and IVA, or groups 13 and 14), and X is either carbon or nitrogen. Examples are Hf2SnC and Ti4AlN3. Such alloys have high electrical and thermal conductivity, thermal shock resistance, damage tolerance, machinability, high elastic stiffness, and low thermal expansion coefficients. They can be polished to a metallic luster because of their excellent electrical conductivities. During mechanical testing, it has been found that polycrystalline Ti3SiC2 cylinders can be repeatedly compressed at room temperature, up to stresses of 1 GPa, and fully recover upon removal of the load. Some MAX phases are also highly resistant to chemical attack (e.g. Ti3SiC2) and to high-temperature oxidation in air (Ti2AlC, Cr2AlC, and Ti3AlC2). Potential applications for MAX phase alloys include: tough, machinable, thermal shock-resistant refractories; high-temperature heating elements; coatings for electrical contacts; and neutron irradiation resistant parts for nuclear applications.
| Physical sciences | Chemistry | null |
372787 | https://en.wikipedia.org/wiki/Sika%20deer | Sika deer | The sika deer (Cervus nippon), also known as the northern spotted deer or the Japanese deer, is a species of deer native to much of East Asia and introduced to other parts of the world. Previously found from northern Vietnam in the south to the Russian Far East in the north, it was hunted to the brink of extinction in the 19th century. Protection laws were enacted in the mid-20th century, leading to a rapid recovery of their population from the 1950s to the 1980s.
Etymology
Its name comes from shika, the Japanese word for "deer". In Japan, the species is known as the nihonjika ("Japan deer"). In Chinese, it is known as the "plum blossom deer" (méihuālù) due to the spots resembling plum blossoms.
Taxonomy
The sika deer is a member of the genus Cervus, a group of deer also known as the "true" deer, within the larger deer family, Cervidae. Formerly, sika were grouped together in this genus with nine other diverse species; these animals have since been found to be genetically different, and reclassified elsewhere under different genera. Currently, deer species within the genus Cervus are the sika, the red deer (C. elaphus) of Scotland, Eurasia and Northern Africa (introduced in Argentina, Australia, New Zealand), and the wapiti, or elk (C. canadensis), of North America, Siberia and North-Central Asia.
DNA evidence indicates that the species formerly placed under Cervus are not as closely related as once thought, resulting in the creation of several new genera. The ancestor of all Cervus species probably originated in Central Asia and possibly resembled the sika deer. Members of this genus can crossbreed and produce hybrids in areas where they coexist. This includes sika and wapiti; in the Scottish Highlands, the interbreeding of native Scottish red deer with introduced sika has been deemed a serious threat to the gene pool of the Scottish deer. By contrast, sika deer introduced to the United States cannot reproduce with North American white-tailed, mule, or black-tailed deer, all of which are placed in a separate genus, Odocoileus.
Subspecies
Serious genetic pollution has occurred in many populations, especially in China, so the status of many subspecies remains unclear. The status of C. n. hortulorum is particularly uncertain and might in fact be of mixed origin, hence it is not listed here.
C. n. aplodontus, northern Honshu
C. n. grassianus, Shanxi, China
C. n. keramae, Kerama Islands of the Ryukyu Islands, Japan
C. n. kopschi, southern China
C. n. mandarinus, northern and northeastern China
C. n. mantchuricus, northeastern China, Korea, and Russian Far East
C. n. nippon (type species), southern Honshu, Shikoku, and Kyushu
C. n. pseudaxis, northern Vietnam
C. n. pulchellus, Tsushima Island
C. n. sichuanicus, western China
†C. n. sintikuensis, Taiwan
C. n. soloensis, Southern Philippines (anciently introduced to Jolo island; of unknown subspecies origin, probably extinct)
C. n. taiouanus, Taiwan
C. n. yakushimae, Yakushima, Japan
C. n. yesoensis, Hokkaido, Japan
Description
The sika deer is one of the few deer species that does not lose its spots upon reaching maturity. Spot patterns vary with region. The mainland subspecies have larger and more obvious spots, in contrast to the Taiwanese and Japanese subspecies, whose spots are nearly invisible. Many introduced populations are from Japan, so they also lack significant spots.
The color of the pelage ranges from mahogany to black, and white individuals are also known. During winter, the coat becomes darker and shaggier and the spots less prominent, and a mane forms on the back of the males' necks. They are medium-sized herbivores, though they show notable size variation across their several subspecies and considerable sexual dimorphism, with males invariably much larger than females. They can vary from tall at the shoulder and from in head-and-body length. The tail measures about long.
The largest subspecies is the Manchurian sika deer (C. n. mantchuricus), in which males commonly weigh about and females weigh , with large stags scaling up to , although there have been records of Yezo sika deer bulls weighing up to . On the other end of the size spectrum, in the Japanese sika deer (C. n. nippon), males weigh and females weigh . All sikas are compact and dainty-legged, with short, trim, wedge-shaped heads and a boisterous disposition. When alarmed, they often display a distinctive flared rump, much like the American elk.
Sika stags have stout, upright antlers with an extra buttress up from the brow tine and a very thick wall. A forward-facing intermediate tine breaks the line to the top, which is usually forked. Occasionally, sika antlers develop some palmation (flat areas). Females carry a pair of distinctive black bumps on the forehead. Antlers can range from to more than , depending on the subspecies. Stags also have distinctive manes during their mating period (rut).
These deer have well-developed metatarsal and preorbital glands. The volatile components of these glands were examined from a free-ranging female. The metatarsal gland contained 35 compounds: long-chain carboxylic acids, straight-chain aldehydes, long-chain alcohols, a ketone, and cholesterol. The components of the preorbital gland were C14 through C18 straight and branched-chain fatty acids.
Behavior
Sika deer can be active throughout the day, though in areas with heavy human disturbance, they tend to be nocturnal. Seasonal migration is known to occur in mountainous areas, such as Japan, with winter ranges being up to lower in elevation than summer ranges.
Lifestyles vary between individuals, with some occurring alone while others are found in single-sex groups. Large herds gather in autumn and winter. Males spend most of the year alone, occasionally forming herds together. Females with fawns only form herds during the birthing season. The sika deer is a highly vocal species, with over 10 individual sounds, ranging from soft whistles to loud screams.
Sika males are territorial and keep harems of females during their rut, which peaks from early September through November, but may last well into the winter. Territory size varies with habitat type and size of the buck; strong, prime bucks may hold up to . Territories are marked by a series of shallow pits or "scrapes" (holes up to 1.6 m wide and 0.3 m deep, dug with the forefeet or antlers), into which the males urinate and from which emanates a strong, musky odor. Fights between rival males over territory, fought with hooves and antlers, are sometimes fierce and prolonged, and may even be fatal.
The gestation period lasts for seven months. Hinds (does) give birth to a single fawn, weighing , which is nursed for up to ten months. The mother hides her fawn in thick undergrowth immediately after giving birth, and the fawn stays very quiet and still while it waits until the mother returns to nurse it. The fawn becomes independent 10 to 12 months after birth, and attains sexual maturity at 16 to 18 months in both sexes. The average lifespan is 15 to 18 years in captivity, although one case is recorded as living 25 years and 5 months.
The sika deer may interbreed with the red deer, the closest relative; hybrid descendants may have adaptive advantages over purebred relatives.
In Nara Prefecture, Japan, the deer are also known as "bowing deer", as they bow their heads before being fed special crackers known as shika senbei. However, deer bow their heads to signal that they are about to headbutt. Therefore, when a human "bows" to a deer, the deer may take it as a challenge, and will assume the same stance before charging and attempting to headbutt the person. Deer headbutt both for play and to assert dominance, as do goats. Sika deer are found throughout the city of Nara and its many parks and temples like Tōdai-ji, as they are considered to be the messengers of the Shinto gods.
Habitat
Sika deer are found in the temperate and subtropical forests of eastern Asia, preferring areas with dense understory, and where snowfall does not exceed . They tend to forage in patchy clearings of forests. Introduced populations are found in areas with similar habitats to their native ranges, including Western and Central Europe, Eastern United States, and New Zealand.
Population
Sika deer inhabit temperate and subtropical woodlands, often in areas suitable for farming and other human exploitation. Their range encompasses some of the most densely populated areas in the world, where forests were cleared hundreds of years ago. Their population status varies significantly in different countries. Although the species as a whole is thriving, it is endangered or extinct in many areas.
Japan has by far the largest native sika population in the world. The population was estimated at between 170,000 and 330,000 individuals in 1993 and has grown since, mainly due to recent conservation efforts and the extinction of its main predator, the Japanese wolf (Canis lupus hodophilax), over a century ago. Without its main predator, the population of sika exploded, and it is now overpopulated in many areas, posing a threat to both forests and farmlands. Efforts are now being made to control its population instead of conserving it. None of its subspecies is endangered except the Kerama deer (C. n. keramae) on the tiny Kerama Islands.
In 2015, Japanese Ministry of the Environment estimated the population at 3,080,000 in Japan, including Hokkaido.
China used to have the largest population of sika, but thousands of years of hunting and habitat loss have reduced the population to less than 1,000. Of the five subspecies in China, the North China sika deer (C. n. mandarinus) is believed to be extinct in the wild since the 1930s; the Shanxi sika deer (C. n. grassianus) has not been seen in the wild since the 1980s and is also believed to be extinct in the wild. The status of Manchurian sika deer in China is unclear, though it is also believed to be extinct, and the sightings there are actually feral populations.
The South China sika deer (C. n. kopschi) and Sichuan sika deer (C. n. sichuanicus) are the only subspecies known to remain in the wild in China. The former exists in fragmented populations of around 300 in southeast China, while the latter is found in a single population of over 400. The feral population is likely to be much higher than the wild, though most of them are descended from domesticated sikas of mixed subspecies. All of the subspecies are present in captivity, but a lack of suitable habitats and government efforts prevent their reintroduction.
The Formosan sika deer (C. n. taioanus) had been extinct in the wild for almost two decades before individuals from zoos were introduced to Kenting National Park; the population now numbers 200. Reintroduction programs are also under way in Vietnam, where the Vietnamese sika deer (C. n. pseudaxis) is extinct or nearly so.
Russia has a relatively large and stable population of 8,500–9,000 individuals of the Manchurian subspecies, but this is limited to a small area in Primorsky Krai. Small populations might exist in North Korea, but the political situation makes investigation impossible. The original stock of sika deer in South Korea is extinct; only captive stock remains, raised for medicine and sourced from other parts of the deer's range. But in June 2020, an unmanned camera recorded a doe and fawn that may represent Korea's last native sika deer, although the claim is contested.
Introduced populations
Sika deer have been introduced into a number of other countries, including Estonia, Latvia, Lithuania, Austria, Belgium, Denmark, France, Germany, Ireland, Netherlands, Norway, Switzerland, Russia, Romania, New Zealand, Australia, the Philippines (Jolo Island), Poland, Sweden, Finland, Canada, the United Kingdom, and the United States (in Delaware, Maryland, Oklahoma, Nebraska, Pennsylvania, Wisconsin, Virginia, Indiana, Michigan, Minnesota, Maine, New York, Texas, and Wyoming). In many cases, they were originally introduced as ornamental animals in parklands, but have established themselves in the wild. On Spieden Island in the San Juan Islands of Washington, they were introduced as a game animal.
In the UK and Ireland, several distinct feral populations now exist, in addition to about 1000 individuals in deer parks. Some of these are in isolated areas, for example on the island of Lundy, but others are contiguous with populations of the native red deer. Since the two species sometimes hybridize, a serious conservation concern exists. In research which rated the negative impact of introduced mammals in Europe, the sika deer was found to be among the most damaging to the environment and economy, along with the brown rat and muskrat.
In the 1900s, King Edward VII presented a pair of sika deer to John, the second Baron Montagu of Beaulieu. This pair escaped into Sowley Wood and were the basis of the sika to be found in the New Forest today. They were so prolific that culling had to be introduced in the 1930s to control their numbers.
Hunting
Across its original range and in many areas to which it has been introduced, the sika is regarded as a particularly prized and elusive sportsman's quarry. In Britain, Ireland, and mainland Europe, sika display very different survival strategies and escape tactics from the indigenous deer. They have a marked tendency to use concealment in circumstances when red deer, for example, would flee, and have been seen to squat and lie belly-flat when danger threatens.
In the British Isles, sika are widely regarded as a serious threat to new and established woodlands, and public and private forestry bodies adopt policies of rigorous year-round culling.
The main predators of sika deer include tigers, wolves, leopards, and brown bears. Lynx and golden eagles target fawns.
Velvet antler
Velvet antler (dried immature antlers) is a popular ingredient in traditional Chinese medicine, and sika in China were domesticated long ago for the antler trade, along with several other species. In Taiwan, both Formosan sika deer and Formosan sambar deer (Cervus unicolor swinhoei) have been farmed for velvet antlers. Japan is the only country in eastern Asia where sika deer were not farmed for velvet antlers.
Other deer raised for the antler trade were Thorold's deer (Cervus albirostris), central Asian red deer (Cervus canadensis affinis), and American elk (Cervus canadensis canadensis).
Cultural significance
In Shinto, the sika deer is considered a kind of messenger between mortals and the kami.
| Biology and health sciences | Deer | Animals |
372895 | https://en.wikipedia.org/wiki/Oxyanion | Oxyanion | An oxyanion, or oxoanion, is an ion with the generic formula AxOy^z- (where A represents a chemical element and O represents an oxygen atom). Oxyanions are formed by a large majority of the chemical elements. The formulae of simple oxyanions are determined by the octet rule. The corresponding oxyacid of an oxyanion is the compound HzAxOy. The structures of condensed oxyanions can be rationalized in terms of AOn polyhedral units with sharing of corners or edges between polyhedra. The oxyanions (specifically, phosphate and polyphosphate esters) adenosine monophosphate (AMP), adenosine diphosphate (ADP) and adenosine triphosphate (ATP) are important in biology.
Monomeric oxyanions
The formula of a monomeric oxyanion is dictated by the oxidation state of the element A and its position in the periodic table. Elements of the first row are limited to a maximum coordination number of 4. However, none of the first row elements has a monomeric oxyanion with that coordination number. Instead, carbonate (CO3^2-) and nitrate (NO3^-) have a trigonal planar structure with π bonding between the central atom and the oxygen atoms. This π bonding is favoured by the similarity in size of the central atom and oxygen.
The oxyanions of second-row elements in the group oxidation state are tetrahedral. Tetrahedral SiO4 units are found in olivine minerals, but the silicate anion does not have a separate existence there, as the oxygen atoms are surrounded tetrahedrally by cations in the solid state. Phosphate (PO4^3-), sulfate (SO4^2-), and perchlorate (ClO4^-) ions can be found as such in various salts. Many oxyanions of elements in a lower oxidation state obey the octet rule, and this can be used to rationalize the formulae adopted. For example, chlorine(V) has two valence electrons left over, so it can accommodate three electron pairs from bonds with oxide ions. The charge on the ion is +5 − 3 × 2 = −1, and so the formula is ClO3^-. The structure of the ion is predicted by VSEPR theory to be pyramidal, with three bonding electron pairs and one lone pair. In a similar way,
the oxyanion of chlorine(III) has the formula ClO2^- (chlorite), and is bent with two lone pairs and two bonding pairs.
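This octet-rule bookkeeping can be sketched as a short calculation. The following Python snippet is only an illustration of the reasoning above (the helper name and the four-electron-pair octet assumption are ours, not part of any standard library):

def simple_oxyanion(symbol, group_valence_electrons, oxidation_state):
    # Electrons remaining on the central atom pair up as lone pairs; the rest of
    # the four-pair octet is filled by electron pairs donated by oxide ions.
    lone_pairs = (group_valence_electrons - oxidation_state) // 2
    bonded_oxygens = 4 - lone_pairs
    charge = oxidation_state - 2 * bonded_oxygens   # each oxide ion counts as -2
    return f"{symbol}O{bonded_oxygens}^{charge}"

print(simple_oxyanion("Cl", 7, 5))   # ClO3^-1 (chlorate)
print(simple_oxyanion("Cl", 7, 3))   # ClO2^-1 (chlorite)
print(simple_oxyanion("S", 6, 6))    # SO4^-2  (sulfate)

The same bookkeeping reproduces perchlorate (ClO4^-) for chlorine(VII) and phosphate (PO4^3-) for phosphorus(V).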
In the third and subsequent rows of the periodic table, 6-coordination is possible, but isolated octahedral oxyanions are not known because they would carry too high an electrical charge. Thus molybdenum(VI) does not form MoO6^6-, but forms the tetrahedral molybdate anion, MoO4^2-. MoO6 units are found in condensed molybdates. Fully protonated oxyanions with an octahedral structure are found in such species as and . In addition, orthoperiodate can be only partially deprotonated, with
H3IO6^2- <=> H2IO6^3- + H+, with pKa = 11.60.
Naming
The naming of monomeric oxyanions follows these rules:
Here the halogen group (group 7A, 17) is referred to as group VII and the noble gases group (group 8A) is referred to as group VIII.
If the central atom is not in group VII or VIII, the suffix -ate is used when the oxidation state of the central atom equals its group number (e.g. sulfate, SO4^2-, for sulfur(VI)), and -ite when it is two less (e.g. sulfite, SO3^2-, for sulfur(IV)).
If the central atom is in group VII or VIII, the prefix per- with the suffix -ate is used when the oxidation state equals the group number (perchlorate, ClO4^-), -ate when it is two less (chlorate, ClO3^-), -ite when it is four less (chlorite, ClO2^-), and the prefix hypo- with -ite when it is six less (hypochlorite, ClO^-).
Condensation reactions
In aqueous solution, oxyanions with high charge can undergo condensation reactions, such as in the formation of the dichromate ion, Cr2O7^2-:
2 CrO4^2- + 2 H+ <=> Cr2O7^2- + H2O
The driving force for this reaction is the reduction of electrical charge density on the anion and the elimination of the hydronium (H3O+) ion. The amount of order in the solution is decreased, releasing a certain amount of entropy, which makes the Gibbs free energy more negative and favors the forward reaction. It is an example of an acid–base reaction with the monomeric oxyanion acting as a base and the condensed oxyanion acting as its conjugate acid. The reverse reaction is a hydrolysis reaction, as a water molecule, acting as a base, is split. Further condensation may occur, particularly with anions of higher charge, as occurs with adenosine phosphates.
The conversion of ATP to ADP is a hydrolysis reaction and is an important source of energy in biological systems.
The formation of most silicate minerals can be viewed as the result of a de-condensation reaction in which silica reacts with a basic oxide, an acid–base reaction in the Lux–Flood sense.
CaO (base) + SiO2 (acid) -> CaSiO3
Structures and formulae of polyoxyanions
A polyoxyanion is a polymeric oxyanion in which multiple oxyanion monomers, usually regarded as polyhedra, are joined by sharing corners or edges. When two corners of a polyhedron are shared, the resulting structure may be a chain or a ring. Short chains occur, for example, in polyphosphates. Inosilicates, such as pyroxenes, have a long chain of SiO4 tetrahedra, each sharing two corners. The same structure occurs in so-called meta-vanadates, such as ammonium metavanadate, NH4VO3.
The formula of the chain oxyanion, SiO3^2-, is obtained as follows: each nominal silicon ion (Si4+) is attached to two nominal oxide ions (O2-) and has a half share in two others. Thus the stoichiometry per silicon is 2 + 2 × 1/2 = 3 oxygen atoms, and the charge per SiO3 unit is +4 − 3 × 2 = −2.
A ring can be viewed as a chain in which the two ends have been joined. Cyclic triphosphate (trimetaphosphate), P3O9^3-, is an example.
When three corners are shared, the structure extends into two dimensions. In amphiboles (of which asbestos is an example), two chains are linked together by sharing of a third corner on alternate places along the chain. This results in an ideal formula Si4O11^6- and a linear chain structure, which explains the fibrous nature of these minerals. Sharing of all three corners can result in a sheet structure, as in mica, Si2O5^2-, in which each silicon has one oxygen to itself and a half-share in three others. Crystalline mica can be cleaved into very thin sheets.
The sharing of all four corners of the tetrahedra results in a 3-dimensional structure, such as in quartz. Aluminosilicates are minerals in which some silicon is replaced by aluminium. However, the oxidation state of aluminium is one less than that of silicon, so the replacement must be accompanied by the addition of another cation. The number of possible combinations of such a structure is very large, which is, in part, the reason why there are so many aluminosilicates.
Octahedral units are common in oxyanions of the larger transition metals. Some compounds, such as salts of the chain-polymeric ion, even contain both tetrahedral and octahedral units. Edge-sharing is common in ions containing octahedral building blocks and the octahedra are usually distorted to reduce the strain at the bridging oxygen atoms. This results in 3-dimensional structures called polyoxometalates. Typical examples occur in the Keggin structure of the phosphomolybdate ion. Edge sharing is an effective means of reducing electrical charge density, as can be seen with the hypothetical condensation reaction involving two octahedra:
2 MO6^n- + 4 H+ -> M2O10^(2n-4)- + 2 H2O
Here, the average charge on each M atom is reduced by 2. The efficacy of edge-sharing is demonstrated by the following reaction, which occurs when an alkaline aqueous solution of molybdate is acidified.
7 MoO4^2- + 8 H+ <=> Mo7O24^6- + 4 H2O
The tetrahedral molybdate ion is converted into a cluster of 7 edge-linked octahedra, giving an average charge on each molybdenum of -6/7. The heptamolybdate cluster is so stable that clusters with between 2 and 6 molybdate units have not been detected, even though they must be formed as intermediates.
Heuristic for acidity
The pKa of the related acids can be guessed from the number of double bonds to oxygen. Thus perchloric acid is a very strong acid while hypochlorous acid is very weak. A simple rule usually works to within about 1 pH unit.
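One common form of such a rule is Pauling's rule for oxyacids, under which each non-hydroxyl ("oxo") oxygen strengthens the acid by roughly five pK units. The constants in the sketch below are an illustrative assumption, not values taken from this article:

def estimated_pka1(oxo_oxygens):
    # Pauling-type estimate for an oxyacid (HO)qEOp: pKa1 is roughly 8 - 5*p
    return 8 - 5 * oxo_oxygens

print(estimated_pka1(0))   # hypochlorous acid, HOCl: ~8 (measured ~7.5)
print(estimated_pka1(1))   # chlorous acid, HOClO: ~3 (measured ~2)
print(estimated_pka1(3))   # perchloric acid, HOClO3: ~-7 (a very strong acid)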
Acid–base properties
Most oxyanions are weak bases and can be protonated to give acids or acid salts. For example, the phosphate ion can be successively protonated to form phosphoric acid.
PO4^3- + H+ <=> HPO4^2-
HPO4^2- + H+ <=> H2PO4-
H2PO4- + H+ <=> H3PO4
The extent of protonation in aqueous solution will depend on the acid dissociation constants and pH. For example, AMP (adenosine monophosphate) has a pKa value of 6.21, so at pH 7 it will be about 10% protonated. Charge neutralization is an important factor in these protonation reactions. By contrast, the univalent perchlorate and permanganate ions are very difficult to protonate, and so the corresponding acids are strong acids.
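The AMP figure follows from the usual Henderson–Hasselbalch relation for a single protonation step; a minimal sketch, assuming nothing beyond the stated pKa:

def protonated_fraction(pka, ph):
    # [HA] / ([HA] + [A-]) = 1 / (1 + 10**(pH - pKa))
    return 1.0 / (1.0 + 10 ** (ph - pka))

print(round(protonated_fraction(6.21, 7.0), 2))   # ~0.14, i.e. of the order of 10%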
Although acids such as phosphoric acid are written as H3PO4, the protons are attached to oxygen atoms forming hydroxyl groups, so the formula can also be written as OP(OH)3 to better reflect the structure. Sulfuric acid may be written as O2S(OH)2; this is the molecule observed in the gas phase.
The phosphite ion, PO3^3-, is a strong base, and so always carries at least one proton. In this case the proton is attached directly to the phosphorus atom, with the structure HPO3^2-. In forming this ion, the phosphite ion is behaving as a Lewis base and donating a pair of electrons to the Lewis acid, H+.
As mentioned above, a condensation reaction is also an acid–base reaction. In many systems, both protonation and condensation reactions can occur. The case of the chromate ion provides a relatively simple example. In the predominance diagram for chromate, shown at the right, pCr stands for the negative logarithm of the chromium concentration and pH stands for the negative logarithm of ion concentration. There are two independent equilibria. Equilibrium constants are defined as follows.
CrO4^2- + H+ <=> HCrO4-, with K1 = [HCrO4-] / ([CrO4^2-][H+])
2 HCrO4- <=> Cr2O7^2- + H2O, with K2 = [Cr2O7^2-] / [HCrO4-]^2
The predominance diagram is interpreted as follows.
The chromate ion, CrO4^2-, is the predominant species at high pH. As pH rises, the chromate ion becomes ever more predominant, until it is the only species in solutions with pH > 6.75.
At pH < pK1, the hydrogen chromate ion, HCrO4-, is predominant in dilute solution.
The dichromate ion, Cr2O7^2-, is predominant in more concentrated solutions, except at high pH.
The species H2CrO4 and HCr2O7- are not shown, as they are formed only at very low pH.
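A predominance diagram of this kind can be computed point by point from the two equilibria. In the sketch below, the numerical values of log K1 and log K2 are illustrative assumptions (they are not given in this article), and the mass balance is solved as a quadratic in the hydrogen chromate concentration:

import math

def predominant_chromium_species(total_cr, ph, log_k1=5.9, log_k2=2.2):
    k1, k2 = 10 ** log_k1, 10 ** log_k2
    h = 10 ** (-ph)
    # total Cr = [CrO4^2-] + [HCrO4-] + 2[Cr2O7^2-], with
    # [HCrO4-] = K1*h*[CrO4^2-] and [Cr2O7^2-] = K2*[HCrO4-]^2
    a = 2 * k2
    b = 1 + 1 / (k1 * h)
    x = (-b + math.sqrt(b * b + 4 * a * total_cr)) / (2 * a)   # x = [HCrO4-]
    species = {"CrO4^2-": x / (k1 * h), "HCrO4-": x, "Cr2O7^2-": k2 * x * x}
    return max(species, key=species.get)

print(predominant_chromium_species(1e-4, 8.0))   # dilute, high pH: chromate
print(predominant_chromium_species(1e-1, 3.0))   # concentrated, low pH: dichromate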
Predominance diagrams can become very complicated when many polymeric species can be formed, such as in vanadates, molybdates, and tungstates. Another complication is that many of the higher polymers are formed extremely slowly, such that equilibrium may not be attained even in months, leading to possible errors in the equilibrium constants and the predominance diagram.
| Physical sciences | Oxyanionic salts | Chemistry |
373065 | https://en.wikipedia.org/wiki/Polynomial%20ring | Polynomial ring | In mathematics, especially in the field of algebra, a polynomial ring or polynomial algebra is a ring formed from the set of polynomials in one or more indeterminates (traditionally also called variables) with coefficients in another ring, often a field.
Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings relies on the high number of properties that they have in common with the ring of the integers.
Polynomial rings occur and are often fundamental in many parts of mathematics such as number theory, commutative algebra, and algebraic geometry. In ring theory, many classes of rings, such as unique factorization domains, regular rings, group rings, rings of formal power series, Ore polynomials, graded rings, have been introduced for generalizing some properties of polynomial rings.
A closely related notion is that of the ring of polynomial functions on a vector space, and, more generally, ring of regular functions on an algebraic variety.
Definition (univariate case)
Let be a field or (more generally) a commutative ring.
The polynomial ring in over , which is denoted , can be defined in several equivalent ways. One of them is to define as the set of expressions, called polynomials in , of the form
where , the coefficients of , are elements of , if , and are symbols, which are considered as "powers" of , and follow the usual rules of exponentiation: , , and for any nonnegative integers and . The symbol is called an indeterminate or variable. (The term of "variable" comes from the terminology of polynomial functions. However, here, has no value (other than itself), and cannot vary, being a constant in the polynomial ring.)
Two polynomials are equal when the corresponding coefficients of each are equal.
One can think of the ring as arising from by adding one new element that is external to , commutes with all elements of , and has no other specific properties. This can be used for an equivalent definition of polynomial rings.
The polynomial ring in over is equipped with an addition, a multiplication and a scalar multiplication that make it a commutative algebra. These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if
and
then
and
where ,
and
In these formulas, the polynomials and are extended by adding "dummy terms" with zero coefficients, so that all and that appear in the formulas are defined. Specifically, if , then for .
The scalar multiplication is the special case of the multiplication where is reduced to its constant term (the term that is independent of ); that is
It is straightforward to verify that these three operations satisfy the axioms of a commutative algebra over . Therefore, polynomial rings are also called polynomial algebras.
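These addition and multiplication rules can be made concrete by representing a univariate polynomial as its list of coefficients [p0, p1, ...], the entry at index i being the coefficient of X^i. The following is a minimal sketch, with the coefficients taken from any ring whose addition and multiplication Python's + and * can model (integers in the example):

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad with the "dummy terms" mentioned above
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b    # c_k is the sum of a_i * b_j over i + j = k
    return r

print(poly_mul([1, 1], [1, -1]))   # (1 + X)(1 - X) = 1 - X^2, i.e. [1, 0, -1]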
Another equivalent definition is often preferred, although less intuitive, because it is easier to make it completely rigorous, which consists in defining a polynomial as an infinite sequence of elements of , having the property that only a finite number of the elements are nonzero, or equivalently, a sequence for which there is some so that for . In this case, and are considered as alternate notations for
the sequences and , respectively. A straightforward use of the operation rules shows that the expression
is then an alternate notation for the sequence
.
Terminology
Let
be a nonzero polynomial with
The constant term of is It is zero in the case of the zero polynomial.
The degree of , written is the largest such that the coefficient of is not zero.
The leading coefficient of is
In the special case of the zero polynomial, all of whose coefficients are zero, the leading coefficient is undefined, and the degree has been variously left undefined, defined to be , or defined to be a .
A constant polynomial is either the zero polynomial, or a polynomial of degree zero.
A nonzero polynomial is monic if its leading coefficient is
Given two polynomials and , if the degree of the zero polynomial is defined to be one has
and, over a field, or more generally an integral domain,
It follows immediately that, if is an integral domain, then so is .
It follows also that, if is an integral domain, a polynomial is a unit (that is, it has a multiplicative inverse) if and only if it is constant and is a unit in .
Two polynomials are associated if either one is the product of the other by a unit.
Over a field, every nonzero polynomial is associated to a unique monic polynomial.
Given two polynomials, and , one says that divides , is a divisor of , or is a multiple of , if there is a polynomial such that .
A polynomial is irreducible if it is not the product of two non-constant polynomials, or equivalently, if its divisors are either constant polynomials or have the same degree.
Polynomial evaluation
Let be a field or, more generally, a commutative ring, and a ring containing . For any polynomial in and any element in , the substitution of with in defines an element of , which is denoted . This element is obtained by carrying on in after the substitution the operations indicated by the expression of the polynomial. This computation is called the evaluation of at . For example, if we have
we have
(in the first example , and in the second one ). Substituting for itself results in
explaining why the sentences "Let be a polynomial" and "Let be a polynomial" are equivalent.
The polynomial function defined by a polynomial is the function from into that is defined by If is an infinite field, two different polynomials define different polynomial functions, but this property is false for finite fields. For example, if is a field with elements, then the polynomials and both define the zero function.
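The finite-field statement can be checked directly. As a minimal sketch (the prime p = 5 is an arbitrary choice), the nonzero polynomial X^p - X evaluates to zero at every element of the field with p elements, by Fermat's little theorem, so it defines the same polynomial function as the zero polynomial:

p = 5
print([(x ** p - x) % p for x in range(p)])   # [0, 0, 0, 0, 0]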
For every in , the evaluation at , that is, the map defines an algebra homomorphism from to , which is the unique homomorphism from to that fixes , and maps to . In other words, has the following universal property:
For every ring containing , and every element of , there is a unique algebra homomorphism from to that fixes , and maps to .
As for all universal properties, this defines the pair up to a unique isomorphism, and can therefore be taken as a definition of .
The image of the map , that is, the subset of obtained by substituting for in elements of , is denoted . For example, , and the simplification rules for the powers of a square root imply
Univariate polynomials over a field
If is a field, the polynomial ring has many properties that are similar to those of the ring of integers Most of these similarities result from the similarity between the long division of integers and the long division of polynomials.
Most of the properties of that are listed in this section do not remain true if is not a field, or if one considers polynomials in several indeterminates.
Like for integers, the Euclidean division of polynomials has a property of uniqueness. That is, given two polynomials and in , there is a unique pair of polynomials such that , and either or . This makes a Euclidean domain. However, most other Euclidean domains (except integers) do not have any property of uniqueness for the division nor an easy algorithm (such as long division) for computing the Euclidean division.
The Euclidean division is the basis of the Euclidean algorithm for polynomials that computes a polynomial greatest common divisor of two polynomials. Here, "greatest" means "having a maximal degree" or, equivalently, being maximal for the preorder defined by the degree. Given a greatest common divisor of two polynomials, the other greatest common divisors are obtained by multiplication by a nonzero constant (that is, all greatest common divisors of and are associated). In particular, two polynomials that are not both zero have a unique greatest common divisor that is monic (leading coefficient equal to ).
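Both the Euclidean division and the resulting greatest-common-divisor computation are easy to sketch for polynomials over the rationals, again stored as coefficient lists. These helpers are illustrative only, not a library API:

from fractions import Fraction

def poly_divmod(a, b):
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        coeff = a[-1] / b[-1]              # divide the leading coefficients
        q[shift] = coeff
        for i, c in enumerate(b):          # subtract coeff * X^shift * b
            a[i + shift] -= coeff * c
        while a and a[-1] == 0:            # drop leading zero coefficients
            a.pop()
    return q, a                            # quotient and remainder, deg r < deg b

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_divmod(a, b)[1]
    return [c / a[-1] for c in a]          # normalize to the monic greatest common divisor

print(poly_gcd([-1, 0, 1], [1, 2, 1]))     # gcd(X^2 - 1, X^2 + 2X + 1) is the monic X + 1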
The extended Euclidean algorithm allows computing (and proving) Bézout's identity. In the case of , it may be stated as follows. Given two polynomials and of respective degrees and , if their monic greatest common divisor has the degree , then there is a unique pair of polynomials such that
and
(For making this true in the limiting case where or , one has to define as negative the degree of the zero polynomial. Moreover, the equality can occur only if and are associated.) The uniqueness property is rather specific to . In the case of the integers the same property is true, if degrees are replaced by absolute values, but, for having uniqueness, one must require .
Euclid's lemma applies to . That is, if divides , and is coprime with , then divides . Here, coprime means that the monic greatest common divisor is . Proof: By hypothesis and Bézout's identity, there are , , and such that and . So
The unique factorization property results from Euclid's lemma. In the case of integers, this is the fundamental theorem of arithmetic. In the case of , it may be stated as: every non-constant polynomial can be expressed in a unique way as the product of a constant, and one or several irreducible monic polynomials; this decomposition is unique up to the order of the factors. In other terms is a unique factorization domain. If is the field of complex numbers, the fundamental theorem of algebra asserts that a univariate polynomial is irreducible if and only if its degree is one. In this case the unique factorization property can be restated as: every non-constant univariate polynomial over the complex numbers can be expressed in a unique way as the product of a constant, and one or several polynomials of the form ; this decomposition is unique up to the order of the factors. For each factor, is a root of the polynomial, and the number of occurrences of a factor is the multiplicity of the corresponding root.
Derivation
The (formal) derivative of the polynomial
is the polynomial
In the case of polynomials with real or complex coefficients, this is the standard derivative. The above formula defines the derivative of a polynomial even if the coefficients belong to a ring on which no notion of limit is defined. The derivative makes the polynomial ring a differential algebra.
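In the coefficient-list representation used in the earlier sketches, the formal derivative is a one-line operation, with no notion of limit involved:

def poly_derivative(p):
    # the coefficient of X^(i-1) in the derivative is i * p_i
    return [i * c for i, c in enumerate(p)][1:]

print(poly_derivative([5, 0, 3, 2]))   # d/dX (5 + 3X^2 + 2X^3) = 6X + 6X^2, i.e. [0, 6, 6]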
The existence of the derivative is one of the main properties of a polynomial ring that is not shared with integers, and makes some computations easier on a polynomial ring than on integers.
Square-free factorization
Lagrange interpolation
Polynomial decomposition
Factorization
Except for factorization, all previous properties of are effective, since their proofs, as sketched above, are associated with algorithms for testing the property and computing the polynomials whose existence are asserted. Moreover these algorithms are efficient, as their computational complexity is a quadratic function of the input size.
The situation is completely different for factorization: the proof of the unique factorization does not give any hint for a method for factorizing. Already for the integers, there is no known algorithm running on a classical (non-quantum) computer for factorizing them in polynomial time. This is the basis of the RSA cryptosystem, widely used for secure Internet communications.
In the case of , the factors, and the methods for computing them, depend strongly on . Over the complex numbers, the irreducible factors (those that cannot be factorized further) are all of degree one, while, over the real numbers, there are irreducible polynomials of degree 2, and, over the rational numbers, there are irreducible polynomials of any degree. For example, the polynomial is irreducible over the rational numbers, is factored as over the real numbers, and as over the complex numbers.
The existence of a factorization algorithm depends also on the ground field. In the case of the real or complex numbers, Abel–Ruffini theorem shows that the roots of some polynomials, and thus the irreducible factors, cannot be computed exactly. Therefore, a factorization algorithm can compute only approximations of the factors. Various algorithms have been designed for computing such approximations, see Root finding of polynomials.
There is an example of a field such that there exist exact algorithms for the arithmetic operations of , but there cannot exist any algorithm for deciding whether a polynomial of the form is irreducible or is a product of polynomials of lower degree.
On the other hand, over the rational numbers and over finite fields, the situation is better than for integer factorization, as there are factorization algorithms that have a polynomial complexity. They are implemented in most general purpose computer algebra systems.
Minimal polynomial
If is an element of an associative -algebra , the polynomial evaluation at is the unique algebra homomorphism from into that maps to and does not affect the elements of itself (it is the identity map on ). It consists of substituting with in every polynomial. That is,
The image of this evaluation homomorphism is the subalgebra generated by , which is necessarily commutative.
If is injective, the subalgebra generated by is isomorphic to . In this case, this subalgebra is often denoted by . The notation ambiguity is generally harmless, because of the isomorphism.
If the evaluation homomorphism is not injective, this means that its kernel is a nonzero ideal, consisting of all polynomials that become zero when is substituted with . This ideal consists of all multiples of some monic polynomial, that is called the minimal polynomial of . The term minimal is motivated by the fact that its degree is minimal among the degrees of the elements of the ideal.
There are two main cases where minimal polynomials are considered.
In field theory and number theory, an element of an extension field of is algebraic over if it is a root of some polynomial with coefficients in . The minimal polynomial over of is thus the monic polynomial of minimal degree that has as a root. Because is a field, this minimal polynomial is necessarily irreducible over . For example, the minimal polynomial (over the reals as well as over the rationals) of the complex number is . The cyclotomic polynomials are the minimal polynomials of the roots of unity.
In linear algebra, the square matrices over form an associative -algebra of finite dimension (as a vector space). Therefore the evaluation homomorphism cannot be injective, and every matrix has a minimal polynomial (not necessarily irreducible). By Cayley–Hamilton theorem, the evaluation homomorphism maps to zero the characteristic polynomial of a matrix. It follows that the minimal polynomial divides the characteristic polynomial, and therefore that the degree of the minimal polynomial is at most .
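The Cayley–Hamilton statement is easy to check numerically. In the sketch below the matrix is an arbitrary assumption; np.poly returns the coefficients of its characteristic polynomial, which is then evaluated at the matrix itself by Horner's scheme (with the constant term multiplied by the identity):

import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
coeffs = np.poly(A)                        # characteristic polynomial X^2 - 5X + 6 -> [1, -5, 6]
result = np.zeros_like(A)
for c in coeffs:
    result = result @ A + c * np.eye(2)    # Horner evaluation at the matrix A
print(np.allclose(result, 0))              # True: the characteristic polynomial annihilates A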
Quotient ring
In the case of , the quotient ring by an ideal can be built, as in the general case, as a set of equivalence classes. However, as each equivalence class contains exactly one polynomial of minimal degree, another construction is often more convenient.
Given a polynomial of degree , the quotient ring of by the ideal generated by can be identified with the vector space of the polynomials of degrees less than , with the "multiplication modulo " as a multiplication, the multiplication modulo consisting of the remainder under the division by of the (usual) product of polynomials. This quotient ring is variously denoted as or simply
The ring is a field if and only if is an irreducible polynomial. In fact, if is irreducible, every nonzero polynomial of lower degree is coprime with , and Bézout's identity allows computing and such that ; so, is the multiplicative inverse of modulo . Conversely, if is reducible, then there exist polynomials of degrees lower than such that ; so are nonzero zero divisors modulo , and cannot be invertible.
For example, the standard definition of the field of the complex numbers can be summarized by saying that it is the quotient ring
and that the image of in is denoted by . In fact, by the above description, this quotient consists of all polynomials of degree one in , which have the form , with and in The remainder of the Euclidean division that is needed for multiplying two elements of the quotient ring is obtained by replacing by in their product as polynomials (this is exactly the usual definition of the product of complex numbers).
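The "multiplication modulo X^2 + 1" can be sketched directly: an element of the quotient is stored as a pair (a, b) standing for the residue a + bX, and reduction simply replaces X^2 by -1, which is exactly the usual rule for multiplying complex numbers:

def mul_mod_x2_plus_1(p, q):
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = -1 in the quotient
    return (a * c - b * d, a * d + b * c)

print(mul_mod_x2_plus_1((0, 1), (0, 1)))   # (-1, 0): the class of X squares to -1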
Let be an algebraic element in a -algebra . By algebraic, one means that has a minimal polynomial . The first ring isomorphism theorem asserts that the substitution homomorphism induces an isomorphism of onto the image of the substitution homomorphism. In particular, if is a simple extension of generated by , this allows identifying and This identification is widely used in algebraic number theory.
Modules
The structure theorem for finitely generated modules over a principal ideal domain applies to
K[X], when K is a field. This means that every finitely generated module over K[X] may be decomposed into a direct sum of a free module and finitely many modules of the form , where P is an irreducible polynomial over K and k a positive integer.
Definition (multivariate case)
Given symbols called indeterminates, a monomial (also called power product)
is a formal product of these indeterminates, possibly raised to a nonnegative power. As usual, exponents equal to one and factors with a zero exponent can be omitted. In particular,
The tuple of exponents is called the multidegree or exponent vector of the monomial. For a less cumbersome notation, the abbreviation
is often used. The degree of a monomial , frequently denoted or , is the sum of its exponents:
A polynomial in these indeterminates, with coefficients in a field , or more generally a ring, is a finite linear combination of monomials
with coefficients in . The degree of a nonzero polynomial is the maximum of the degrees of its monomials with nonzero coefficients.
The set of polynomials in denoted is thus a vector space (or a free module, if is a ring) that has the monomials as a basis.
is naturally equipped (see below) with a multiplication that makes a ring, and an associative algebra over , called the polynomial ring in indeterminates over (the definite article the reflects that it is uniquely defined up to the name and the order of the indeterminates). If the ring is commutative, is also a commutative ring.
Operations in
Addition and scalar multiplication of polynomials are those of a vector space or free module equipped by a specific basis (here the basis of the monomials). Explicitly, let
where and are finite sets of exponent vectors.
The scalar multiplication of and a scalar is
The addition of and is
where if and if Moreover, if one has for some the corresponding zero term is removed from the result.
The multiplication is
where is the set of the sums of one exponent vector in and one other in (usual sum of vectors). In particular, the product of two monomials is a monomial whose exponent vector is the sum of the exponent vectors of the factors.
The verification of the axioms of an associative algebra is straightforward.
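The multivariate multiplication rule can be sketched with a polynomial stored as a dictionary from exponent vectors (tuples) to coefficients; the exponent vector of a product of monomials is then just the sum of the exponent vectors of the factors:

from collections import defaultdict

def mpoly_mul(p, q):
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(i + j for i, j in zip(e1, e2))   # add the exponent vectors
            r[e] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}      # drop zero terms

# (X + Y)(X - Y) = X^2 - Y^2, with exponent vectors written as (power of X, power of Y)
p = {(1, 0): 1, (0, 1): 1}
q = {(1, 0): 1, (0, 1): -1}
print(mpoly_mul(p, q))   # {(2, 0): 1, (0, 2): -1}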
Polynomial expression
A polynomial expression is an expression built with scalars (elements of ), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers.
As all these operations are defined in a polynomial expression represents a polynomial, that is an element of The definition of a polynomial as a linear combination of monomials is a particular polynomial expression, which is often called the canonical form, normal form, or expanded form of the polynomial.
Given a polynomial expression, one can compute the expanded form of the represented polynomial by expanding with the distributive law all the products that have a sum among their factors, and then using commutativity (except for the product of two scalars), and associativity for transforming the terms of the resulting sum into products of a scalar and a monomial; then one gets the canonical form by regrouping the like terms.
The distinction between a polynomial expression and the polynomial that it represents is relatively recent, and mainly motivated by the rise of computer algebra, where, for example, the test whether two polynomial expressions represent the same polynomial may be a nontrivial computation.
Categorical characterization
If is a commutative ring, the polynomial ring has the following universal property: for every commutative -algebra , and every -tuple of elements of , there is a unique algebra homomorphism from to that maps each to the corresponding This homomorphism is the evaluation homomorphism that consists in substituting with in every polynomial.
As it is the case for every universal property, this characterizes the pair up to a unique isomorphism.
This may also be interpreted in terms of adjoint functors. More precisely, let and be respectively the categories of sets and commutative -algebras (here, and in the following, the morphisms are trivially defined). There is a forgetful functor that maps algebras to their underlying sets. On the other hand, the map defines a functor in the other direction. (If is infinite, is the set of all polynomials in a finite number of elements of .)
The universal property of the polynomial ring means that and are adjoint functors. That is, there is a bijection
This may be expressed also by saying that polynomial rings are free commutative algebras, since they are free objects in the category of commutative algebras. Similarly, a polynomial ring with integer coefficients is the free commutative ring over its set of variables, since commutative rings and commutative algebras over the integers are the same thing.
Graded structure
Univariate over a ring vs. multivariate
A polynomial in can be considered as a univariate polynomial in the indeterminate over the ring by regrouping the terms that contain the same power of that is, by using the identity
which results from the distributivity and associativity of ring operations.
This means that one has an algebra isomorphism
that maps each indeterminate to itself. (This isomorphism is often written as an equality, which is justified by the fact that polynomial rings are defined up to a unique isomorphism.)
In other words, a multivariate polynomial ring can be considered as a univariate polynomial over a smaller polynomial ring. This is commonly used for proving properties of multivariate polynomial rings, by induction on the number of indeterminates.
The main such properties are listed below.
Properties that pass from to
In this section, is a commutative ring, is a field, denotes a single indeterminate, and, as usual, is the ring of integers. Here is the list of the main ring properties that remain true when passing from to .
If is an integral domain then the same holds for (since the leading coefficient of a product of polynomials is, if not zero, the product of the leading coefficients of the factors).
In particular, and are integral domains.
If is a unique factorization domain then the same holds for . This results from Gauss's lemma and the unique factorization property of where is the field of fractions of .
In particular, and are unique factorization domains.
If is a Noetherian ring, then the same holds for .
In particular, and are Noetherian rings; this is Hilbert's basis theorem.
If is a Noetherian ring, then where "" denotes the Krull dimension.
In particular, and
If is a regular ring, then the same holds for ; in this case, one has where "" denotes the global dimension.
In particular, and are regular rings, and The latter equality is Hilbert's syzygy theorem.
Several indeterminates over a field
Polynomial rings in several variables over a field are fundamental in invariant theory and algebraic geometry. Some of their properties, such as those described above can be reduced to the case of a single indeterminate, but this is not always the case. In particular, because of the geometric applications, many interesting properties must be invariant under affine or projective transformations of the indeterminates. This often implies that one cannot select one of the indeterminates for a recurrence on the indeterminates.
Bézout's theorem, Hilbert's Nullstellensatz and Jacobian conjecture are among the most famous properties that are specific to multivariate polynomials over a field.
Hilbert's Nullstellensatz
The Nullstellensatz (German for "zero-locus theorem") is a theorem, first proved by David Hilbert, which extends to the multivariate case some aspects of the fundamental theorem of algebra. It is foundational for algebraic geometry, as establishing a strong link between the algebraic properties of and the geometric properties of algebraic varieties, that are (roughly speaking) set of points defined by implicit polynomial equations.
The Nullstellensatz, has three main versions, each being a corollary of any other. Two of these versions are given below. For the third version, the reader is referred to the main article on the Nullstellensatz.
The first version generalizes the fact that a nonzero univariate polynomial has a complex zero if and only if it is not a constant. The statement is: a set of polynomials in has a common zero in an algebraically closed field containing , if and only if does not belong to the ideal generated by , that is, if is not a linear combination of elements of with polynomial coefficients.
The second version generalizes the fact that the irreducible univariate polynomials over the complex numbers are associate to a polynomial of the form The statement is: If is algebraically closed, then the maximal ideals of have the form
Bézout's theorem
Bézout's theorem may be viewed as a multivariate generalization of the version of the fundamental theorem of algebra that asserts that a univariate polynomial of degree has complex roots, if they are counted with their multiplicities.
In the case of bivariate polynomials, it states that two polynomials of degrees and in two variables, which have no common factors of positive degree, have exactly common zeros in an algebraically closed field containing the coefficients, if the zeros are counted with their multiplicity and include the zeros at infinity.
For stating the general case, and not considering "zero at infinity" as special zeros, it is convenient to work with homogeneous polynomials, and consider zeros in a projective space. In this context, a projective zero of a homogeneous polynomial is, up to a scaling, a -tuple of elements of that is different from , and such that . Here, "up to a scaling" means that and are considered as the same zero for any nonzero In other words, a zero is a set of homogeneous coordinates of a point in a projective space of dimension .
Then, Bézout's theorem states: Given homogeneous polynomials of degrees in indeterminates, which have only a finite number of common projective zeros in an algebraically closed extension of , the sum of the multiplicities of these zeros is the product
Jacobian conjecture
Generalizations
Polynomial rings can be generalized in a great many ways, including polynomial rings with generalized exponents, power series rings, noncommutative polynomial rings, skew polynomial rings, and polynomial rigs.
Infinitely many variables
One slight generalization of polynomial rings is to allow for infinitely many indeterminates. Each monomial still involves only a finite number of indeterminates (so that its degree remains finite), and each polynomial is still a (finite) linear combination of monomials. Thus, any individual polynomial involves only finitely many indeterminates, and any finite computation involving polynomials remains inside some subring of polynomials in finitely many indeterminates. This generalization has the same property as usual polynomial rings of being the free commutative algebra; the only difference is that it is a free object over an infinite set.
One can also consider a strictly larger ring, by defining as a generalized polynomial an infinite (or finite) formal sum of monomials with a bounded degree. This ring is larger than the usual polynomial ring, as it includes infinite sums of variables. However, it is smaller than the ring of power series in infinitely many variables. Such a ring is used for constructing the ring of symmetric functions over an infinite set.
Generalized exponents
A simple generalization only changes the set from which the exponents on the variable are drawn. The formulas for addition and multiplication make sense as long as one can add exponents: . A set for which addition makes sense (is closed and associative) is called a monoid. The set of functions from a monoid N to a ring R which are nonzero at only finitely many places can be given the structure of a ring known as R[N], the monoid ring of N with coefficients in R. The addition is defined component-wise, so that if , then for every n in N. The multiplication is defined as the Cauchy product, so that if , then for each n in N, cn is the sum of all aibj where i, j range over all pairs of elements of N which sum to n.
When N is commutative, it is convenient to denote the function a in R[N] as the formal sum:
and then the formulas for addition and multiplication are the familiar:
and
where the latter sum is taken over all i, j in N that sum to n.
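This Cauchy-product rule works verbatim for any monoid; a minimal sketch in which the monoid operation is passed in as a function (the example monoid and element names are our own choices for illustration):

from collections import defaultdict
from fractions import Fraction

def monoid_ring_mul(a, b, op):
    c = defaultdict(int)
    for i, ai in a.items():
        for j, bj in b.items():
            c[op(i, j)] += ai * bj        # c_n is the sum of a_i * b_j over op(i, j) = n
    return {n: v for n, v in c.items() if v != 0}

# N = non-negative rationals under addition: (1 + X^(1/2))^2 = 1 + 2X^(1/2) + X
a = {Fraction(0): 1, Fraction(1, 2): 1}
print(monoid_ring_mul(a, a, lambda i, j: i + j))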
Some authors go so far as to take this monoid definition as the starting point, and regular single variable polynomials are the special case where N is the monoid of non-negative integers. Polynomials in several variables simply take N to be the direct product of several copies of the monoid of non-negative integers.
Several interesting examples of rings and groups are formed by taking N to be the additive monoid of non-negative rational numbers, . | Mathematics | Abstract algebra | null |
373352 | https://en.wikipedia.org/wiki/Galactic%20Center | Galactic Center | The Galactic Center is the barycenter of the Milky Way and a corresponding point on the rotational axis of the galaxy. Its central massive object is a supermassive black hole of about 4 million solar masses, which is called Sagittarius A*, a compact radio source which is almost exactly at the galactic rotational center. The Galactic Center is approximately 8 kiloparsecs (26,000 light-years) away from Earth in the direction of the constellations Sagittarius, Ophiuchus, and Scorpius, where the Milky Way appears brightest, visually close to the Butterfly Cluster (M6) or the star Shaula, south of the Pipe Nebula.
There are around 10 million stars within one parsec of the Galactic Center, dominated by red giants, with a significant population of massive supergiants and Wolf–Rayet stars from star formation in the region around 1 million years ago. The core stars are a small part within the much wider galactic bulge.
Discovery
Because of interstellar dust along the line of sight, the Galactic Center cannot be studied at visible, ultraviolet, or soft (low-energy) X-ray wavelengths. The available information about the Galactic Center comes from observations at gamma ray, hard (high-energy) X-ray, infrared, submillimetre, and radio wavelengths.
Immanuel Kant stated in Universal Natural History and Theory of the Heavens (1755) that a large star was at the center of the Milky Way Galaxy, and that Sirius might be the star. Harlow Shapley stated in 1918 that the halo of globular clusters surrounding the Milky Way seemed to be centered on the star swarms in the constellation of Sagittarius, but the dark molecular clouds in the area blocked the view for optical astronomy.
In the early 1940s Walter Baade at Mount Wilson Observatory took advantage of wartime blackout conditions in nearby Los Angeles, to conduct a search for the center with the Hooker Telescope. He found that near the star Alnasl (Gamma Sagittarii), there is a one-degree-wide void in the interstellar dust lanes, which provides a relatively clear view of the swarms of stars around the nucleus of the Milky Way Galaxy. This gap has been known as Baade's Window ever since.
At Dover Heights in Sydney, Australia, a team of radio astronomers from the Division of Radiophysics at the CSIRO, led by Joseph Lade Pawsey, used "sea interferometry" to discover some of the first interstellar and intergalactic radio sources, including Taurus A, Virgo A and Centaurus A. By 1954 they had built a fixed dish antenna and used it to make a detailed study of an extended, extremely powerful belt of radio emission that was detected in Sagittarius. They named an intense point-source near the center of this belt Sagittarius A, and realised that it was located at the very center of the Galaxy, despite being some 32 degrees south-west of the conjectured galactic center of the time.
In 1958 the International Astronomical Union (IAU) decided to adopt the position of Sagittarius A as the true zero coordinate point for the system of galactic latitude and longitude; this position also corresponds to a fixed right ascension and declination in the equatorial coordinate system (J2000 epoch).
In July 2022, astronomers reported the discovery of massive amounts of prebiotic molecules, including some associated with RNA, in the Galactic Center of the Milky Way Galaxy.
Distance to the Galactic Center
The exact distance between the Solar System and the Galactic Center is not certain, although estimates since 2000 have remained within a narrow band of roughly 8 kiloparsecs (about 26,000 light-years). The latest estimates from geometric-based methods and standard candles yield broadly consistent values for this distance.
An accurate determination of the distance to the Galactic Center as established from variable stars (e.g. RR Lyrae variables) or standard candles (e.g. red-clump stars) is hindered by numerous effects, which include: an ambiguous reddening law; a bias for smaller values of the distance to the Galactic Center because of a preferential sampling of stars toward the near side of the Galactic bulge owing to interstellar extinction; and an uncertainty in characterizing how a mean distance to a group of variable stars found in the direction of the Galactic bulge relates to the distance to the Galactic Center.
The nature of the Milky Way's bar, which extends across the Galactic Center, is also actively debated, with estimates for its half-length and orientation spanning 1–5 kpc (a short or a long bar) and 10–50°. Certain authors advocate that the Milky Way features two distinct bars, one nestled within the other. The bar is delineated by red-clump stars (see also red giant); however, RR Lyrae variables do not trace a prominent Galactic bar. The bar may be surrounded by a ring called the 5-kpc ring that contains a large fraction of the molecular hydrogen present in the Milky Way, and most of the Milky Way's star formation activity. Viewed from the Andromeda Galaxy, it would be the brightest feature of the Milky Way.
Supermassive black hole
The complex astronomical radio source Sagittarius A appears to be located almost exactly at the Galactic Center and contains an intense compact radio source, Sagittarius A*, which coincides with a supermassive black hole at the center of the Milky Way. Accretion of gas onto the black hole, probably involving an accretion disk around it, would release energy to power the radio source, itself much larger than the black hole.
A study in 2008 which linked radio telescopes in Hawaii, Arizona and California (Very-long-baseline interferometry) measured the diameter of Sagittarius A* to be 44 million kilometers (0.3 AU). For comparison, the radius of Earth's orbit around the Sun is about 150 million kilometers (1.0 AU), whereas the distance of Mercury from the Sun at closest approach (perihelion) is 46 million kilometers (0.3 AU). Thus, the diameter of the radio source is slightly less than the distance from Mercury to the Sun.
Scientists at the Max Planck Institute for Extraterrestrial Physics in Germany using Chilean telescopes have confirmed the existence of a supermassive black hole at the Galactic Center, on the order of 4.3 million solar masses. Later studies have estimated a mass of 3.7 million or 4.1 million solar masses.
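For a sense of scale, a black hole's mass can be converted into the size of its event horizon through the Schwarzschild radius, R_s = 2GM/c². The sketch below is a minimal illustration using standard physical constants and the 4.3 million solar-mass estimate quoted above; it shows that the implied event horizon is a few tens of millions of kilometres across, smaller than the 44-million-kilometre radio source measured for Sagittarius A*.

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg
    AU = 1.496e11      # astronomical unit, m

    M = 4.3e6 * M_sun                 # mass estimate from the text
    r_s = 2 * G * M / c ** 2          # Schwarzschild radius, metres

    print(f"Schwarzschild radius  : {r_s / 1e9:.1f} million km ({r_s / AU:.2f} AU)")
    print(f"event-horizon diameter: {2 * r_s / 1e9:.1f} million km")
    print("measured radio source : 44 million km (0.3 AU)")

The result, an event-horizon diameter of roughly 25 million km, is consistent with the statement above that the radio source is larger than the black hole itself.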
On 5 January 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.
Gamma- and X-ray emitting Fermi bubbles
In November 2010, it was announced that two large elliptical lobe structures of energetic plasma, termed bubbles, which emit gamma- and X-rays, were detected astride the Milky Way galaxy's core. Termed Fermi or eRosita bubbles, they extend up to about 25,000 light years above and below the Galactic Center. The galaxy's diffuse gamma-ray fog hampered prior observations, but the discovery team led by D. Finkbeiner, building on research by G. Dobler, worked around this problem. The 2014 Bruno Rossi Prize went to Tracy Slatyer, Douglas Finkbeiner, and Meng Su "for their discovery, in gamma rays, of the large unanticipated Galactic structure called the Fermi bubbles".
The origin of the bubbles is being researched. The bubbles are connected and seemingly coupled, via energy transport, to the galactic core by columnar structures of energetic plasma termed chimneys. In 2020, for the first time, the lobes were seen in visible light and optical measurements were made. By 2022, detailed computer simulations further confirmed that the bubbles were caused by the Sagittarius A* black hole.
Stellar population
The central cubic parsec around Sagittarius A* contains around 10 million stars. Although most of them are old red giant stars, the Galactic Center is also rich in massive stars. More than 100 OB and Wolf–Rayet stars have been identified there so far. They seem to have all been formed in a single star formation event a few million years ago. The existence of these relatively young stars was a surprise to experts, who expected the tidal forces from the central black hole to prevent their formation.
This paradox of youth is even stronger for stars that are on very tight orbits around Sagittarius A*, such as S2 and S0-102. The scenarios invoked to explain this formation involve either star formation in a massive star cluster offset from the Galactic Center that would have migrated to its current location once formed, or star formation within a massive, compact gas accretion disk around the central black hole. Current evidence favors the latter theory, as formation through a large accretion disk is more likely to lead to the observed discrete edge of the young stellar cluster at roughly 0.5 parsec. Most of these 100 young, massive stars seem to be concentrated within one or two disks, rather than randomly distributed within the central parsec. This observation, however, does not allow definite conclusions to be drawn at this point.
Star formation does not seem to be occurring currently at the Galactic Center, although the Circumnuclear Disk of molecular gas that orbits the Galactic Center at two parsecs seems a fairly favorable site for star formation. Work presented in 2002 by Antony Stark and Chris Martin mapping the gas density in a 400-light-year region around the Galactic Center has revealed an accumulating ring with a mass several million times that of the Sun and near the critical density for star formation.
They predict that in approximately 200 million years, there will be an episode of starburst in the Galactic Center, with many stars forming rapidly and undergoing supernovae at a hundred times the current rate. This starburst may also be accompanied by the formation of galactic relativistic jets, as matter falls into the central black hole. It is thought that the Milky Way undergoes a starburst of this sort every 500 million years.
In addition to the paradox of youth, there is a "conundrum of old age" associated with the distribution of the old stars at the Galactic Center. Theoretical models had predicted that the old stars—which far outnumber young stars—should have a steeply-rising density near the black hole, a so-called Bahcall–Wolf cusp. Instead, it was discovered in 2009 that the density of the old stars peaks at a distance of roughly 0.5 parsec from Sgr A*, then falls inward: instead of a dense cluster, there is a "hole", or core, around the black hole.
Several suggestions have been put forward to explain this puzzling observation, but none is completely satisfactory. For instance, although the black hole would eat stars near it, creating a region of low density, this region would be much smaller than a parsec. Because the observed stars are a fraction of the total number, it is theoretically possible that the overall stellar distribution is different from what is observed, although no plausible models of this sort have been proposed yet.
Gallery
In May 2021, NASA published new images of the Galactic Center, based on surveys from Chandra X-ray Observatory and other telescopes. Images are about 2.2 degrees (1,000 light years) across and 4.2 degrees (2,000 light years) long.
| Physical sciences | Notable galaxies | null |
373601 | https://en.wikipedia.org/wiki/Motte-and-bailey%20castle | Motte-and-bailey castle | A motte-and-bailey castle is a European fortification with a wooden or stone keep situated on a raised area of ground called a motte, accompanied by a walled courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France into the Holy Roman Empire and the Low Countries it controlled during the 11th century, when the design also became popular in the area that became the Netherlands. The Normans introduced the design into England and Wales. Motte-and-bailey castles were adopted in Scotland, Ireland, and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries.
Architecture
Structures
A motte-and-bailey castle was made up of two structures: a motte (a type of mound – often artificial – topped with a wooden or stone structure known as a keep); and at least one bailey (a fortified enclosure built next to the motte). The constructive elements themselves are ancient, but the term motte-and-bailey is a relatively modern one and is not medieval in origin. The word "motte" is the French version of the Latin mota, and in France the word motte, generally used for a clump of turf, came to refer to a turf bank, and by the 12th century was used to refer to the castle design itself. The word "bailey" comes from the Norman-French baille, or basse-cour, referring to a low yard. In medieval sources, the Latin term ballium was used to describe the bailey complex within these castles.
One contemporary account of these structures comes from Jean de Colmieu around 1130, describing the Calais region in northern France. De Colmieu described how the nobles would build "a mound of earth as high as they can and dig a ditch about it as wide and deep as possible. The space on top of the mound is enclosed by a palisade of very strong hewn logs, strengthened at intervals by as many towers as their means can provide. Inside the enclosure is a citadel, or keep, which commands the whole circuit of the defences. The entrance to the fortress is by means of a bridge, which, rising from the outer side of the moat and supported on posts as it ascends, reaches to the top of the mound". At Durham Castle, contemporaries described how the motte-and-bailey superstructure arose from the "tumulus of rising earth" with a keep rising "into thin air, strong within and without" with a "stalwart house ... glittering with beauty in every part".
Motte
Mottes were made out of earth and flattened on top, and it can be very hard to determine whether a mound is artificial or natural without excavation. Some were also built over older artificial structures, such as Bronze Age barrows. The size of mottes varied considerably, from around 3 metres to 30 metres in height (10–100 feet), and they varied just as widely in diameter. The minimum height of 3 metres is usually intended to exclude smaller mounds, which often had non-military purposes. In England and Wales, most mottes were towards the lower end of this range: roughly 69% fell into the shortest band, 24% were of intermediate height, and only 7% were among the tallest. A motte was protected by a ditch around it, which would typically have also been a source of the earth and soil for constructing the mound itself.
A keep and a protective wall would usually be built on top of the motte. Some walls would be large enough to have a wall-walk around them, and the outer walls of the motte and the wall-walk could be strengthened by filling in the gap between the wooden walls with earth and stones, allowing it to carry more weight; this was called a garillum. Smaller mottes could support only simple towers with room for a few soldiers, whilst larger mottes could be equipped with a much grander building. Many wooden keeps were designed with bretèches, or brattices, small balconies that projected from the upper floors of the building, allowing defenders to cover the base of the fortification wall. The early 12th-century chronicler Lambert of Ardres described the wooden keep on top of the motte at the castle of Ardres, where the "first storey was on the surface of the ground, where were cellars and granaries, and great boxes, tuns, casks, and other domestic utensils. In the storey above were the dwelling and common living rooms of the residents in which were the larders, the rooms of the bakers and butlers, and the great chamber in which the lord and his wife slept ... In the upper storey of the house were garret rooms ... In this storey also the watchmen and the servants appointed to keep the house took their sleep". Wooden structures on mottes could be protected by skins and hides to prevent their being easily set alight during a siege.
Bailey
The bailey was an enclosed courtyard overlooked by the high motte and surrounded by a wooden fence called a palisade and another ditch. The bailey was often kidney-shaped to fit against a circular motte but could be made in other shapes according to the terrain. The bailey would contain a wide range of buildings, including a hall, kitchens, a chapel, barracks, stores, stables, forges or workshops, and was the centre of the castle's economic activity. The bailey was connected to the motte by a bridge, or, as often seen in England, by steps cut into the motte. Typically the ditch of the motte and the bailey joined, forming a figure of eight around the castle. Wherever possible, nearby streams and rivers would be dammed or diverted, creating water-filled moats, artificial lakes and other forms of water defences.
In practice, there were many variations on this common design. A castle could have more than one bailey: at Warkworth Castle an inner and an outer bailey were constructed, or alternatively, several baileys could flank the motte, as at Windsor Castle. Some castles had two mottes, such as Lincoln. Some mottes could be square instead of round, such as at Cabal Tump (Herefordshire). Instead of single ditches, occasionally double-ditch defences were built, as seen at Berkhamsted. Local geography and the intent of the builder produced many unique designs.
Construction and maintenance
Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand. Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.
Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods. Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved. The largest mottes in England, such as that of Thetford Castle, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000. Contemporary accounts talk of some mottes being built in a matter of days, although these low figures have led to suggestions by historians that either these figures were an underestimate, or that they refer to the construction of a smaller design than that later seen on the sites concerned. Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build. This contrasted favourably with stone keeps of the period, which typically took up to ten years to build. Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England. Where the local workforce had to be paid – such as at Clones in Ireland, built in 1211 using imported labourers – the costs would rise quickly, in this case reaching £20.
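Assuming a workforce size, the man-day estimates quoted above translate directly into construction times. The sketch below is purely illustrative: the crew sizes and the 30-day working month are assumptions, while the man-day figures are those given in the text; with crews of one to two hundred labourers it reproduces the several-month timescale suggested by historians for the largest mottes.

    def months_to_build(man_days, crew_size, days_per_month=30):
        """Rough construction time, in months, for an earthwork motte."""
        return man_days / crew_size / days_per_month

    largest_motte = 24_000   # man-days, e.g. Thetford Castle (from the text)
    small_motte = 1_000      # man-days, smaller mottes (from the text)

    for crew in (100, 200):  # assumed crew sizes
        print(f"largest motte, crew of {crew}: ~{months_to_build(largest_motte, crew):.1f} months")
    print(f"small motte, crew of 50   : ~{months_to_build(small_motte, 50):.1f} months")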
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be laid in alternation to add strength to the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. Although militarily a motte was, as Norman Pounds describes it, "almost indestructible", it required frequent maintenance. Soil wash was a problem, particularly with steeper mounds, and mottes could be clad with wood or stone slabs to protect them. Over time, some mottes suffered from subsidence or damage from flooding, requiring repairs and stabilisation work.
Although motte-and-bailey castles are the best-known castle design, they were not always the most numerous in any given area. A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils. Another factor may have been speed, as ringworks were faster to build than mottes. Some ringwork castles were later converted into motte-and-bailey designs by filling in the centre of the ringwork to produce a flat-topped motte. The reasons why this decision was taken are unclear; motte-and-bailey castles may have been felt to be more prestigious, or easier to defend; another theory is that like the terpen in the Netherlands, or Vorburg and Hauptburg in Lower Rhineland, raising the height of the castle was done to create a drier site.
History
Emergence of the design
The motte-and-bailey castle is a particularly western and northern European phenomenon, most numerous in France and Britain, but also seen in Denmark, Germany, Southern Italy and occasionally beyond. European castles first emerged between the Loire river and the Rhine in the 9th and 10th centuries, after the fall of the Carolingian Empire resulted in its territory being divided among individual lords and princes and local territories became threatened by the Magyars and the Norse. Against this background, various explanations have been put forward to explain the origins and spread of the motte-and-bailey design across western and northern Europe; there is often a tension among the academic community between explanations that stress military and social reasons for the rise of this design. One suggestion is that these castles were built particularly in order to protect against external attack – the Angevins, it is argued, began to build them to protect against the Viking raids, and the design spread to deal with the attacks along the Slav and Hungarian frontiers. Another argument is that, given the links between this style of castle and the Norman style, who were of Viking descent, it was in fact originally a Viking design, transported to Normandy and Anjou. The motte-and-bailey castle was certainly effective against assault, although as historian André Debord suggests, the historical and archaeological record of the military operation of motte-and-bailey castles remains relatively limited.
An alternative approach focuses on the links between this form of castle and what can be termed a feudal mode of society. The spread of motte-and-bailey castles was usually closely tied to the creation of local fiefdoms and feudal landowners, and areas without this method of governance rarely built these castles. Yet another theory suggests that the design emerged as a result of the pressures of space on ringworks and that the earliest motte-and-baileys were converted ringworks. Finally, there may be a link between the local geography and the building of motte-and-bailey castles, which are usually built on low-lying areas, in many cases subject to regular flooding. Regardless of the reasons behind the initial popularity of the motte-and-bailey design, however, there is widespread agreement that the castles were first widely adopted in Normandy and Angevin territory in the 10th and 11th centuries.
Initial development, 10th and 11th centuries
The earliest purely documentary evidence for motte-and-bailey castles in Normandy and Angers comes from between 1020 and 1040, but a combination of documentary and archaeological evidence pushes the date for the first motte and bailey castle, at Vincy, back to 979. The castles were built by the more powerful lords of Anjou in the late 10th and 11th centuries, in particular Fulk III and his son, Geoffrey II, who built a great number of them between 987 and 1060. Many of these earliest castles would have appeared quite crude and rustic by later standards, belying the power and prestige of their builders. William the Conqueror, as the Duke of Normandy, is believed to have adopted the motte-and-bailey design from neighbouring Anjou. Duke William went on to prohibit the building of castles without his consent through the Consuetudines et Justicie, with his legal definition of castles centring on the classic motte-and-bailey features of ditching, banking and palisading.
By the 11th century, castles were built throughout the Holy Roman Empire, which then spanned central Europe. They now typically took the form of an enclosure on a hilltop, or, on lower ground, a tall, free-standing tower (German Bergfried). The largest castles had well-defined inner and outer courts, but no mottes. The motte-and-bailey design began to spread into Alsace and the northern Alps from France during the first half of the 11th century, spreading further into Bohemia and Austria in the subsequent years. This form of castle was closely associated with the colonisation of newly cultivated areas within the Empire, as new lords were granted lands by the emperor and built castles close to the local gród, or town. Motte-and-bailey castle building substantially enhanced the prestige of local nobles, and it has been suggested that their early adoption was because they were a cheaper way of imitating the more prestigious Höhenburgen built on high ground, but this is usually regarded as unlikely. In many cases, bergfrieds were converted into motte and bailey designs by burying existing castle towers within the mounds.
In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. The first of these was the establishment by the new king of royal castles in key strategic locations, including many towns. These urban castles could make use of the existing town's walls and fortification, but typically required the demolition of local houses to make space for them. This could cause extensive damage: records suggest that in Lincoln 166 houses were destroyed in the construction of Lincoln Castle, and that 113 were destroyed for the castle in Norwich and 27 for the castle in Cambridge. The second and third waves of castle building in the late-11th century were led by the major magnates and then the more junior knights on their new estates. Some regional patterns in castle building can be seen – relatively few castles were built in East Anglia compared to the west of England or the Marches, for example; this was probably due to the relatively settled and prosperous nature of the east of England and reflected a shortage of unfree labour for constructing mottes. In Wales, the first wave of the Norman castles was again predominantly made of wood in a mixture of motte-and-bailey and ringwork designs. The Norman invaders spread up the valleys, using this form of castle to occupy their new territories. After the Norman conquest of England and Wales, the building of motte-and-bailey castles in Normandy accelerated as well, resulting in a broad swath of these castles across the Norman territories, around 741 motte-and-bailey castles in England and Wales alone.
Further expansion, 12th and 13th centuries
Having become well established in Normandy, Germany and Britain, motte-and-bailey castles began to be adopted elsewhere, mainly in northern Europe, during the 12th and 13th centuries. Conflict through the Low Countries encouraged castle building in a number of regions from the late 12th century to the 14th century. In Flanders, the first motte and bailey castles began relatively early at the end of the 11th century. The rural motte-and-bailey castles followed the traditional design, but the urban castles often lacked the traditional baileys, using parts of the town to fulfil this role instead. Motte-and-bailey castles in Flanders were particularly numerous in the south along the Lower Rhine, a fiercely contested border. Further along the coast in Friesland, the relatively decentralised, egalitarian society initially discouraged the building of motte and bailey castles, although terpen, raised "dwelling mounds" which lacked towers and were usually lower in height than a typical motte, were created instead. By the end of the medieval period, however, the terpen gave way to hege wieren, non-residential defensive towers, often on motte-like mounds, owned by the increasingly powerful nobles and landowners. On Zeeland the local lords had a high degree of independence during the 12th and 13th centuries, owing to the wider conflict for power between neighbouring Flanders and Friesland. The Zeeland lords had also built terpen mounds, but these gave way to larger werven constructions–effectively mottes–which were later termed bergen. Sometimes both terpen and werven are called vliedburg, or "refuge castles". During the 12th and 13th centuries a number of terpen mounds were turned into werven mottes, and some new werven mottes were built from scratch. Around 323 known or probable motte and bailey castles of this design are believed to have been built within the borders of the modern Netherlands.
In neighbouring Denmark, motte-and-bailey castles appeared somewhat later in the 12th and 13th centuries and in more limited numbers than elsewhere, due to the less feudal society. Except for a handful of motte and bailey castles in Norway, built in the first half of the 11th century and including the royal residence in Oslo, the design did not play a role further north in Scandinavia.
The Norman expansion into Wales slowed in the 12th century but remained an ongoing threat to the remaining native rulers. In response, the Welsh princes and lords began to build their own castles, frequently motte-and-bailey designs, usually in wood. There are indications that this may have begun from 1111 onwards under Prince Cadwgan ap Bleddyn, with the first documentary evidence of a native Welsh castle being at Cymmer in 1116. These timber castles, including Tomen y Rhodywdd, Tomen y Faerdre, Gaer Penrhôs, were of a quality equivalent to the Norman fortifications in the area, and it can prove difficult to distinguish the builders of some sites from the archaeological evidence alone.
Motte-and-bailey castles in Scotland emerged as a consequence of the centralising of royal authority in the 12th century. David I encouraged Norman and French nobles to settle in Scotland, introducing a feudal mode of landholding and the use of castles as a way of controlling the contested lowlands. The quasi-independent polity of Galloway, which had resisted the rule of David and his predecessors, was a particular focus for this colonisation. The size of these Scottish castles, primarily wooden motte and bailey constructions, varied considerably, from larger designs such as the Bass of Inverurie to smaller castles like Balmaclellan.
Motte-and-bailey castles were introduced to Ireland following the Norman invasion of Ireland that began between 1166 and 1171 under first Richard de Clare and then Henry II of England, with the occupation of southern and eastern Ireland by a number of Anglo-Norman barons. The rapid Norman success depended on key economic and military advantages; their cavalry enabled Norman successes in battles, and castles enabled them to control the newly conquered territories. The new lords rapidly built castles to protect their possessions; most of these were motte-and-bailey constructions, many of them strongly defended. Unlike Wales, the indigenous Irish lords do not appear to have constructed their own castles in any significant number during the period. Between 350 and 450 motte-and-bailey castles are believed to remain today, although the identification of these earthwork remains can be contentious.
A small number of motte-and-bailey castles were built outside of northern Europe. In the late-12th century, the Normans invaded southern Italy and Sicily; although they had the technology to build more modern designs, in many cases wooden motte-and-bailey castles were built instead for reasons of speed. The Italians came to refer to a range of different castle types as motta, however, and there may not have been as many genuine motte-and-bailey castles in southern Italy as was once thought on the basis of the documentary evidence alone. In addition, there is evidence of the Norman crusaders building a motte and bailey using sand and wood in Egypt in 1221 during the Fifth Crusade.
Conversion and decline, 13th–14th centuries
Motte-and-bailey castles became a less popular design in the mid-medieval period. In France, they were not built after the start of the 12th century, and mottes ceased to be built in most of England after around 1170, although they continued to be erected in Wales and along the Marches. Many motte-and-bailey castles were occupied relatively briefly; in England, many had been abandoned or allowed to lapse into disrepair by the 12th century. In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries.
One factor was the introduction of stone into castle buildings. The earliest stone castles had emerged in the 10th century, with stone keeps being built on mottes along the Catalonia frontier and several, including Château de Langeais, in Angers. Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons. Some existing motte-and-bailey castles were converted to stone, with the keep and the gatehouse usually the first parts to be upgraded. Shell keeps were built on many mottes, circular stone shells running around the top of the motte, sometimes protected by a further chemise, or low protective wall, around the base. By the 14th century, a number of motte and bailey castles had been converted into powerful stone fortresses.
Newer castle designs placed less emphasis on mottes. Square Norman keeps built in stone became popular following the first such construction in Langeais in 994. Several were built in England and Wales after the conquest; by 1216 there were around 100 in the country. These massive keeps could be either erected on top of settled, well-established mottes or could have mottes built around them – so-called "buried" keeps. The ability of mottes, especially newly built mottes, to support the heavier stone structures, was limited, and many needed to be built on fresh ground. Concentric castles, relying on several lines of baileys and defensive walls, made increasingly little use of keeps or mottes at all.
Across Europe, motte-and-bailey construction came to an end. At the end of the 12th century, the Welsh rulers began to build castles in stone, primarily in the principality of North Wales and usually along the higher peaks where mottes were unnecessary. In Flanders, a decline came in the 13th century as feudal society changed. In the Netherlands, cheap brick started to be used in castles from the 13th century onwards in place of earthworks, and many mottes were levelled, to help develop the surrounding, low-lying fields; these "levelled mottes" are a particularly Dutch phenomenon. In Denmark, motte and baileys gave way in the 14th century to a castrum-curia model, where the castle was built with a fortified bailey and a fortified mound, somewhat smaller than the typical motte. By the 12th century, the castles in Western Germany began to thin in number, due to changes in land ownership, and various mottes were abandoned. In Germany and Denmark, motte-and-bailey castles also provided the model for the later wasserburg, or "water castle", a stronghold and bailey construction surrounded by water, and widely built in the late medieval period.
Today
In England, motte-and-bailey earthworks were put to various uses over later years; in some cases, mottes were turned into garden features in the 18th century, or reused as military defences during the Second World War. Today, almost no mottes of motte-and-bailey castles remain in regular use in Europe, with one of the few exceptions being that at Windsor Castle, converted for the storage of royal documents. Another example is Durham Castle in northern England, where the round tower is used for student accommodation.
The landscape of northern Europe remains scattered with mottes and their earthworks, and many form popular tourist attractions.
| Technology | Fortification | null |
373775 | https://en.wikipedia.org/wiki/Comb | Comb | A comb is a tool consisting of a shaft that holds a row of teeth for pulling through the hair to clean, untangle, or style it. Combs have been used since prehistoric times, having been discovered in very refined forms from settlements dating back to 5,000 years ago in Persia.
Weaving combs made of whalebone dating to the middle and late Iron Age have been found on archaeological digs in Orkney and Somerset.
Description
Combs are made of a shaft and teeth that are placed at a perpendicular angle to the shaft. Combs can be made out of a number of materials, most commonly plastic, metal, or wood. In antiquity, horn and whalebone were sometimes used. Combs made from ivory and tortoiseshell were once common but concerns for the animals that produce them have reduced their usage. Wooden combs are largely made of boxwood, cherry wood, or other fine-grained wood. Good quality wooden combs are usually handmade and polished.
Combs come in various shapes and sizes depending on what they are used for. A hairdressing comb may have a thin, tapered handle for parting hair and close teeth. Common hair combs usually have wider teeth halfway and finer teeth for the rest of the comb. Hot combs were used solely for straightening hair during the colonial era in North America.
A hairbrush comes in both manual and electric models. It is larger than a comb, and is also commonly used for shaping, styling, and cleaning hair. A combination comb and hairbrush was patented in the 19th century.
Uses
Combs can be used for many purposes. Historically, their main purpose was securing long hair in place, decorating the hair, matting sections of hair for dreadlocks, or keeping a kippah or skullcap in place. In Spain, a peineta is a large decorative comb used to keep a mantilla in place.
In industry and craft, combs are used in separating cotton fibres from seeds and other debris (the cotton gin, a mechanized version of the comb, is one of the machines that ushered in the Industrial Revolution). A comb is used to distribute colors in paper marbling to make the swirling colour patterns in comb-marbled paper.
Combs are also a tool used by police investigators to collect hair and dandruff samples that can be used in ascertaining dead or living persons' identities, as well as their state of health and toxicological profiles.
Hygiene
Sharing combs is a common cause of parasitic infections much like sharing a hat, as one user can leave a comb with eggs or live parasites, facilitating the transmission of lice, fleas, mites, fungi, and other undesirables. Siblings are also more likely to pass on nits to each other if they share a comb.
Making music
Stringing a plant's leaf or a piece of paper over one side of the comb and humming with cropped lips on the opposite side dramatically increases the high-frequency harmonic content of the hum produced by the human voice box, and the resulting spread sound spectrum can be modulated by changing the resonating frequency of the oral cavity. This was the inspiration for the kazoo, a membranophone.
The comb is also a lamellophone. Comb teeth have harmonic qualities of their own, determined by their shape, length, and material. A comb with teeth of unequal length, capable of producing different notes when picked, eventually evolved into the thumb piano and music box.
Types
Chinese combs
In China, combs originated about 6,000 years ago during the late Neolithic period. Chinese usage distinguishes between thick-toothed combs and thin-toothed combs, each with its own name. A well-known form produced in Changzhou is the Changzhou comb; the Palace Comb Factory (also called the Changzhou Comb Factory), in the city of Changzhou, has operated since the 5th century and continues to produce handmade wooden combs to this day. Chinese combs were also introduced into Japan during the Nara period, where they came to be known by a generic Japanese name.
Japanese combs
In Japan, combs are referred to as kushi. Indigenous Japanese combs began to be used about 6,000 years ago, in the Jōmon era. In the Nara period, Chinese combs from the Tang dynasty were introduced to Japan. Another form of comb in Japan is the Satsuma comb, which started to appear around the 17th century and was produced by samurai warriors of the Satsuma clan as a side occupation.
Liturgical comb
A liturgical comb is a decorated comb used ceremonially in both Catholic and Orthodox Christianity during the Middle Ages, and in the Byzantine Rite up to this day.
Nit comb
Specialized combs such as "flea combs" or "nit combs" can be used to remove macroscopic parasites and cause them damage by combing. A comb with teeth fine enough to remove nits is sometimes called a "fine-toothed comb", as in the metaphoric usage "go over [something] with a fine-toothed comb", meaning to search closely and in detail. Sometimes in this meaning, "fine-toothed comb" has been reanalysed as "fine toothcomb" and then shortened to "toothcomb", or changed into forms such as "the finest of toothcombs".
Afro pick
An Afro pick is a type of comb having long, thick teeth which is usually used on kinky or Afro-textured hair. It is longer and thinner than the typical comb, and it is sometimes worn in the hair. The history of the Afro pick dates back at least 5,000 years, as a practical tool that may also have cultural and political meaning.
Unbreakable plastic comb
An unbreakable plastic comb is a comb that, despite being made of plastic rather than (more expensive) metal, does not shatter into multiple pieces if dropped on a hard surface such as bathroom tiles, a hardwood floor, or pavement. Such combs were introduced in the mid-twentieth century. Today most plastic combs are unbreakable, as manufacturers now understand what causes brittleness in these products.
Modern artisan combs
Modern artisan combs crafted from a wide variety of new and recycled materials have become popular over recent years. Used skateboard decks, vinyl records, brass, titanium alloy, acrylic, sterling silver, and exotic wood are a few of the materials being used.
Gallery
| Biology and health sciences | Hygiene products | Health |
374054 | https://en.wikipedia.org/wiki/Black%20rhinoceros | Black rhinoceros | The black rhinoceros (Diceros bicornis), sometimes also called the hook-lipped rhinoceros, is a species of rhinoceros, native to eastern Africa and southern Africa, including Angola, Botswana, Kenya, Malawi, Mozambique, Namibia, South Africa, Eswatini, Tanzania, Zambia, and Zimbabwe. Although the species is referred to as black, its colours vary from brown to grey. It is the only extant species of the genus Diceros.
The other African rhinoceros is the white rhinoceros (Ceratotherium simum). The word "white" in the name "white rhinoceros" is often said to be a misinterpretation of the Afrikaans word (Dutch ) meaning wide, referring to its square upper lip, as opposed to the pointed or hooked lip of the black rhinoceros. These species are now sometimes referred to as the square-lipped (for white) or hook-lipped (for black) rhinoceros.
The species overall is classified as critically endangered (even though the south-western black rhinoceros is classified as near threatened) and is threatened by multiple factors including poaching and habitat reduction. Three subspecies have been declared extinct, including the western black rhinoceros, which was declared extinct by the International Union for Conservation of Nature (IUCN) in 2011. The IUCN estimates that 3,142 mature individuals remain in the wild.
Taxonomy
The species was first named Rhinoceros bicornis by Carl Linnaeus in the 10th edition of his Systema naturae in 1758. The name means "double-horned rhinoceros". There is some confusion about what exactly Linnaeus conceived under this name as this species was probably based upon the skull of a single-horned Indian rhinoceros (Rhinoceros unicornis), with a second horn artificially added by the collector. Such a skull is known to have existed and Linnaeus even mentioned India as origin of this species. However he also referred to reports from early travellers about a double-horned rhino in Africa and when it emerged that there is only one, single-horned species of rhino in India, Rhinoceros bicornis was used to refer to the African rhinos (the white rhino only became recognised in 1812). In 1911 this was formally fixed and the Cape of Good Hope officially declared the type locality of the species.
Subspecies
The intraspecific variation in the black rhinoceros has been discussed by various authors and is not finally settled. The most accepted scheme considers seven or eight subspecies, of which three became extinct in historical times and one is on the brink of extinction:
Southern black rhinoceros or Cape black rhinoceros (D. b. bicornis) – Extinct. Once abundant from the Cape of Good Hope to Transvaal, South Africa and probably into the south of Namibia, this was the largest subspecies. It became extinct due to excessive hunting and habitat destruction around 1850.
North-eastern black rhinoceros (D. b. brucii) – Extinct. Formerly central Sudan, Eritrea, northern and southeastern Ethiopia, Djibouti and northern and southeastern Somalia. Relict populations in northern Somalia vanished during the early 20th century.
Chobe black rhinoceros (D. b. chobiensis) – A local subspecies restricted to the Chobe Valley in southeastern Angola, Namibia (Zambezi Region) and northern Botswana. Nearly extinct, possibly only one surviving specimen in Botswana.
Uganda black rhinoceros (D. b. ladoensis) – Former distribution from South Sudan, across Uganda into western Kenya and southwesternmost Ethiopia. Black rhinos are considered extinct across most of this area and its conservational status is unclear. Probably surviving in Kenyan reserves.
Western black rhinoceros (D. b. longipes) – Extinct. Once lived in South Sudan, northern Central African Republic, southern Chad, northern Cameroon, northeastern Nigeria and south-eastern Niger. The range possibly stretched west to the Niger River in western Niger, though this is unconfirmed. The evidence from Liberia and Burkina Faso mainly rests upon the existence of indigenous names for the rhinoceros. A far greater former range in West Africa as proposed earlier is doubted by a 2004 study. The last known wild specimens lived in northern Cameroon. In 2006 an intensive survey across its putative range in Cameroon failed to locate any, leading to fears that it was extinct in the wild. On 10 November 2011 the IUCN declared the western black rhinoceros extinct.
Eastern black rhinoceros (D. b. michaeli) – Had a historical distribution from South Sudan, Uganda, Ethiopia, down through Kenya into north-central Tanzania. Today, its range is limited primarily to Kenya, Rwanda and Tanzania. In addition, its population is in South Africa's Addo Elephant National Park.
South-central black rhinoceros (D. b. minor) – Most widely distributed subspecies, characterised by a compact body, proportionally large head and prominent skin-folds. Ranged from north-eastern South Africa (KwaZulu-Natal) to northeastern Tanzania and southeastern Kenya. Preserved in reserves throughout most of its former range but probably extinct in eastern Angola, southern Democratic Republic of Congo and possibly Mozambique. Extinct but reintroduced in Malawi, Botswana, and Zambia. It also ranges in parts of Namibia and inhabits national parks in South Africa.
South-western black rhinoceros (D. b. occidentalis) – A small subspecies, adapted to survival in desert and semi-desert conditions. Originally distributed in north-western Namibia and southwestern Angola, today restricted to wildlife reserves in Namibia with sporadic sightings in Angola. These populations are often referred to D. b. bicornis or D. b. minor, but some experts consider them a subspecies in their own right.
The most widely adopted alternative scheme only recognizes five subspecies or "eco-types": D. b. bicornis, D. b. brucii, D. b. longipes, D. b. michaeli, and D. b. minor. This concept is also used by the IUCN, listing three surviving subspecies and recognizing D. b. brucii and D. b. longipes as extinct. The most important difference to the above scheme is the inclusion of the extant southwestern subspecies from Namibia in D. b. bicornis instead of in its own subspecies, whereupon the nominal subspecies is considered extant.
Evolution
The rhinoceros originated in the Eocene about fifty million years ago alongside other members of Perissodactyla. The last common ancestor of living rhinoceroses belonging to the subfamily Rhinocerotinae is suggested to have lived around 16 million years ago.
The white rhinoceros and black rhinoceros are more closely related to each other than to other living rhinoceroses. Rhinoceroses closely related to the black and the white rhinoceros were present in Africa by the Late Miocene about 10 million years ago, and possibly as early as 17 million years ago. The two species may have descended from the Eurasian species "Ceratotherium" neumayri, but this is disputed. A 2021 genetic analysis estimated the split between the white and black rhinoceros at around 7 million years ago. After this split, the direct ancestor of Diceros bicornis, Diceros praecox was present in the Pliocene of East Africa (Ethiopia, Kenya, Tanzania). D. bicornis is suggested to have evolved from this species during the Late Pliocene – Early Pleistocene, with the oldest definitive record at the Pliocene–Pleistocene boundary c. 2.5 million years ago at Koobi Fora, Kenya.
A cladogram showing the relationships of recent and Late Pleistocene rhinoceros species (excluding Stephanorhinus hemitoechus), based on whole nuclear genomes, was published by Liu et al. in 2021.
Description
An adult black rhinoceros stands about 1.4 to 1.8 metres high at the shoulder. An adult typically weighs roughly 800 to 1,400 kilograms, although unusually large male specimens have been reported at considerably more. The cows are smaller than the bulls. Two horns on the skull are made of keratin, with the larger front horn typically around half a metre long and exceptionally much longer.
The longest recorded black rhinoceros horns have measured well over a metre in length. Sometimes a third, smaller horn may develop. These horns are used for defense, intimidation, and digging up roots and breaking branches during feeding. The black rhino is smaller than the white rhino and close in size to the Javan rhino of Indonesia. It has a pointed and prehensile upper lip, which it uses to grasp leaves and twigs when feeding, whereas the white rhinoceros has square lips used for eating grass. The black rhinoceros can also be distinguished from the white rhinoceros by its size, smaller skull, and ears; and by the position of the head, which is held higher than that of the white rhinoceros, since the black rhinoceros is a browser and not a grazer.
Their thick-layered skin helps to protect black rhinos from thorns and sharp grasses. Their skin harbors external parasites, such as mites and ticks, which may be eaten by oxpeckers and egrets. Such behaviour was originally thought to be an example of mutualism, but recent evidence suggests that oxpeckers may be parasites instead, feeding on rhino blood. It is commonly assumed that black rhinos have poor eyesight, relying more on hearing and smell. However, studies have shown that their eyesight is comparatively good, at about the level of a rabbit. Their ears have a relatively wide rotational range to detect sounds. An excellent sense of smell alerts rhinos to the presence of predators.
Distribution
Prehistorical range
As with many other components of the African large mammal fauna, black rhinos probably had a wider range in the northern part of the continent in prehistoric times than today. However this seems to have not been as extensive as that of the white rhino. Unquestionable fossil remains have not yet been found in this area and the abundant petroglyphs found across the Sahara desert are often too schematic to unambiguously decide whether they depict black or white rhinos. Petroglyphs from the Eastern Desert of southeastern Egypt relatively convincingly show the occurrence of black rhinos in these areas in prehistoric times.
Historical and extant range
The natural range of the black rhino included most of southern and eastern Africa, except the Congo Basin, the tropical rainforest areas along the Bight of Benin, the Ethiopian Highlands, and the Horn of Africa. Its former native occurrence in the extremely dry parts of the Kalahari Desert of southwestern Botswana and northwestern South Africa is uncertain. It was abundant in an area stretching from Eritrea and Sudan through South Sudan to southeastern Niger, especially around Lake Chad. Its occurrence further to the west is questionable, although this is often claimed in literature.
Today it is found only in protected nature reserves, having vanished from many countries in which it once thrived, especially in the west and north of its former range. The remaining populations are highly scattered. Some specimens have been relocated from their habitat to better protected locations, sometimes across national frontiers. The black rhino has been successfully reintroduced to Malawi since 1993, where it became extinct in 1990. Similarly it was reintroduced to Zambia (North Luangwa National Park) in 2008, where it had become extinct in 1998, and to Botswana (extinct in 1992, reintroduced in 2003).
In May 2017, 18 eastern black rhinos were translocated from South Africa to the Akagera National Park in Rwanda. The park had around 50 rhinos in the 1970s but the numbers dwindled to zero by 2007. In September 2017, the birth of a calf raised the population to 19. The park has dedicated rhino monitoring teams to protect the animals from poaching.
In October 2017, The governments of Chad and South Africa reached an agreement to transfer six black rhinos from South Africa to Zakouma National Park in Chad. Once established, this will be the northernmost population of the species. The species was wiped out from Chad in the 1970s and is under severe pressure from poaching in South Africa. The agreement calls for South African experts to assess the habitat, local management capabilities, security and the infrastructure before the transfer can take place.
Behavior
Black rhinos are generally thought to be solitary, with the only strong bond being between a mother and her calf. In addition, bulls and cows have a consort relationship during mating, and subadults and young adults frequently form loose associations with older individuals of either sex. They are not very territorial and their ranges often overlap those of other rhinos. Home ranges vary depending on season and the availability of food and water. Generally, rhinos have smaller home ranges and occur at higher density in habitats with plenty of food and water available, and vice versa where resources are scarce. The sex and age of an individual black rhino influence home range size, with the ranges of cows larger than those of bulls, especially when accompanied by a calf. Reported home ranges in the Serengeti are considerably larger than those in the Ngorongoro. Black rhinos have also been observed to frequent certain areas, usually on high ground, where they rest repeatedly, called "houses". Home ranges can vary from 2.6 km² to 133 km², with smaller home ranges having more abundant resources than larger ones.
The sleep patterns of black rhinos in captivity and on reserves have recently been studied, showing that males sleep on average nearly twice as long as females. Another factor that plays a role in their sleeping patterns is the location where they choose to sleep. Although they do not sleep any longer in captivity, they do sleep at different times, depending on their location in captivity or the section of the park they occupy.
Black rhinos have a reputation for being extremely aggressive, and charge readily at perceived threats. They have even been observed to charge tree trunks and termite mounds. Black rhinos will fight each other, and they have the highest rates of mortal combat recorded for any mammal: about 50 percent of males and 30 percent of females die from combat-related injuries. Adult rhinos normally have no natural predators, due to their imposing size, thick skin, and deadly horns. However, adult black rhinos have fallen prey to crocodiles in exceptional circumstances. Calves and, very seldom, small sub-adults may be preyed upon by lions as well.
Black rhinos follow the same trails that elephants use to get from foraging areas to water holes. They also use smaller trails when they are browsing. They are very fast and can reach considerable speeds, running on their toes.
While it was assumed all rhinoceros are short-sighted, a study involving black rhinoceros retinas suggests they have better eyesight than previously assumed.
Diet
Black rhinos are herbivorous browsers that eat leafy plants, twigs, branches, shoots, thorny wood bushes, small trees, legumes, fruit, and grass. The optimum habitat seems to be one consisting of thick scrub and bushland, often with some woodland, which supports the highest densities. Their diet can reduce the number of woody plants, which may benefit grazers (who focus on leaves and stems of grass), but not competing browsers (who focus on leaves and stems of trees, shrubs or herbs). The species has been known to eat up to 220 species of plants. Nevertheless, black rhinos have a significantly restricted diet, with a preference for a few key plant species and a tendency to select leafy species in the dry season. Outside the dry season, the plants they seem most attracted to are woody plants: 18 species of woody plants are known to form part of the black rhinoceros diet, with a further 11 species that could possibly be part of it as well. Black rhinos also tend to choose food based on quality over quantity, and researchers find larger populations in areas where the food is of better quality. Black rhinos show a preference for Acacia species, as well as plants in the family Euphorbiaceae.
In accordance with their feeding habits, adaptations of the chewing apparatus have been described for rhinos. The black rhinoceros has a two-phase chewing activity, with a cutting ectoloph and more grinding lophs on the lingual side. The black rhinoceros can also be considered a more challenging herbivore to feed in captivity than its grazing relatives. It can live up to 5 days without water during drought. Black rhinos live in several habitats including bushlands, riverine woodland, marshes, and their least favorable, grasslands. Habitat preferences are assessed in two ways: the amount of sign found in the different habitats, and the habitat content of home ranges and core areas. Habitat types are also identified based on the composition of dominant plant types in each area. Different subspecies live in different habitats, including Vachellia and Senegalia savanna, Euclea bushlands, Albany thickets, and even desert.
They browse for food in the morning and evening. They are selective browsers, but studies done in Kenya show that they balance this selectivity against availability in order to satisfy their nutritional requirements. In the hottest part of the day they are most inactive, resting, sleeping, and wallowing in mud. Wallowing helps cool body temperature during the day and protects against parasites. When black rhinos browse, they use their lips to strip branches of their leaves. Competition with elephants is causing the black rhinoceros to shift its diet; the black rhinoceros alters its selectivity in the absence of the elephant.
There is some variance in the exact chemical composition of rhinoceros horns. This variation is directly linked to diet and can be used as a means of rhino identification. Horn composition has helped scientists pinpoint the original location of individual rhinos, allowing for law enforcement to more accurately and more frequently identify and penalize poachers.
Communication
Black rhinos use several forms of communication. Because of their solitary nature, scent marking is often used to identify themselves to other black rhinos. Urine spraying occurs on trees and bushes, around water holes and feeding areas; cows spray urine more often when receptive for breeding. Defecation sometimes occurs at the same spots used by different black rhinos, such as around feeding stations and watering tracks. Coming upon these spots, rhinos will smell to see who is in the area and add their own marking. Bulls and cows respond differently when presented with adult feces than with subadult feces. The urine and feces of one black rhinoceros help other black rhinoceroses determine its age, sex, and identity. Less commonly, they will rub their heads or horns against tree trunks to scent-mark.
The black rhino has powerful tube-shaped ears that can freely rotate in all directions. This highly developed sense of hearing allows black rhinos to detect sound over vast distances.
Reproduction
The adults are solitary in nature, coming together only for mating. Mating does not have a seasonal pattern but births tend to be towards the end of the rainy season in more arid environments.
When in season, cows will mark dung piles. Bulls will follow cows when they are in season; when a cow defecates, the bull will scrape and spread the dung, making it more difficult for rival adult bulls to pick up her scent trail.
Courtship behaviors before mating include snorting and sparring with the horns among males. Another courtship behavior, called bluff and bluster, sees the black rhino snort and swing its head from side to side aggressively before repeatedly running away. Breeding pairs stay together for 2–3 days, and sometimes even weeks. They mate several times a day over this period, and copulation lasts for half an hour or longer, sometimes more than an hour.
The gestation period for a black rhino is 15 months. The single calf weighs about at birth, and can follow its mother around after just three days. Weaning occurs at around 2 years of age for the offspring. The mother and calf stay together for 2–3 years until the next calf is born; female calves may stay longer, forming small groups. The young are occasionally taken by hyenas and lions. Sexual maturity is reached from 5 to 7 years old for females, and 7 to 8 years for males. The life expectancy in natural conditions (without poaching pressure) is from 35 to 50 years.
Conservation
For most of the 20th century the continental black rhino was the most numerous of all rhino species. Around 1900 there were probably several hundred thousand living in Africa. During the latter half of the 20th century their numbers were severely reduced from an estimated 70,000 in the late 1960s to only 10,000 to 15,000 in 1981. In the early 1990s the number dipped below 2,500, and in 2004 it was reported that only 2,410 black rhinos remained. According to the International Rhino Foundation—housed in Yulee, Florida at White Oak Conservation, which breeds black rhinos—the total African population had recovered to 4,240 by 2008 (which suggests that the 2004 number was low). By 2009 the population of 5,500 was either steady or slowly increasing.
In 1992, nine black rhinos were brought from Chete National Park, Zimbabwe, to Australia via Cocos Island. After the natural deaths of the males in the group, four males were brought in from the United States and have since adapted well to captivity and the new climate. Calves and some subadults are preyed on by lions, but predation is rarely taken into account in managing the black rhinoceros; this is a major flaw, because predation should be considered when attributing causes to the poor performance of black rhinoceros populations. In 2002 only ten western black rhinos remained in Cameroon, and in 2006 intensive surveys across its putative range failed to locate any, leading to fears that this subspecies had become extinct. In 2011 the IUCN declared the western black rhino extinct. There was a conservation effort in which black rhinos were translocated, but their population did not improve, as they fared poorly in unfamiliar habitat.
Under CITES Appendix I, all international commercial trade in black rhino horn has been prohibited since 1977. China, despite having joined CITES on 8 April 1981, is the largest importer of black rhino horn. This is a trade from which not only the traffickers benefit, but also the nation states that ignore them. Meanwhile, rhinos continue to be removed from their natural environment, leaving them dependent on human intervention to save them from endangerment. Parks and reserves have been established to protect the rhinos, with armed guards keeping watch, but even so many poachers get through and harm the rhinos for their horns. Many have considered removing rhino horns in order to deter poachers from slaughtering these animals, or potentially moving them to other breeding grounds such as the US and Australia. The method of removing the horn, known as dehorning, consists of tranquilizing the rhino and then sawing the horn almost completely off to reduce the incentive for poaching, although its effectiveness in reducing poaching is not known, and rhino mothers are known to use their horns to fend off predators.
The only rhino subspecies that has recovered somewhat from the brink of extinction is the southern white rhinoceros, whose numbers now are estimated around 14,500, up from fewer than 50 in the first decade of the 20th century.
There is, however, some hope for the black rhinoceros: gametes have been recovered from rhinos that died in captivity, with promising results for producing black rhinoceros embryos, which can be used for testing sperm in vitro.
A January 2014 auction for a permit to hunt a black rhinoceros in Namibia sold for $350,000 at a fundraiser hosted by the Dallas Safari Club. The auction drew considerable criticism as well as death threats directed towards members of the club and the man who purchased the permit. This permit was issued for 1 of 18 black rhinoceros specifically identified by Namibia's Ministry of Environment and Tourism as being past breeding age and considered a threat to younger rhinos. The $350,000 that the hunter paid for the permit was used by the Namibian government to fund anti-poaching efforts in the country.
In 2022 South Africa granted permits to hunt 10 black rhinos, stating that the population is growing.
Threats
Today, black rhinos face various threats, including habitat changes, illegal poaching, and competing species. Civil disturbances, such as war, have had notably negative effects on black rhinoceros populations since the 1960s in countries including, but not limited to, Chad, Cameroon, Rwanda, Mozambique, and Somalia. In the Addo Elephant National Park in South Africa, the African bush elephant (Loxodonta africana) poses a slight concern for the black rhinoceroses that also inhabit the area. Both animals are browsers; however, the elephant has a wider foraging capacity, while the black rhinoceros primarily sticks to dwarf shrubs. The black rhinoceros has been found to eat grass as well; even so, the narrowing of its range of available food could be potentially problematic.
Black rhinos also face problems associated with the minerals they ingest. Over their evolutionary history they have become adapted to ingesting little iron in the wild, which poses a problem when they are placed in captivity: these rhinoceroses can overload on iron, which leads to buildup in the lungs, liver, spleen and small intestine.
These rhinoceroses face threats not only in the wild but also in captivity, where they have become more susceptible to disease, with high rates of mortality.
Illegal poaching for the international rhino horn trade is the main and most detrimental threat. The killing of these animals is not unique to modern-day society. The Chinese have maintained reliable documents of these happenings dating back to 1200 B.C. The ancient Chinese often hunted rhino horn for the making of wine cups, as well as the rhino's skin to manufacture imperial crowns, belts and armor for soldiers. A major market for rhino horn has historically been in the Middle East nations to make ornately carved handles for ceremonial daggers called jambiyas. Demand for these exploded in the 1970s, causing the black rhinoceros population to decline 96% between 1970 and 1992. The horn is also used in traditional Chinese medicine, and is said by herbalists to be able to revive comatose patients, facilitate exorcisms and various methods of detoxification, and cure fevers. It is also hunted for the Chinese superstitious belief that the horns allow direct access to Heaven due to their unique location and hollow nature. The purported effectiveness of the use of rhino horn in treating any illness has not been confirmed, or even suggested, by medical science. In June 2007, the first-ever documented case of the medicinal sale of black rhino horn in the United States (confirmed by genetic testing of the confiscated horn) occurred at a traditional Chinese medicine supply store in Portland, Oregon's Chinatown.
| Biology and health sciences | Perissodactyla | Animals |
374163 | https://en.wikipedia.org/wiki/Crevasse | Crevasse | A crevasse is a deep crack that forms in a glacier or ice sheet. Crevasses form as a result of the shear stress generated when two semi-rigid sections of ice moving over a plastic substrate travel at different rates; when the shear stress becomes intense enough, the ice breaks along the faces.
Description
Crevasses often have vertical or near-vertical walls, which can then melt and create seracs, arches, and other ice formations. These walls sometimes expose layers that represent the glacier's stratigraphy. Crevasse size often depends upon the amount of liquid water present in the glacier. A crevasse may be as deep as and as wide as
The presence of water in a crevasse can significantly increase its penetration. Water-filled crevasses may reach the bottom of glaciers or ice sheets and provide a direct hydrologic connection between the surface, where significant summer melting occurs, and the bed of the glacier, where additional water may moisten and lubricate the bed and accelerate ice flow. Direct drains of water from the top of a glacier, known as moulins, can also contribute to the lubrication and acceleration of ice flow.
Types
Longitudinal crevasses form parallel to flow where the glacier width is expanding. They develop in areas of tensile stress, such as where a valley widens or bends. They are typically concave down and form an angle greater than 45° with the margin.
Splaying crevasses appear along the edges of a glacier and result from shear stress from the margin of the glacier and longitudinal compressing stress from lateral extension. They extend from the glacier's margin and are concave up with respect to glacier flow, making an angle less than 45° with the margin.
Transverse crevasses are the most common crevasse type. They form in a zone of longitudinal extension where the principal stresses are parallel to the direction of glacier flow, creating extensional tensile stress. These crevasses stretch across the glacier transverse to the flow direction, or cross-glacier. They generally form where a valley becomes steeper.
Dangers
Falling into glacial crevasses can be dangerous and life-threatening. Some glacial crevasses (such as on the Khumbu Icefall at Mount Everest) can be deep, which can cause fatal injuries upon falling. Hypothermia is often a cause of death when falling into a crevasse.
A crevasse may be covered, but not necessarily filled, by a snow bridge made of the previous years' accumulation and snow drifts. The result is that crevasses are rendered invisible, and thus potentially lethal to anyone attempting to navigate across a glacier. Occasionally a snow bridge over an old crevasse may begin to sag, providing some landscape relief, but this cannot be relied upon.
The danger of falling into a crevasse can be minimized by roping together multiple climbers into a rope team, and the use of friction knots.
| Physical sciences | Glaciology | Earth science |
374215 | https://en.wikipedia.org/wiki/Programmed%20cell%20death | Programmed cell death | Programmed cell death (PCD; sometimes referred to as cellular suicide) is the death of a cell as a result of events inside of a cell, such as apoptosis or autophagy. PCD is carried out in a biological process, which usually confers advantage during an organism's lifecycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits are separate. PCD serves fundamental functions during both plant and animal tissue development.
Apoptosis and autophagy are both forms of programmed cell death. Necrosis is the death of a cell caused by external factors such as trauma or infection and occurs in several different forms. Necrosis was long seen as a non-physiological process that occurs as a result of infection or injury, but in the 2000s, a form of programmed necrosis, called necroptosis, was recognized as an alternative form of programmed cell death. It is hypothesized that necroptosis can serve as a cell-death backup to apoptosis when the apoptosis signaling is blocked by endogenous or exogenous factors such as viruses or mutations. Most recently, other types of regulated necrosis have been discovered as well, which share several signaling events with necroptosis and apoptosis.
History
The concept of "programmed cell-death" was used by Lockshin & Williams in 1964 in relation to insect tissue development, around eight years before "apoptosis" was coined. The term PCD has, however, been a source of confusion and Durand and Ramsey have developed the concept by providing mechanistic and evolutionary definitions. PCD has become the general terms that refers to all the different types of cell death that have a genetic component.
The first insight into the mechanism came from studying BCL2, the product of a putative oncogene activated by chromosome translocations often found in follicular lymphoma. Unlike other cancer genes, which promote cancer by stimulating cell proliferation, BCL2 promoted cancer by stopping lymphoma cells from being able to kill themselves.
PCD has been the subject of increasing attention and research efforts. This trend has been highlighted with the award of the 2002 Nobel Prize in Physiology or Medicine to Sydney Brenner (United Kingdom), H. Robert Horvitz (US) and John E. Sulston (UK).
Types
Apoptosis or Type I cell-death.
Autophagic or Type II cell-death. (Cytoplasmic: characterized by the formation of large vacuoles that eat away organelles in a specific sequence prior to the destruction of the nucleus.)
Apoptosis
Apoptosis is the process of programmed cell death (PCD) that may occur in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation. It is now thought that, in a developmental context, cells are induced to positively commit suicide, whilst in a homeostatic context the absence of certain survival factors may provide the impetus for suicide. There appears to be some variation in the morphology and indeed the biochemistry of these suicide pathways; some tread the path of "apoptosis", others follow a more generalized pathway to deletion, but both are usually genetically and synthetically motivated. There is some evidence that certain symptoms of "apoptosis", such as endonuclease activation, can be spuriously induced without engaging a genetic cascade; however, presumably true apoptosis and programmed cell death must be genetically mediated. It is also becoming clear that mitosis and apoptosis are toggled or linked in some way, and that the balance achieved depends on signals received from appropriate growth or survival factors.
Extrinsic vs. intrinsic pathways
There are two different pathways that may be followed when apoptosis is needed: the extrinsic pathway and the intrinsic pathway. Both involve caspases, which are crucial to cell death.
Extrinsic pathway
The extrinsic pathway involves a specific receptor–ligand interaction: either the Fas ligand binds to the Fas receptor, or the TNF-alpha ligand binds to the TNF receptor. In both situations an initiator caspase is activated. The extrinsic pathway can thus be triggered in two ways: through Fas ligand or TNF-alpha binding, or through a cytotoxic T-cell. A cytotoxic T-cell can attach itself to the target cell membrane and release perforin, which creates pores in the membrane; granzyme B enters the target cell through these pores and leads to the activation of caspases. The initiator caspase can cleave inactive caspase 3, converting it into cleaved (active) caspase 3, the final molecule needed to trigger cell death.
Intrinsic pathway
The intrinsic pathway is triggered by cell damage such as DNA damage or UV exposure. This pathway takes place at the mitochondria and is mediated by sensors of the Bcl-2 family together with two proteins called BAX and BAK. These proteins are found in the majority of higher mammals and are able to pierce the mitochondrial outer membrane, making them an integral part of mediating cell death by apoptosis. They do this by orchestrating the formation of pores within the membrane, which is essential for the release of cytochrome c; cytochrome c is only released if the mitochondrial membrane is compromised. Once cytochrome c is detected, the apoptosome complex forms. This complex activates the executioner caspases, which cause cell death. This killing of cells is important because it prevents the cellular overgrowth that can result in diseases such as cancer. Two other proteins, Bcl-2 and Bcl-xL, inhibit the release of cytochrome c from the mitochondria; they are anti-apoptotic and therefore prevent cell death. A mutation, the translocation between chromosomes 14 and 18, can cause overactivity of Bcl-2, and this overactivity can result in the development of follicular lymphoma.
Autophagy
Macroautophagy, often referred to as autophagy, is a catabolic process that results in the autophagosomic-lysosomal degradation of bulk cytoplasmic contents, abnormal protein aggregates, and excess or damaged organelles.
Autophagy is generally activated by conditions of nutrient deprivation but has also been associated with physiological as well as pathological processes such as development, differentiation, neurodegenerative diseases, stress, infection and cancer.
Mechanism
A critical regulator of autophagy induction is the kinase mTOR, which, when activated, suppresses autophagy and, when not activated, promotes it. Three related serine/threonine kinases, UNC-51-like kinases 1, 2, and 3 (ULK1, ULK2, ULK3), which play a similar role to the yeast Atg1, act downstream of the mTOR complex. ULK1 and ULK2 form a large complex with the mammalian homolog of an autophagy-related (Atg) gene product (mAtg13) and the scaffold protein FIP200. The class III PI3K complex, containing hVps34, Beclin-1, p150 and Atg14-like protein or ultraviolet irradiation resistance-associated gene (UVRAG), is required for the induction of autophagy.
The ATG genes control the autophagosome formation through ATG12-ATG5 and LC3-II (ATG8-II) complexes. ATG12 is conjugated to ATG5 in a ubiquitin-like reaction that requires ATG7 and ATG10. The Atg12–Atg5 conjugate then interacts non-covalently with ATG16 to form a large complex. LC3/ATG8 is cleaved at its C terminus by ATG4 protease to generate the cytosolic LC3-I. LC3-I is conjugated to phosphatidylethanolamine (PE) also in a ubiquitin-like reaction that requires Atg7 and Atg3. The lipidated form of LC3, known as LC3-II, is attached to the autophagosome membrane.
Autophagy and apoptosis are connected both positively and negatively, and extensive crosstalk exists between the two. During nutrient deficiency, autophagy functions as a pro-survival mechanism, however, excessive autophagy may lead to cell death, a process morphologically distinct from apoptosis. Several pro-apoptotic signals, such as TNF, TRAIL, and FADD, also induce autophagy. Additionally, Bcl-2 inhibits Beclin-1-dependent autophagy, thereby functioning both as a pro-survival and as an anti-autophagic regulator.
Other types
Besides the above two types of PCD, other pathways have been discovered.
Called "non-apoptotic programmed cell-death" (or "caspase-independent programmed cell-death" or "necroptosis"), these alternative routes to death are as efficient as apoptosis and can function as either backup mechanisms or the main type of PCD.
Other forms of programmed cell death include anoikis, almost identical to apoptosis except in its induction; cornification, a form of cell death exclusive to the epidermis; excitotoxicity; ferroptosis, an iron-dependent form of cell death and Wallerian degeneration.
Necroptosis is a programmed form of necrosis, or inflammatory cell death. Conventionally, necrosis is associated with unprogrammed cell death resulting from cellular damage or infiltration by pathogens, in contrast to orderly, programmed cell death via apoptosis. Nemosis is another programmed form of necrosis that takes place in fibroblasts.
Eryptosis is a form of suicidal erythrocyte death.
Aponecrosis is a hybrid of apoptosis and necrosis and refers to an incomplete apoptotic process that is completed by necrosis.
NETosis is the process of cell-death generated by neutrophils, resulting in NETs.
Paraptosis is another type of nonapoptotic cell death that is mediated by MAPK through the activation of IGF-1. It is characterized by the intracellular formation of vacuoles and swelling of mitochondria.
Pyroptosis, an inflammatory type of cell death, is uniquely mediated by caspase 1, an enzyme not involved in apoptosis, in response to infection by certain microorganisms.
Plant cells undergo particular processes of PCD similar to autophagic cell death. However, some common features of PCD are highly conserved in both plants and metazoa.
Atrophic factors
An atrophic factor is a force that causes a cell to die. Only natural forces on the cell are considered to be atrophic factors, whereas, for example, agents of mechanical or chemical abuse or lysis of the cell are considered not to be atrophic factors. Common types of atrophic factors are:
Decreased workload
Loss of innervation
Diminished blood supply
Inadequate nutrition
Loss of endocrine stimulation
Senility
Compression
Role in the development of the nervous system
The initial expansion of the developing nervous system is counterbalanced by the removal of neurons and their processes. During the development of the nervous system almost 50% of developing neurons are naturally removed by programmed cell death (PCD). PCD in the nervous system was first recognized in 1896 by John Beard. Since then several theories were proposed to understand its biological significance during neural development.
Role in neural development
PCD in the developing nervous system has been observed in proliferating as well as post-mitotic cells. One theory suggests that PCD is an adaptive mechanism to regulate the number of progenitor cells. In humans, PCD in progenitor cells starts at gestational week 7 and remains until the first trimester. This process of cell death has been identified in the germinal areas of the cerebral cortex, cerebellum, thalamus, brainstem, and spinal cord among other regions. At gestational weeks 19–23, PCD is observed in post-mitotic cells. The prevailing theory explaining this observation is the neurotrophic theory which states that PCD is required to optimize the connection between neurons and their afferent inputs and efferent targets. Another theory proposes that developmental PCD in the nervous system occurs in order to correct for errors in neurons that have migrated ectopically, innervated incorrect targets, or have axons that have gone awry during path finding. It is possible that PCD during the development of the nervous system serves different functions determined by the developmental stage, cell type, and even species.
The neurotrophic theory
The neurotrophic theory is the leading hypothesis used to explain the role of programmed cell death in the developing nervous system. It postulates that in order to ensure optimal innervation of targets, a surplus of neurons is first produced which then compete for limited quantities of protective neurotrophic factors and only a fraction survive while others die by programmed cell death. Furthermore, the theory states that predetermined factors regulate the amount of neurons that survive and the size of the innervating neuronal population directly correlates to the influence of their target field.
The underlying idea that target cells secrete attractive or inducing factors and that their growth cones have a chemotactic sensitivity was first put forth by Santiago Ramon y Cajal in 1892. Cajal presented the idea as an explanation for the "intelligent force" axons appear to take when finding their target but admitted that he had no empirical data. The theory gained more attraction when experimental manipulation of axon targets yielded death of all innervating neurons. This developed the concept of target derived regulation which became the main tenet in the neurotrophic theory. Experiments that further supported this theory led to the identification of the first neurotrophic factor, nerve growth factor (NGF).
Peripheral versus central nervous system
Different mechanisms regulate PCD in the peripheral nervous system (PNS) versus the central nervous system (CNS). In the PNS, innervation of the target is proportional to the amount of the target-released neurotrophic factors NGF and NT3. Expression of neurotrophin receptors, TrkA and TrkC, is sufficient to induce apoptosis in the absence of their ligands. Therefore, it is speculated that PCD in the PNS is dependent on the release of neurotrophic factors and thus follows the concept of the neurotrophic theory.
Programmed cell death in the CNS is not dependent on external growth factors but instead relies on intrinsically derived cues. In the neocortex, a 4:1 ratio of excitatory to inhibitory interneurons is maintained by apoptotic machinery that appears to be independent of the environment. Supporting evidence came from an experiment where interneuron progenitors were either transplanted into the mouse neocortex or cultured in vitro. Transplanted cells died at the age of two weeks, the same age at which endogenous interneurons undergo apoptosis. Regardless of the size of the transplant, the fraction of cells undergoing apoptosis remained constant. Furthermore, disruption of TrkB, a receptor for brain derived neurotrophic factor (Bdnf), did not affect cell death. It has also been shown that in mice null for the proapoptotic factor Bax (Bcl-2-associated X protein) a larger percentage of interneurons survived compared to wild type mice. Together these findings indicate that programmed cell death in the CNS partly exploits Bax-mediated signaling and is independent of BDNF and the environment. Apoptotic mechanisms in the CNS are still not well understood, yet it is thought that apoptosis of interneurons is a self-autonomous process.
Nervous system development in its absence
Programmed cell death can be reduced or eliminated in the developing nervous system by the targeted deletion of pro-apoptotic genes or by the overexpression of anti-apoptotic genes. The absence or reduction of PCD can cause serious anatomical malformations but can also result in minimal consequences depending on the gene targeted, neuronal population, and stage of development. Excess progenitor cell proliferation that leads to gross brain abnormalities is often lethal, as seen in caspase-3 or caspase-9 knockout mice which develop exencephaly in the forebrain. The brainstem, spinal cord, and peripheral ganglia of these mice develop normally, however, suggesting that the involvement of caspases in PCD during development depends on the brain region and cell type. Knockout or inhibition of apoptotic protease activating factor 1 (APAF1), also results in malformations and increased embryonic lethality. Manipulation of apoptosis regulator proteins Bcl-2 and Bax (overexpression of Bcl-2 or deletion of Bax) produces an increase in the number of neurons in certain regions of the nervous system such as the retina, trigeminal nucleus, cerebellum, and spinal cord. However, PCD of neurons due to Bax deletion or Bcl-2 overexpression does not result in prominent morphological or behavioral abnormalities in mice. For example, mice overexpressing Bcl-2 have generally normal motor skills and vision and only show impairment in complex behaviors such as learning and anxiety. The normal behavioral phenotypes of these mice suggest that an adaptive mechanism may be involved to compensate for the excess neurons.
Invertebrates and vertebrates
Learning about PCD in various species is essential in understanding the evolutionary basis and reason for apoptosis in development of the nervous system. During the development of the invertebrate nervous system, PCD plays different roles in different species. The similarity of the asymmetric cell death mechanism in the nematode and the leech indicates that PCD may have an evolutionary significance in the development of the nervous system. In the nematode, PCD occurs in the first hour of development leading to the elimination of 12% of non-gonadal cells including neuronal lineages. Cell death in arthropods occurs first in the nervous system when ectoderm cells differentiate and one daughter cell becomes a neuroblast and the other undergoes apoptosis. Furthermore, sex targeted cell death leads to different neuronal innervation of specific organs in males and females. In Drosophila, PCD is essential in segmentation and specification during development.
In contrast to invertebrates, the mechanism of programmed cell death is found to be more conserved in vertebrates. Extensive studies performed on various vertebrates show that PCD of neurons and glia occurs in most parts of the nervous system during development. It has been observed before and during synaptogenesis in the central nervous system as well as the peripheral nervous system. However, there are a few differences between vertebrate species. For example, mammals exhibit extensive arborization followed by PCD in the retina while birds do not. Although synaptic refinement in vertebrate systems is largely dependent on PCD, other evolutionary mechanisms also play a role.
In plant tissue
Programmed cell death in plants has a number of molecular similarities to animal apoptosis, but it also has differences, the most obvious being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies.
In "APL regulates vascular tissue identity in Arabidopsis", Martin Bonke and his colleagues had stated that one of the two long-distance transport systems in vascular plants, xylem, consists of several cell-types "the differentiation of which involves deposition of elaborate cell-wall thickenings and programmed cell-death." The authors emphasize that the products of plant PCD play an important structural role.
Basic morphological and biochemical features of PCD have been conserved in both plant and animal kingdoms. Specific types of plant cells carry out unique cell-death programs. These have common features with animal apoptosis—for instance, nuclear DNA degradation—but they also have their own peculiarities, such as nuclear degradation triggered by the collapse of the vacuole in tracheary elements of the xylem.
Janneke Balk and Christopher J. Leaver, of the Department of Plant Sciences, University of Oxford, carried out research on mutations in the mitochondrial genome of sun-flower cells. Results of this research suggest that mitochondria play the same key role in vascular plant PCD as in other eukaryotic cells.
PCD in pollen prevents inbreeding
During pollination, plants enforce self-incompatibility (SI) as an important means to prevent self-fertilization. Research on the corn poppy (Papaver rhoeas) has revealed that proteins in the pistil on which the pollen lands, interact with pollen and trigger PCD in incompatible (i.e., self) pollen. The researchers, Steven G. Thomas and Vernonica E. Franklin-Tong, also found that the response involves rapid inhibition of pollen-tube growth, followed by PCD.
In slime molds
The social slime mold Dictyostelium discoideum has the peculiarity of either adopting a predatory amoeba-like behavior in its unicellular form or coalescing into a mobile slug-like form when dispersing the spores that will give birth to the next generation.
The stalk is composed of dead cells that have undergone a type of PCD that shares many features of an autophagic cell-death: massive vacuoles forming inside cells, a degree of chromatin condensation, but no DNA fragmentation. The structural role of the residues left by the dead cells is reminiscent of the products of PCD in plant tissue.
D. discoideum is a slime mold, part of a branch that might have emerged from eukaryotic ancestors about a billion years before the present. It seems that they emerged after the ancestors of green plants and the ancestors of fungi and animals had differentiated. But, in addition to their place in the evolutionary tree, the fact that PCD has been observed in the humble, simple, six-chromosome D. discoideum has additional significance: It permits the study of a developmental PCD path that does not depend on caspases characteristic of apoptosis.
Evolutionary origin of mitochondrial apoptosis
The occurrence of programmed cell death in protists is possible, but it remains controversial. Some categorize death in those organisms as unregulated apoptosis-like cell death.
Biologists had long suspected that mitochondria originated from bacteria that had been incorporated as endosymbionts ("living together inside") of larger eukaryotic cells. It was Lynn Margulis who from 1967 on championed this theory, which has since become widely accepted. The most convincing evidence for this theory is the fact that mitochondria possess their own DNA and are equipped with genes and replication apparatus.
This evolutionary step would have been risky for the primitive eukaryotic cells, which began to engulf the energy-producing bacteria, as well as a perilous step for the ancestors of mitochondria, which began to invade their proto-eukaryotic hosts. This process is still evident today, between human white blood cells and bacteria. Most of the time, invading bacteria are destroyed by the white blood cells; however, it is not uncommon for the chemical warfare waged by prokaryotes to succeed, with the consequence known as infection by its resulting damage.
One of these rare evolutionary events, about two billion years before the present, made it possible for certain eukaryotes and energy-producing prokaryotes to coexist and mutually benefit from their symbiosis.
Mitochondriate eukaryotic cells live poised between life and death, because mitochondria still retain their repertoire of molecules that can trigger cell suicide. It is not clear why apoptotic machinery is maintained in extant unicellular organisms. This process has now evolved to happen only when programmed: in response to certain signals to cells (such as feedback from neighbors, stress or DNA damage), mitochondria release caspase activators that trigger the cell-death-inducing biochemical cascade. As such, the cell suicide mechanism is now crucial to all of our lives.
DNA damage and apoptosis
Repair of DNA damage and apoptosis are two enzymatic processes essential for maintaining genome integrity in humans. Cells that are deficient in DNA repair tend to accumulate DNA damage, and when such cells are also defective in apoptosis they tend to survive even with excess DNA damage. Replication of DNA in such cells leads to mutations, and these mutations may cause cancer. Several enzymatic pathways have evolved for repairing different kinds of DNA damage, and in five well-studied DNA repair pathways particular enzymes have a dual role: one role is to participate in the repair of a specific class of damage, and the second is to induce apoptosis if the level of such DNA damage is beyond the cell's repair capability. These dual-role proteins tend to protect against the development of cancer. Proteins that function in such a dual role for each repair process are: (1) DNA mismatch repair, MSH2, MSH6, MLH1 and PMS2; (2) base excision repair, APEX1 (REF1/APE), poly(ADP-ribose) polymerase (PARP); (3) nucleotide excision repair, XPB, XPD (ERCC2), p53, p33(ING1b); (4) non-homologous end joining, the catalytic subunit of DNA-PK; (5) homologous recombinational repair, BRCA1, ATM, ATR, WRN, BLM, Tip60, p53.
Programmed death of entire organisms
Clinical significance
ABL
The BCR-ABL oncogene has been found to be involved in the development of cancer in humans.
c-Myc
c-Myc is involved in the regulation of apoptosis via its role in downregulating the Bcl-2 gene. It also plays a role in the disordered growth of tissue.
Metastasis
A molecular characteristic of metastatic cells is their altered expression of several apoptotic genes.
| Biology and health sciences | Cell processes | null |
374220 | https://en.wikipedia.org/wiki/Inverse%20trigonometric%20functions | Inverse trigonometric functions | In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry.
Notation
Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships:
when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, the arccosine of x is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan.
The notations , , , etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established , , – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: However, this might appear to conflict logically with the common semantics for expressions such as (although only , without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function.
The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, . Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “” superscript: , , , etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by , , etc., or, better, by , , etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case.
Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions.
Basic concepts
Principal values
Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. strict) subsets of the domains of the original functions.
For example, interpreting these in the sense of multivalued functions, just as the square root function y = √x could be defined from y² = x, the function y = arcsin(x) is defined so that sin(y) = x. For a given real number x, with −1 ≤ x ≤ 1, there are multiple (in fact, countably infinitely many) numbers y such that sin(y) = x; for example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain, the expression arcsin(x) will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions.
The principal inverses, with their domains and ranges of principal values, are as follows:
arcsine (arcsin): domain −1 ≤ x ≤ 1, range −π/2 ≤ y ≤ π/2
arccosine (arccos): domain −1 ≤ x ≤ 1, range 0 ≤ y ≤ π
arctangent (arctan): domain all real numbers, range −π/2 < y < π/2
arccotangent (arccot): domain all real numbers, range 0 < y < π
arcsecant (arcsec): domain |x| ≥ 1, range 0 ≤ y ≤ π with y ≠ π/2
arccosecant (arccsc): domain |x| ≥ 1, range −π/2 ≤ y ≤ π/2 with y ≠ 0
Note: Some authors define the range of arcsecant to be 0 ≤ y < π/2 or π ≤ y < 3π/2, because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan(arcsec(x)) = √(x² − 1), whereas with the range 0 ≤ y < π/2 or π/2 < y ≤ π we would have to write tan(arcsec(x)) = ±√(x² − 1), since tangent is nonnegative on 0 ≤ y < π/2 but nonpositive on π/2 < y ≤ π. For a similar reason, the same authors define the range of arccosecant to be −π < y ≤ −π/2 or 0 < y ≤ π/2.
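As a concrete illustration of principal values, a minimal Python sketch (standard library only; the sample points are arbitrary) shows that the library functions return exactly the principal value, so the inverse only recovers an angle that already lies in the principal range:

```python
import math

# Principal values returned by the standard asin/acos/atan functions.
print(math.asin(0.5))             # ~0.5236  (= pi/6, in [-pi/2, pi/2])
print(math.acos(0.5))             # ~1.0472  (= pi/3, in [0, pi])
print(math.atan(1.0))             # ~0.7854  (= pi/4, in (-pi/2, pi/2))

# sin(asin(x)) recovers x, but asin(sin(theta)) only recovers theta
# when theta already lies in the principal range [-pi/2, pi/2].
theta = 2.5                        # outside [-pi/2, pi/2]
print(math.sin(math.asin(0.5)))    # ~0.5
print(math.asin(math.sin(theta)))  # ~0.6416 (= pi - 2.5), not 2.5
```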
Domains
If x is allowed to be a complex number, then the range of y given above applies only to its real part.
Solutions to elementary trigonometric equations
Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2π:
Sine and cosecant begin their period at 2πk − π/2 (where k is an integer), finish it at 2πk + π/2, and then reverse themselves over 2πk + π/2 to 2πk + 3π/2.
Cosine and secant begin their period at 2πk, finish it at 2πk + π, and then reverse themselves over 2πk + π to 2πk + 2π.
Tangent begins its period at 2πk − π/2, finishes it at 2πk + π/2, and then repeats it (forward) over 2πk + π/2 to 2πk + 3π/2.
Cotangent begins its period at 2πk, finishes it at 2πk + π, and then repeats it (forward) over 2πk + π to 2πk + 2π.
This periodicity is reflected in the general inverses, where k is some integer.
The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions.
It is assumed that the given values and all lie within appropriate ranges so that the relevant expressions below are well-defined.
Note that "for some " is just another way of saying "for some integer "
The symbol is logical equality and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote for more details and an example illustrating this concept).
where the first four solutions can be written in expanded form as:
For example, if then for some While if then for some where will be even if and it will be odd if The equations and have the same solutions as and respectively. In all equations above for those just solved (i.e. except for / and /), the integer in the solution's formula is uniquely determined by (for fixed and ).
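For instance, the two solution families of sin θ = y per period can be enumerated from a single arcsine value. The sketch below uses the standard compact form θ = (−1)^k · arcsin(y) + πk (the range of k and the sample value y are illustrative choices):

```python
import math

def sin_solutions(y, k_range=range(-2, 3)):
    """All solutions of sin(theta) = y of the form
    theta = (-1)**k * asin(y) + pi*k, for k in k_range."""
    base = math.asin(y)
    return [(-1) ** k * base + math.pi * k for k in k_range]

y = 0.5
for theta in sin_solutions(y):
    print(f"theta = {theta: .6f}, sin(theta) = {math.sin(theta):.6f}")  # all ~0.5
```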
With the help of integer parity
it is possible to write a solution to that doesn't involve the "plus or minus" symbol:
if and only if for some
And similarly for the secant function,
if and only if for some
where equals when the integer is even, and equals when it's odd.
Detailed example and explanation of the "plus or minus" symbol
The solutions to and involve the "plus or minus" symbol whose meaning is now clarified. Only the solution to will be discussed since the discussion for is the same.
We are given between and we know that there is an angle in some interval that satisfies We want to find this The table above indicates that the solution is
which is a shorthand way of saying that (at least) one of the following statement is true:
for some integer or
for some integer
As mentioned above, if (which by definition only happens when ) then both statements (1) and (2) hold, although with different values for the integer : if is the integer from statement (1), meaning that holds, then the integer for statement (2) is (because ).
However, if then the integer is unique and completely determined by
If (which by definition only happens when ) then (because and so in both cases is equal to ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold).
Having considered the cases and we now focus on the case where and So assume this from now on. The solution to is still
which as before is shorthand for saying that one of statements (1) and (2) is true. However this time, because and statements (1) and (2) are different and furthermore, exactly one of the two equalities holds (not both). Additional information about is needed to determine which one holds. For example, suppose that and that that is known about is that (and nothing more is known). Then
and moreover, in this particular case (for both the case and the case) and so consequently,
This means that could be either or Without additional information it is not possible to determine which of these values has.
An example of some additional information that could determine the value of would be knowing that the angle is above the -axis (in which case ) or alternatively, knowing that it is below the -axis (in which case ).
Equal identical trigonometric functions
Set of all solutions to elementary trigonometric equations
Thus given a single solution to an elementary trigonometric equation (sin θ = y is such an equation, for instance, and because sin(arcsin y) = y always holds, θ = arcsin y is always a solution), the set of all solutions to it are:
Transforming equations
The equations above can be transformed by using the reflection and shift identities:
These formulas imply, in particular, that the following hold:
where swapping swapping and swapping gives the analogous equations for respectively.
So for example, by using the equality the equation can be transformed into which allows for the solution to the equation (where ) to be used; that solution being:
which becomes:
where using the fact that and substituting proves that another solution to is:
The substitution may be used express the right hand side of the above formula in terms of instead of
Relationships between trigonometric functions and inverse trigonometric functions
Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle with one side of length 1 and another side of length x, then applying the Pythagorean theorem and the definitions of the trigonometric ratios. It is worth noting that for arcsecant and arccosecant, the diagram assumes that x is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation.
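A few of these right-triangle relations can be checked numerically. The following sketch (Python standard library only; sample points chosen arbitrarily inside the domains) verifies three of the standard identities:

```python
import math

xs = [-0.9, -0.3, 0.2, 0.7]
for x in xs:
    lhs1, rhs1 = math.cos(math.asin(x)), math.sqrt(1 - x * x)      # cos(arcsin x) = sqrt(1 - x^2)
    lhs2, rhs2 = math.tan(math.asin(x)), x / math.sqrt(1 - x * x)  # tan(arcsin x) = x / sqrt(1 - x^2)
    lhs3, rhs3 = math.sin(math.atan(x)), x / math.sqrt(1 + x * x)  # sin(arctan x) = x / sqrt(1 + x^2)
    assert math.isclose(lhs1, rhs1) and math.isclose(lhs2, rhs2) and math.isclose(lhs3, rhs3)
print("right-triangle identities verified on the sample points")
```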
Relationships among the inverse trigonometric functions
Complementary angles:
Negative arguments:
Reciprocal arguments:
The identities above can be used with (and derived from) the fact that and are reciprocals (i.e. ), as are and and and
Useful identities if one only has a fragment of a sine table:
Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real).
A useful form that follows directly from the table above is
.
It is obtained by recognizing that .
From the half-angle formula tan(θ/2) = sin θ / (1 + cos θ), we get:
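To make a few of these relationships concrete, the sketch below checks the complementary-angle, negative-argument and reciprocal-argument identities, together with one form that follows from the half-angle formula, arcsin(x) = 2·arctan(x / (1 + √(1 − x²))); this particular form is offered as an illustration and is not necessarily the exact display that accompanied the text above:

```python
import math

for x in [0.1, 0.5, 0.9]:
    # Complementary angles: arcsin(x) + arccos(x) = pi/2
    assert math.isclose(math.asin(x) + math.acos(x), math.pi / 2)
    # Negative arguments: arccos(-x) = pi - arccos(x)
    assert math.isclose(math.acos(-x), math.pi - math.acos(x))
    # Reciprocal arguments (x > 0): arctan(1/x) = pi/2 - arctan(x)
    assert math.isclose(math.atan(1 / x), math.pi / 2 - math.atan(x))
    # Half-angle-derived form: arcsin(x) = 2*arctan(x / (1 + sqrt(1 - x^2)))
    assert math.isclose(math.asin(x), 2 * math.atan(x / (1 + math.sqrt(1 - x * x))))
print("identities hold on the sample points")
```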
Arctangent addition formula
The arctangent addition formula states that arctan u + arctan v ≡ arctan((u + v) / (1 − uv)) (mod π, provided uv ≠ 1). This is derived from the tangent addition formula tan(α + β) = (tan α + tan β) / (1 − tan α tan β), by letting α = arctan u and β = arctan v.
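A quick numerical check of the addition formula, including the ±π correction needed when uv > 1, might look like this (Python, standard library only; the numeric values are arbitrary):

```python
import math

def arctan_add(u, v):
    """arctan(u) + arctan(v) via the addition formula; valid as written when u*v < 1."""
    return math.atan((u + v) / (1 - u * v))

u, v = 0.3, 0.4
assert math.isclose(arctan_add(u, v), math.atan(u) + math.atan(v))

# When u*v > 1 the formula needs a correction of +/- pi:
u, v = 2.0, 3.0
assert math.isclose(arctan_add(u, v) + math.pi, math.atan(u) + math.atan(v))
print("addition formula verified")
```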
In calculus
Derivatives of inverse trigonometric functions
The derivatives for complex values of z are as follows:
Only for real values of x:
These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if y = arcsin x, then sin y = x, so dy/dx = 1/cos y = 1/√(1 − sin² y) = 1/√(1 − x²).
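The derivative formulas can also be confirmed numerically with a central finite difference; the step size, test point and tolerance below are illustrative choices rather than prescribed values:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central finite difference, adequate for a rough check.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
pairs = [
    (math.asin, 1 / math.sqrt(1 - x * x)),    # d/dx arcsin x =  1/sqrt(1 - x^2)
    (math.acos, -1 / math.sqrt(1 - x * x)),   # d/dx arccos x = -1/sqrt(1 - x^2)
    (math.atan, 1 / (1 + x * x)),             # d/dx arctan x =  1/(1 + x^2)
]
for f, expected in pairs:
    assert math.isclose(numeric_derivative(f, x), expected, rel_tol=1e-6)
print("derivative formulas confirmed numerically at x = 0.3")
```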
Expression as definite integrals
Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral:
When x equals 1, the integrals with limited domains are improper integrals, but still well-defined.
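To illustrate the integral definitions, a crude midpoint-rule quadrature (a deliberate simplification; any quadrature routine would do) reproduces arcsin and arctan from their standard integrands 1/√(1 − t²) and 1/(1 + t²):

```python
import math

def integrate(f, a, b, n=100_000):
    """Composite midpoint rule -- crude but sufficient for illustration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 0.5
arcsin_x = integrate(lambda t: 1 / math.sqrt(1 - t * t), 0, x)
arctan_x = integrate(lambda t: 1 / (1 + t * t), 0, x)
print(arcsin_x, math.asin(x))   # both ~0.523599
print(arctan_x, math.atan(x))   # both ~0.463648
```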
Infinite series
Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, , as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative in a geometric series, and applying the integral definition above (see Leibniz series).
Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, , , and so on. Another series is given by:
Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series:
(The term in the sum for n = 0 is the empty product, so is 1.)
Alternatively, this can be expressed as
Another series for the arctangent function is given by
where is the imaginary unit.
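Assuming the usual statements of the two series discussed above — the Taylor (Leibniz) series arctan z = Σ (−1)ⁿ z^(2n+1)/(2n+1) for |z| ≤ 1, and Euler's series arctan z = Σ [2^(2n) (n!)² / (2n+1)!] · z^(2n+1)/(1 + z²)^(n+1) — the sketch below compares their convergence at z = 1:

```python
import math
from math import factorial

def arctan_taylor(z, terms):
    # Taylor (Leibniz) series: sum of (-1)^n z^(2n+1)/(2n+1), valid for |z| <= 1.
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def arctan_euler(z, terms):
    # Euler's series: sum of 2^(2n) (n!)^2 / (2n+1)! * z^(2n+1) / (1 + z^2)^(n+1).
    return sum(
        2 ** (2 * n) * factorial(n) ** 2 / factorial(2 * n + 1)
        * z ** (2 * n + 1) / (1 + z * z) ** (n + 1)
        for n in range(terms)
    )

z, terms = 1.0, 20
print(abs(arctan_taylor(z, terms) - math.atan(z)))  # slow: error roughly 1/(2*terms)
print(abs(arctan_euler(z, terms) - math.atan(z)))   # fast: error roughly (1/2)**terms
```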
Continued fractions for arctangent
Two alternatives to the power series for arctangent are these generalized continued fractions:
The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)², with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss, utilizing the Gaussian hypergeometric series.
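The structure described above (odd partial denominators, partial numerators (nz)² after the first) can be evaluated bottom-up to a fixed depth; the sketch below assumes that reading of the continued fraction, with the truncation depth as an illustrative choice:

```python
import math

def arctan_cf(z, depth=30):
    """Bottom-up evaluation of the continued fraction
       arctan z = z / (1 + (1z)^2 / (3 + (2z)^2 / (5 + (3z)^2 / (7 + ...)))),
       truncated after `depth` partial numerators."""
    t = 2 * depth + 1                      # deepest partial denominator kept
    for n in range(depth, 0, -1):
        t = (2 * n - 1) + (n * z) ** 2 / t
    return z / t

for z in [0.3, 1.0]:
    print(z, arctan_cf(z), math.atan(z))   # the two values should agree closely
```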
Indefinite integrals of inverse trigonometric functions
For real and complex values of z:
For real x ≥ 1:
For all real x not between -1 and 1:
The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions:
The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above.
All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above.
Example
Using (i.e. integration by parts), set
Then
which by the simple substitution yields the final result:
Extension to the complex plane
Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets and branch points. One possible way of defining the extension is:
where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from for the upper branch cut and from for the lower branch cut.
The arcsine function may then be defined as:
where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets;
which has the same cut as arcsin;
which has the same cut as arctan;
where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets;
which has the same cut as arcsec.
Logarithmic forms
These functions may also be expressed using complex logarithms. This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts.
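As a sketch of two of these logarithmic forms, the following assumes the standard principal-value identities arcsin z = −i ln(iz + √(1 − z²)) and arctan z = −(i/2) ln((1 + iz)/(1 − iz)), and compares them with Python's cmath implementations at points away from the branch cuts:

```python
import cmath

def arcsin_log(z):
    # arcsin z = -i * ln(i*z + sqrt(1 - z^2)), principal branches throughout.
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

def arctan_log(z):
    # arctan z = -(i/2) * ln((1 + i*z) / (1 - i*z)), principal branches throughout.
    return -0.5j * cmath.log((1 + 1j * z) / (1 - 1j * z))

for z in [0.5, -0.3, 1 + 1j]:
    print(z, arcsin_log(z), cmath.asin(z))   # the two columns should agree
    print(z, arctan_log(z), cmath.atan(z))
```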
Generalization
Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to form a right triangle in the complex plane. Algebraically, this gives us:
or
where is the adjacent side, is the opposite side, and is the hypotenuse. From here, we can solve for .
or
Simply taking the imaginary part works for any real-valued and , but if or is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of also removes from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input , we obtain a formula for one of the inverse trig functions, for a total of six equations. Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation
The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for θ that result from plugging the values into the equations above and simplifying.
The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the and principal branch for every function except arccotangent in the column. Arccotangent in the column will output on its usual principal branch by using the and convention.
In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z, the definitions allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions. It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function.
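A small numerical sketch of this idea (the test values are arbitrary, not from the article): for real adjacent side a and opposite side b, the angle is the imaginary part of the complex logarithm of a + bi, which agrees with atan2(b, a):

```python
# Sketch: the angle of the right triangle formed in the complex plane by
# adjacent side a and opposite side b is Im(log(a + b*i)), matching atan2(b, a).
import cmath, math

for a, b in [(3.0, 4.0), (-2.0, 1.0), (0.5, -0.25)]:
    theta = cmath.log(complex(a, b)).imag   # Im(ln(a + bi)) ignores the hypotenuse
    assert abs(theta - math.atan2(b, a)) < 1e-12
    print(a, b, theta)
```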
Example proof
Using the exponential definition of sine, sin θ = (e^(iθ) − e^(−iθ)) / (2i), and letting ξ = e^(iθ) so that the equation becomes a quadratic in ξ,
(the positive branch is chosen)
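The same algebra can be sketched symbolically (written here with w in place of ξ; the symbol names and the numeric spot check are illustrative, not part of the original proof):

```python
# Sketch of the algebra behind the example proof: write sin(theta) in exponential
# form, substitute w = exp(i*theta), and solve the resulting quadratic for w.
import sympy as sp

z, w = sp.symbols('z w')

# sin(theta) = (w - 1/w) / (2i)  with  w = exp(i*theta); set it equal to z.
solutions = sp.solve(sp.Eq((w - 1/w) / (2 * sp.I), z), w)
print(solutions)   # the two branches  i*z +/- sqrt(1 - z**2)

# Choosing the positive branch and taking theta = -i*log(w) gives the
# logarithmic form of arcsin.
w_pos = sp.I * z + sp.sqrt(1 - z**2)
theta = -sp.I * sp.log(w_pos)
print(sp.N(theta.subs(z, sp.Rational(1, 2))))   # about 0.5236 = pi/6 = arcsin(1/2)
```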
Applications
Finding the angle of a right triangle
Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that
Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: where is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed.
For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows:
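A quick check of this worked example in Python (nothing beyond the numbers already stated above):

```python
# Sketch of the roof example: a drop of 8 feet over a run of 20 feet.
import math

theta = math.atan2(8, 20)          # no hypotenuse needed
print(math.degrees(theta))         # about 21.8 degrees
```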
In computer science and engineering
Two-argument variant of arctangent
The two-argument atan2 function computes the arctangent of y / x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering.
In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows:
It also equals the principal value of the argument of the complex number x + iy.
This limited version of the function above may also be defined using the tangent half-angle formulae as follows:
provided that either x > 0 or y ≠ 0. However this fails if given x ≤ 0 and y = 0 so the expression is unsuitable for computational use.
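The tangent half-angle expression referred to here is, in its standard form, atan2(y, x) = 2·arctan(y / (√(x² + y²) + x)); a short sketch (the formula is supplied here since the article's display is not reproduced, and the sample points are arbitrary) compares it with the library function:

```python
# Sketch: comparing the tangent half-angle expression for atan2 with the library
# function at a few sample points (all satisfying x > 0 or y != 0).
import math

def atan2_half_angle(y, x):
    # atan2(y, x) = 2 * atan(y / (sqrt(x*x + y*y) + x)); fails when x <= 0 and y == 0
    return 2.0 * math.atan(y / (math.hypot(x, y) + x))

for y, x in [(1.0, 2.0), (1.0, -1.0), (-3.0, 0.5), (2.0, 0.0)]:
    assert abs(atan2_half_angle(y, x) - math.atan2(y, x)) < 1e-12
    print(y, x, atan2_half_angle(y, x))
```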
The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. These variations are detailed at atan2.
Arctangent function with location parameter
In many applications the solution y of the equation x = tan y is to come as close as possible to a given value η. The adequate solution is produced by the parameter-modified arctangent function
y = arctan_η(x) := arctan(x) + π · rnd((η − arctan(x)) / π).
The function rnd rounds to the nearest integer.
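A minimal sketch of this modified arctangent (the helper name and test values are illustrative): among all solutions y of tan(y) = x, it returns the one closest to the target η.

```python
# Sketch of a parameter-modified arctangent: among all solutions y of tan(y) = x,
# return the one closest to a given target eta. round() plays the role of "rnd".
import math

def arctan_near(x, eta):
    base = math.atan(x)
    return base + math.pi * round((eta - base) / math.pi)

y = arctan_near(1.0, 10.0)
print(y, math.tan(y))   # y is close to 10 (about 10.21) and tan(y) is 1 up to rounding
```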
Numerical accuracy
For angles near 0 and π, arccosine is ill-conditioned, and similarly arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods.
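A small illustration of the ill-conditioning (a sketch with arbitrary values, not from the article): near x = 1, a tiny perturbation of the input to arccosine produces a comparatively large change in the output angle. Rewriting the computation, for example via atan2, can behave better when the quantity under the square root is known more accurately than x itself.

```python
# Sketch: near x = 1 the arccosine is ill-conditioned -- a tiny input perturbation
# produces a much larger change in the output angle.
import math

x = 1.0 - 1e-12
dx = 1e-15                      # tiny input perturbation
change = math.acos(x - dx) - math.acos(x)
print(math.acos(x), change)     # the output changes by far more than dx

# An equivalent formulation when sqrt(1 - x**2) is available directly:
s = math.sqrt(1.0 - x * x)
print(math.atan2(s, x))         # same angle, computed without acos
```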
| Mathematics | Specific functions | null |
374222 | https://en.wikipedia.org/wiki/Triangular%20matrix | Triangular matrix | In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
Description
A matrix of the form
is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form
is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable L, and an upper or right triangular matrix is commonly denoted with the variable U or R.
A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.
A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.
Examples
The matrix
is lower triangular, and
is upper triangular.
Forward and back substitution
A matrix equation in the form Lx = b or Ux = b is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes x_1, then substitutes that forward into the next equation to solve for x_2, and repeats through to x_n. In an upper triangular matrix, one works backwards, first computing x_n, then substituting that back into the previous equation to solve for x_{n−1}, and repeating through to x_1.
Notice that this does not require inverting the matrix.
Forward substitution
The matrix equation Lx = b can be written as a system of linear equations
Observe that the first equation (ℓ_{1,1} x_1 = b_1) only involves x_1, and thus one can solve for x_1 directly. The second equation only involves x_1 and x_2, and thus can be solved once one substitutes in the already solved value for x_1. Continuing in this way, the k-th equation only involves x_1, …, x_k, and one can solve for x_k using the previously solved values for x_1, …, x_{k−1}. The resulting formulas are:
A matrix equation with an upper triangular matrix U can be solved in an analogous way, only working backwards.
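A minimal sketch of forward substitution following the recurrence above (the example matrix and right-hand side are arbitrary); back substitution for an upper triangular system is the mirror image, looping from the last row upward:

```python
# Sketch of forward substitution for a lower triangular system L x = b.
import numpy as np

def forward_substitution(L, b):
    n = len(b)
    x = np.zeros(n)
    for k in range(n):
        # x_k = (b_k - sum_{j<k} L[k, j] * x_j) / L[k, k]
        x[k] = (b[k] - L[k, :k] @ x[:k]) / L[k, k]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [3.0, 1.0, 0.0],
              [1.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 2.0])
x = forward_substitution(L, b)
print(x, np.allclose(L @ x, b))   # solution recovered without inverting L
```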
Applications
Forward substitution is used in financial bootstrapping to construct a yield curve.
Properties
The transpose of an upper triangular matrix is a lower triangular matrix and vice versa.
A matrix which is both symmetric and triangular is diagonal.
In a similar vein, a matrix which is both normal (meaning A*A = AA*, where A* is the conjugate transpose) and triangular is also diagonal. This can be seen by looking at the diagonal entries of A*A and AA*.
The determinant and permanent of a triangular matrix equal the product of the diagonal entries, as can be checked by direct computation.
In fact more is true: the eigenvalues of a triangular matrix are exactly its diagonal entries. Moreover, each eigenvalue occurs exactly k times on the diagonal, where k is its algebraic multiplicity, that is, its multiplicity as a root of the characteristic polynomial of A. In other words, the characteristic polynomial of a triangular n×n matrix A is exactly
p_A(x) = (x − a_{11})(x − a_{22}) ⋯ (x − a_{nn}),
that is, the unique degree n polynomial whose roots are the diagonal entries of A (with multiplicities).
To see this, observe that xI − A is also triangular and hence its determinant det(xI − A) is the product of its diagonal entries (x − a_{11})(x − a_{22}) ⋯ (x − a_{nn}).
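A quick numerical spot check of these facts (the example entries are arbitrary):

```python
# Sketch: for a triangular matrix the eigenvalues (with multiplicity) are the
# diagonal entries, and the determinant is their product.
import numpy as np

U = np.array([[4.0, 2.0, -1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 3.0]])

print(sorted(np.linalg.eigvals(U)))      # [3.0, 3.0, 4.0] -- the diagonal entries
print(np.linalg.det(U), 4.0 * 3.0 * 3.0) # both equal 36 (up to rounding)
```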
Special forms
Unitriangular matrix
If the entries on the main diagonal of a (lower or upper) triangular matrix are all 1, the matrix is called (lower or upper) unitriangular.
Other names used for these matrices are unit (lower or upper) triangular, or very rarely normed (lower or upper) triangular. However, a unit triangular matrix is not the same as the unit matrix, and a normed triangular matrix has nothing to do with the notion of matrix norm.
All finite unitriangular matrices are unipotent.
Strictly triangular matrix
If all of the entries on the main diagonal of a (lower or upper) triangular matrix are also 0, the matrix is called strictly (lower or upper) triangular.
All finite strictly triangular matrices are nilpotent of index at most n as a consequence of the Cayley-Hamilton theorem.
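A small sketch of the nilpotency claim (the entries are arbitrary): a strictly upper triangular n×n matrix satisfies N^n = 0.

```python
# Sketch: a strictly upper triangular 4x4 matrix N satisfies N**4 = 0.
import numpy as np

N = np.array([[0.0, 1.0, 2.0, 3.0],
              [0.0, 0.0, 4.0, 5.0],
              [0.0, 0.0, 0.0, 6.0],
              [0.0, 0.0, 0.0, 0.0]])

print(np.linalg.matrix_power(N, 4))   # the zero matrix
```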
Atomic triangular matrix
An atomic (lower or upper) triangular matrix is a special form of unitriangular matrix, where all of the off-diagonal elements are zero, except for the entries in a single column. Such a matrix is also called a Frobenius matrix, a Gauss matrix, or a Gauss transformation matrix.
Block triangular matrix
A block triangular matrix is a block matrix (partitioned matrix) that is a triangular matrix.
Upper block triangular
A matrix A is upper block triangular if it can be partitioned into blocks A_{ij} (1 ≤ i, j ≤ k) with square diagonal blocks A_{ii}, such that A_{ij} = 0 for all i > j.
Lower block triangular
A matrix A is lower block triangular if it can be partitioned into blocks A_{ij} (1 ≤ i, j ≤ k) with square diagonal blocks A_{ii}, such that A_{ij} = 0 for all i < j.
Triangularisability
A matrix that is similar to a triangular matrix is referred to as triangularizable. Abstractly, this is equivalent to stabilizing a flag: upper triangular matrices are precisely those that preserve the standard flag, which is given by the standard ordered basis (e_1, …, e_n) and the resulting flag 0 < ⟨e_1⟩ < ⟨e_1, e_2⟩ < ⋯ < ⟨e_1, …, e_n⟩ = K^n. All flags are conjugate (as the general linear group acts transitively on bases), so any matrix that stabilizes a flag is similar to one that stabilizes the standard flag.
Any complex square matrix is triangularizable. In fact, a matrix A over a field containing all of the eigenvalues of A (for example, any matrix over an algebraically closed field) is similar to a triangular matrix. This can be proven by using induction on the fact that A has an eigenvector, by taking the quotient space by the eigenvector and inducting to show that A stabilizes a flag, and is thus triangularizable with respect to a basis for that flag.
A more precise statement is given by the Jordan normal form theorem, which states that in this situation, A is similar to an upper triangular matrix of a very particular form. The simpler triangularization result is often sufficient however, and in any case used in proving the Jordan normal form theorem.
In the case of complex matrices, it is possible to say more about triangularization, namely, that any square matrix A has a Schur decomposition. This means that A is unitarily equivalent (i.e. similar, using a unitary matrix as change of basis) to an upper triangular matrix; this follows by taking an Hermitian basis for the flag.
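A short sketch of a Schur decomposition in practice (using SciPy as an assumed, commonly available library; the random matrix is arbitrary):

```python
# Sketch: scipy's Schur decomposition returns a unitary Z and an upper triangular T
# with A = Z T Z*. The complex form guarantees a genuinely triangular T.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

T, Z = schur(A, output='complex')
print(np.allclose(A, Z @ T @ Z.conj().T))   # True: unitary similarity
print(np.allclose(np.tril(T, -1), 0))       # True: T is upper triangular
```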
Simultaneous triangularisability
A set of matrices A_1, …, A_k are said to be simultaneously triangularisable if there is a basis under which they are all upper triangular; equivalently, if they are upper triangularizable by a single similarity matrix P. Such a set of matrices is more easily understood by considering the algebra of matrices it generates, namely all polynomials in the A_i, denoted K[A_1, …, A_k]. Simultaneous triangularizability means that this algebra is conjugate into the Lie subalgebra of upper triangular matrices, and is equivalent to this algebra being a Lie subalgebra of a Borel subalgebra.
The basic result is that (over an algebraically closed field), the commuting matrices A, B or more generally A_1, …, A_k are simultaneously triangularizable. This can be proven by first showing that commuting matrices have a common eigenvector, and then inducting on dimension as before. This was proven by Frobenius, starting in 1878 for a commuting pair, as discussed at commuting matrices. As for a single matrix, over the complex numbers these can be triangularized by unitary matrices.
The fact that commuting matrices have a common eigenvector can be interpreted as a result of Hilbert's Nullstellensatz: commuting matrices form a commutative algebra over K[x_1, …, x_k] which can be interpreted as a variety in k-dimensional affine space, and the existence of a (common) eigenvalue (and hence a common eigenvector) corresponds to this variety having a point (being non-empty), which is the content of the (weak) Nullstellensatz. In algebraic terms, these operators correspond to an algebra representation of the polynomial algebra in k variables.
This is generalized by Lie's theorem, which shows that any representation of a solvable Lie algebra is simultaneously upper triangularizable, the case of commuting matrices being the abelian Lie algebra case, abelian being a fortiori solvable.
More generally and precisely, a set of matrices A_1, …, A_k is simultaneously triangularisable if and only if the matrix p(A_1, …, A_k)[A_i, A_j] is nilpotent for all polynomials p in k non-commuting variables, where [A_i, A_j] = A_iA_j − A_jA_i is the commutator; for commuting A_i the commutator vanishes so this holds. This was proven by Drazin, Dungey, and Gruenberg in 1951; a brief proof is given by Prasolov in 1994. One direction is clear: if the matrices are simultaneously triangularisable, then [A_i, A_j] is strictly upper triangularizable (hence nilpotent), which is preserved by multiplication by any A_k or combination thereof – it will still have 0s on the diagonal in the triangularizing basis.
Algebras of triangular matrices
Upper triangularity is preserved by many operations:
The sum of two upper triangular matrices is upper triangular.
The product of two upper triangular matrices is upper triangular.
The inverse of an upper triangular matrix, if it exists, is upper triangular.
The product of an upper triangular matrix and a scalar is upper triangular.
Together these facts mean that the upper triangular matrices form a subalgebra of the associative algebra of square matrices for a given size. Additionally, this also shows that the upper triangular matrices can be viewed as a Lie subalgebra of the Lie algebra of square matrices of a fixed size, where the Lie bracket [a, b] is given by the commutator ab − ba. The Lie algebra of all upper triangular matrices is a solvable Lie algebra. It is often referred to as a Borel subalgebra of the Lie algebra of all square matrices.
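A numerical spot check of these closure properties (a sketch with randomly generated matrices, not a proof):

```python
# Sketch: sums, products, and inverses of upper triangular matrices stay upper
# triangular; mixing lower and upper triangular matrices generally does not.
import numpy as np

rng = np.random.default_rng(1)

def random_upper(n):
    # add a multiple of the identity so the matrix is comfortably invertible
    return np.triu(rng.standard_normal((n, n)) + np.eye(n) * 5.0)

def is_upper(M):
    return np.allclose(np.tril(M, -1), 0)

A, B = random_upper(4), random_upper(4)
print(is_upper(A + B), is_upper(A @ B), is_upper(np.linalg.inv(A)))   # True True True
print(is_upper(np.tril(rng.standard_normal((4, 4))) @ A))             # generally False
```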
All these results hold if upper triangular is replaced by lower triangular throughout; in particular the lower triangular matrices also form a Lie algebra. However, operations mixing upper and lower triangular matrices do not in general produce triangular matrices. For instance, the sum of an upper and a lower triangular matrix can be any matrix; the product of a lower triangular with an upper triangular matrix is not necessarily triangular either.
The set of unitriangular matrices forms a Lie group.
The set of strictly upper (or lower) triangular matrices forms a nilpotent Lie algebra, denoted n. This algebra is the derived Lie algebra of b, the Lie algebra of all upper triangular matrices; in symbols, n = [b, b]. In addition, n is the Lie algebra of the Lie group of unitriangular matrices.
In fact, by Engel's theorem, any finite-dimensional nilpotent Lie algebra is conjugate to a subalgebra of the strictly upper triangular matrices, that is to say, a finite-dimensional nilpotent Lie algebra is simultaneously strictly upper triangularizable.
Algebras of upper triangular matrices have a natural generalization in functional analysis which yields nest algebras on Hilbert spaces.
Borel subgroups and Borel subalgebras
The set of invertible triangular matrices of a given kind (lower or upper) forms a group, indeed a Lie group, which is a subgroup of the general linear group of all invertible matrices. A triangular matrix is invertible precisely when its diagonal entries are invertible (non-zero).
Over the real numbers, this group is disconnected, having 2^n components according as each diagonal entry is positive or negative. The identity component is the group of invertible triangular matrices with positive entries on the diagonal, and the group of all invertible triangular matrices is a semidirect product of this group and the group of diagonal matrices with ±1 on the diagonal, corresponding to the components.
The Lie algebra of the Lie group of invertible upper triangular matrices is the set of all upper triangular matrices, not necessarily invertible, and is a solvable Lie algebra. These are, respectively, the standard Borel subgroup B of the Lie group GLn and the standard Borel subalgebra of the Lie algebra gln.
The upper triangular matrices are precisely those that stabilize the standard flag. The invertible ones among them form a subgroup of the general linear group, whose conjugate subgroups are those defined as the stabilizer of some (other) complete flag. These subgroups are Borel subgroups. The group of invertible lower triangular matrices is such a subgroup, since it is the stabilizer of the standard flag associated to the standard basis in reverse order.
The stabilizer of a partial flag obtained by forgetting some parts of the standard flag can be described as a set of block upper triangular matrices (but its elements are not all triangular matrices). The conjugates of such a group are the subgroups defined as the stabilizer of some partial flag. These subgroups are called parabolic subgroups.
Examples
The group of 2×2 upper unitriangular matrices is isomorphic to the additive group of the field of scalars; in the case of complex numbers it corresponds to a group formed of parabolic Möbius transformations; the 3×3 upper unitriangular matrices form the Heisenberg group.
| Mathematics | Linear algebra | null |
374388 | https://en.wikipedia.org/wiki/Uranium%20hexafluoride | Uranium hexafluoride | Uranium hexafluoride, sometimes called hex, is an inorganic compound with the formula UF6. Uranium hexafluoride is a volatile, toxic white solid that is used in the process of enriching uranium, which produces fuel for nuclear reactors and nuclear weapons.
Preparation
Uranium dioxide is converted with hydrofluoric acid (HF) to uranium tetrafluoride:
In samples contaminated with uranium trioxide, the oxyfluoride UO2F2 is produced in the HF step:
The resulting uranium tetrafluoride (UF4) is subsequently oxidized with fluorine to give the hexafluoride:
Properties
Physical properties
At atmospheric pressure, UF6 sublimes at 56.5 °C.
The solid-state structure was determined by neutron diffraction at 77 K and 293 K.
Chemical properties
UF6 reacts with water, releasing hydrofluoric acid. The compound reacts with aluminium, forming a surface layer of AlF3 that resists any further reaction from the compound.
Uranium hexafluoride is a mild oxidant. It is a Lewis acid, as evidenced by its binding of fluoride to form heptafluorouranate(VI), [UF7]−.
Polymeric uranium(VI) fluorides containing organic cations have been isolated and characterized by X-ray diffraction.
Application in the fuel cycle
As one of the most volatile compounds of uranium, uranium hexafluoride is relatively convenient to process and is used in both of the main uranium enrichment methods, namely gaseous diffusion and the gas centrifuge method. Since the triple point of UF6, at 64 °C (147 °F; 337 K) and 152 kPa (22 psi; 1.5 atm), is close to ambient conditions, phase transitions can be achieved with little thermodynamic work.
Fluorine has only a single naturally occurring stable isotope, so isotopologues of differ in their molecular weight based solely on the uranium isotope present. This difference is the basis for the physical separation of isotopes in enrichment.
All the other uranium fluorides are nonvolatile solids that are coordination polymers.
The conversion factor from the mass of uranium hexafluoride ("hex") to the contained uranium mass ("U mass") is 0.676.
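This factor is simply the fraction of the UF6 molecular mass contributed by the uranium atom; a short sketch of the arithmetic (using approximate atomic masses):

```python
# Sketch of where the 0.676 conversion factor comes from: the uranium fraction
# of the UF6 molecular mass (approximate atomic masses).
m_U = 238.03    # natural uranium, dominated by U-238
m_F = 19.00     # fluorine-19
factor = m_U / (m_U + 6 * m_F)
print(round(factor, 3))   # 0.676
```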
Gaseous diffusion requires about 60 times as much energy as the gas centrifuge process: gaseous diffusion-produced nuclear fuel produces 25 times more energy than is used in the diffusion process, while centrifuge-produced fuel produces 1,500 times more energy than is used in the centrifuge process.
In addition to its use in enrichment, uranium hexafluoride has been used in an advanced reprocessing method (fluoride volatility), which was developed in the Czech Republic. In this process, spent nuclear fuel is treated with fluorine gas to transform the oxides or elemental metals into a mixture of fluorides. This mixture is then distilled to separate the different classes of material. Some fission products form nonvolatile fluorides which remain as solids and can then either be prepared for storage as nuclear waste or further processed either by solvation-based methods or electrochemically.
Uranium enrichment produces large quantities of depleted uranium hexafluoride (DUF6 or D-UF6) as a waste product. The long-term storage of DUF6 presents environmental, health, and safety risks because of its chemical instability. When UF6 is exposed to moist air, it reacts with the water in the air to produce UO2F2 (uranyl fluoride) and HF (hydrogen fluoride), both of which are highly corrosive and toxic. In 2005, 686,500 tonnes of DUF6 was housed in 57,122 storage cylinders located near Portsmouth, Ohio; Oak Ridge, Tennessee; and Paducah, Kentucky. Storage cylinders must be regularly inspected for signs of corrosion and leaks. The estimated lifetime of the steel cylinders is measured in decades.
Accidents and disposal
There have been several accidents involving uranium hexafluoride in the US, including a cylinder-filling accident and material release at the Sequoyah Fuels Corporation in 1986, where an estimated 29,500 pounds of gaseous UF6 escaped. The U.S. government has been converting DUF6 to solid uranium oxides for disposal. Such disposal of the entire DUF6 stockpile could cost anywhere from $15 million to $450 million.
| Physical sciences | Halide salts | Chemistry |
374390 | https://en.wikipedia.org/wiki/Anthropocene | Anthropocene | The Anthropocene is a now rejected proposal for the name of a geological epoch that would follow the Holocene, dating from the commencement of significant human impact on Earth up to the present day. It was rejected in 2024 by the International Commission on Stratigraphy in terms of being a defined geologic period. The impacts of humans affect Earth's oceans, geology, geomorphology, landscape, limnology, hydrology, ecosystems and climate. The effects of human activities on Earth can be seen for example in biodiversity loss and climate change. Various start dates for the Anthropocene have been proposed, ranging from the beginning of the Neolithic Revolution (12,000–15,000 years ago), to as recently as the 1960s. The biologist Eugene F. Stoermer is credited with first coining and using the term anthropocene informally in the 1980s; Paul J. Crutzen re-invented and popularized the term. However, in 2024 the International Commission on Stratigraphy (ICS) and the International Union of Geological Sciences (IUGS) rejected the Anthropocene Epoch proposal for inclusion in the Geologic Time Scale.
The Anthropocene Working Group (AWG) of the Subcommission on Quaternary Stratigraphy (SQS) of the ICS voted in April 2016 to proceed towards a formal golden spike (GSSP) proposal to define the Anthropocene epoch in the geologic time scale. The group presented the proposal to the International Geological Congress in August 2016.
In May 2019, the AWG voted in favour of submitting a formal proposal to the ICS by 2021. The proposal located potential stratigraphic markers to the mid-20th century. This time period coincides with the start of the Great Acceleration, a post-World War II time period during which global population growth, pollution and exploitation of natural resources have all increased at a dramatic rate. The Atomic Age also started around the mid-20th century, when the risks of nuclear wars, nuclear terrorism and nuclear accidents increased.
Twelve candidate sites were selected for the GSSP; the sediments of Crawford Lake, Canada were finally proposed, in July 2023, to mark the lower boundary of the Anthropocene, starting with the Crawfordian stage/age in 1950.
In March 2024, after 15 years of deliberation, the Anthropocene Epoch proposal of the AWG was voted down by a wide margin by the SQS, owing largely to its shallow sedimentary record and extremely recent proposed start date. The ICS and the IUGS later formally confirmed, by a near unanimous vote, the rejection of the AWG's Anthropocene Epoch proposal for inclusion in the Geologic Time Scale. The IUGS statement on the rejection concluded: "Despite its rejection as a formal unit of the Geologic Time Scale, Anthropocene will nevertheless continue to be used not only by Earth and environmental scientists, but also by social scientists, politicians and economists, as well as by the public at large. It will remain an invaluable descriptor of human impact on the Earth system."
Development of the concept
An early concept for the Anthropocene was the Noosphere by Vladimir Vernadsky, who in 1938 wrote of "scientific thought as a geological force". Scientists in the Soviet Union appear to have used the term Anthropocene as early as the 1960s to refer to the Quaternary, the most recent geological period.
Ecologist Eugene F. Stoermer subsequently used Anthropocene with a different sense in the 1980s and the term was widely popularised in 2000 by atmospheric chemist Paul J. Crutzen, who regards the influence of human behavior on Earth's atmosphere in recent centuries as so significant as to constitute a new geological epoch.
The term Anthropocene is informally used in scientific contexts. The Geological Society of America entitled its 2011 annual meeting: Archean to Anthropocene: The past is the key to the future. The new epoch has no agreed start-date, but one proposal, based on atmospheric evidence, is to fix the start at the Industrial Revolution (circa 1780), with the invention of the steam engine. Other scientists link the new term to earlier events, such as the rise of agriculture and the Neolithic Revolution (around 12,000 years BP).
Evidence of relative human impact – such as the growing human influence on land use, ecosystems, biodiversity, and species extinction – is substantial; scientists think that human impact has significantly changed (or halted) the growth of biodiversity. Those arguing for earlier dates posit that the proposed Anthropocene may have begun as early as 14,000–15,000 years BP, based on geologic evidence; this has led other scientists to suggest that "the onset of the Anthropocene should be extended back many thousand years"; this would make the Anthropocene essentially synonymous with the current term, Holocene.
Anthropocene Working Group
In 2008, the Stratigraphy Commission of the Geological Society of London considered a proposal to make the Anthropocene a formal unit of geological epoch divisions. A majority of the commission decided the proposal had merit and should be examined further. Independent working groups of scientists from various geological societies began to determine whether the Anthropocene will be formally accepted into the Geological Time Scale.
In January 2015, 26 of the 38 members of the International Anthropocene Working Group published a paper suggesting the Trinity test on 16 July 1945 as the starting point of the proposed new epoch. However, a significant minority supported one of several alternative dates. A March 2015 report suggested either 1610 or 1964 as the beginning of the Anthropocene. Other scholars pointed to the diachronous character of the physical strata of the Anthropocene, arguing that onset and impact are spread out over time, not reducible to a single instant or date of start.
A January 2016 report on the climatic, biological, and geochemical signatures of human activity in sediments and ice cores suggested the era since the mid-20th century should be recognised as a geological epoch distinct from the Holocene.
The Anthropocene Working Group met in Oslo in April 2016 to consolidate evidence supporting the argument for the Anthropocene as a true geologic epoch. Evidence was evaluated and the group voted to recommend Anthropocene as the new geological epoch in August 2016.
In April 2019, the Anthropocene Working Group (AWG) announced that they would vote on a formal proposal to the International Commission on Stratigraphy, to continue the process started at the 2016 meeting. In May 2019, 29 members of the 34 person AWG panel voted in favour of an official proposal to be made by 2021. The AWG also voted with 29 votes in favour of a starting date in the mid 20th century. Ten candidate sites for a Global boundary Stratotype Section and Point have been identified, one of which will be chosen to be included in the final proposal. Possible markers include microplastics, heavy metals, or radioactive nuclei left by tests from thermonuclear weapons.
In November 2021, an alternative proposal that the Anthropocene is a geological event, not an epoch, was published and later expanded in 2022. This challenged the assumption underlying the case for the Anthropocene epoch – the idea that it is possible to accurately assign a precise date of start to highly diachronous processes of human-influenced Earth system change. The argument indicated that finding a single GSSP would be impractical, given human-induced changes in the Earth system occurred at different periods, in different places, and spread under different rates. Under this model, the Anthropocene would have many events marking human-induced impacts on the planet, including the mass extinction of large vertebrates, the development of early farming, land clearance in the Americas, global-scale industrial transformation during the Industrial Revolution, and the start of the Atomic Age. The authors are members of the AWG who had voted against the official proposal of a starting date in the mid-20th century, and sought to reconcile some of the previous models (including Ruddiman and Maslin proposals). They cited Crutzen's original concept, arguing that the Anthropocene is much better and more usefully conceived of as an unfolding geological event, like other major transformations in Earth's history such as the Great Oxidation Event.
In July 2023, the AWG chose Crawford Lake in Ontario, Canada as a site representing the beginning of the proposed new epoch. The sediment in that lake shows a spike in levels of plutonium from hydrogen bomb tests, a key marker the group chose to place the start of the Anthropocene in the 1950s, along with other elevated markers including carbon particles and nitrates from the burning of fossil fuels and widespread application of chemical fertilizers respectively. Had it been approved, the official declaration of the new Anthropocene epoch would have taken place in August 2024, and its first age may have been named Crawfordian after the lake.
Rejection in 2024 vote by IUGS
In March 2024, the New York Times reported on the results of an internal vote held by the IUGS: After nearly 15 years of debate, the proposal to ratify the Anthropocene had been defeated by a 12-to-4 margin, with 2 abstentions. These results did not stem from a dismissal of human impact on the planet, but rather from an inability to constrain the Anthropocene in a geological context. This is because the widely adopted 1950 start date was found to be prone to recency bias. It also overshadowed earlier examples of human impacts, many of which happened in different parts of the world at different times. Although the proposal could be raised again, this would require the entire process of debate to start from the beginning. The results of the vote were officially confirmed by the IUGS and upheld as definitive later that month.
Proposed starting point
Industrial Revolution
Crutzen proposed the Industrial Revolution as the start of the Anthropocene. Lovelock proposes that the Anthropocene began with the first application of the Newcomen steam engine in 1712. The Intergovernmental Panel on Climate Change takes the pre-industrial era (chosen as the year 1750) as the baseline related to changes in long-lived, well mixed greenhouse gases. Although it is apparent that the Industrial Revolution ushered in an unprecedented global human impact on the planet, much of Earth's landscape already had been profoundly modified by human activities. The human impact on Earth has grown progressively, with few substantial slowdowns. A 2024 scientific perspective paper authored by a group of scientists led by William J. Ripple proposed the start of the Anthropocene around 1850, stating it is a "compelling choice ... from a population, fossil fuel, greenhouse gasses, temperature, and land use perspective."
Mid 20th century (Great Acceleration)
In May 2019 the twenty-nine members of the Anthropocene Working Group (AWG) proposed a start date for the Epoch in the mid-20th century, as that period saw "a rapidly rising human population accelerated the pace of industrial production, the use of agricultural chemicals and other human activities. At the same time, the first atomic-bomb blasts littered the globe with radioactive debris that became embedded in sediments and glacial ice, becoming part of the geologic record." The official start-dates, according to the panel, would coincide with either the radionuclides released into the atmosphere from bomb detonations in 1945, or with the Limited Nuclear Test Ban Treaty of 1963.
First atomic bomb (1945)
The peak in radionuclide fallout resulting from atomic bomb testing during the 1950s is another possible date for the beginning of the Anthropocene (the detonation of the first atomic bomb in 1945 or the Partial Nuclear Test Ban Treaty in 1963).
Etymology
The name Anthropocene is a combination of anthropo- from the Ancient Greek ánthrōpos meaning 'human' and -cene from kainós meaning 'new' or 'recent'.
As early as 1873, the Italian geologist Antonio Stoppani acknowledged the increasing power and effect of humanity on the Earth's systems and referred to an 'anthropozoic era'.
Nature of human effects
Biodiversity loss
The human impact on biodiversity forms one of the primary attributes of the Anthropocene. Humankind has entered what is sometimes called the Earth's sixth major extinction. Most experts agree that human activities have accelerated the rate of species extinction. The exact rate remains controversial – perhaps 100 to 1000 times the normal background rate of extinction.
Anthropogenic extinctions started as humans migrated out of Africa over 60,000 years ago. Increases in global rates of extinction have been elevated above background rates since at least 1500, and appear to have accelerated in the 19th century and further since. Rapid economic growth is considered a primary driver of the contemporary displacement and eradication of other species.
According to the 2021 Economics of Biodiversity review, written by Partha Dasgupta and published by the UK government, "biodiversity is declining faster than at any time in human history." A 2022 scientific review published in Biological Reviews confirms that an anthropogenic sixth mass extinction event is currently underway. A 2022 study published in Frontiers in Ecology and the Environment, which surveyed more than 3,000 experts, states that the extinction crisis could be worse than previously thought, and estimates that roughly 30% of species "have been globally threatened or driven extinct since the year 1500." According to a 2023 study published in Biological Reviews some 48% of 70,000 monitored species are experiencing population declines from human activity, whereas only 3% have increasing populations.
Biogeography and nocturnality
Studies of urban evolution give an indication of how species may respond to stressors such as temperature change and toxicity. Species display varying abilities to respond to altered environments through both phenotypic plasticity and genetic evolution. Researchers have documented the movement of many species into regions formerly too cold for them, often at rates faster than initially expected.
Permanent changes in the distribution of organisms from human influence will become identifiable in the geologic record. This has occurred in part as a result of changing climate, but also in response to farming and fishing, and to the accidental introduction of non-native species to new areas through global travel. The ecosystem of the entire Black Sea may have changed during the last 2000 years as a result of nutrient and silica input from eroding deforested lands along the Danube River.
Researchers have found that the growth of the human population and expansion of human activity has resulted in many species of animals that are normally active during the day, such as elephants, tigers and boars, becoming nocturnal to avoid contact with humans, who are largely diurnal.
Climate change
One geological symptom resulting from human activity is increasing atmospheric carbon dioxide (CO2) content. This signal in the Earth's climate system is especially significant because it is occurring much faster, and to a greater extent, than previously. Most of this increase is due to the combustion of fossil fuels such as coal, oil, and gas.
Geomorphology
Changes in drainage patterns traceable to human activity will persist over geologic time in large parts of the continents where the geologic regime is erosional. This involves, for example, the paths of roads and highways defined by their grading and drainage control. Direct changes to the form of the Earth's surface by human activities (quarrying and landscaping, for example) also record human impacts.
It has been suggested that the deposition of calthemite formations exemplify a natural process which has not previously occurred prior to the human modification of the Earth's surface, and which therefore represents a unique process of the Anthropocene. Calthemite is a secondary deposit, derived from concrete, lime, mortar or other calcareous material outside the cave environment. Calthemites grow on or under man-made structures (including mines and tunnels) and mimic the shapes and forms of cave speleothems, such as stalactites, stalagmites, flowstone etc.
Stratigraphy
Sedimentological record
Human activities like deforestation and road construction are believed to have elevated average total sediment fluxes across the Earth's surface. However, construction of dams on many rivers around the world means the rates of sediment deposition in any given place do not always appear to increase in the Anthropocene. For instance, many river deltas around the world are actually currently starved of sediment by such dams, and are subsiding and failing to keep up with sea level rise, rather than growing.
Fossil record
Increases in erosion due to farming and other operations will be reflected by changes in sediment composition and increases in deposition rates elsewhere. In land areas with a depositional regime, engineered structures will tend to be buried and preserved, along with litter and debris. Litter and debris thrown from boats or carried by rivers and creeks will accumulate in the marine environment, particularly in coastal areas, but also in mid-ocean garbage patches. Such human-created artifacts preserved in stratigraphy are known as "technofossils".
Changes in biodiversity will also be reflected in the fossil record, as will species introductions. An example cited is the domestic chicken, originally the red junglefowl Gallus gallus, which is native to south-east Asia but has since become the world's most common bird through human breeding and consumption, with over 60 billion consumed annually and whose bones would become fossilised in landfill sites. Hence, landfills are important resources to find "technofossils".
Trace elements
In terms of trace elements, there are distinct signatures left by modern societies. For example, in the Upper Fremont Glacier in Wyoming, there is a layer of chlorine present in ice cores from 1960s atomic weapon testing programs, as well as a layer of mercury associated with coal plants in the 1980s.
From the late 1940s, nuclear tests have led to local nuclear fallout and severe contamination of test sites both on land and in the surrounding marine environment. Some of the radionuclides released during the tests are caesium-137, strontium-90, isotopes of plutonium and americium, and radioiodine. These have been found to have had significant impact on the environment and on human beings. In particular, caesium-137 and strontium-90 have been found to have been released into the marine environment and led to bioaccumulation over a period through food chain cycles. The carbon isotope carbon-14, commonly released during nuclear tests, has also been found to be integrated into atmospheric CO2, infiltrating the biosphere through ocean-atmosphere gas exchange. Increase in thyroid cancer rates around the world is also surmised to be correlated with increasing proportions of the iodine-131 radionuclide.
The highest global concentration of radionuclides was estimated to have been in 1965, one of the dates which has been proposed as a possible benchmark for the start of the formally defined Anthropocene.
Human burning of fossil fuels has also left distinctly elevated concentrations of black carbon, inorganic ash, and spherical carbonaceous particles in recent sediments across the world. Concentrations of these components increases markedly and almost simultaneously around the world beginning around 1950.
Anthropocene markers
A marker that accounts for a substantial global impact of humans on the total environment, comparable in scale to those associated with significant perturbations of the geological past, is needed in place of minor changes in atmosphere composition.
A useful candidate for holding markers in the geologic time record is the pedosphere. Soils retain information about their climatic and geochemical history with features lasting for centuries or millennia. Human activity is now firmly established as the sixth factor of soil formation. Humanity affects pedogenesis directly by, for example, land levelling, trenching and embankment building, landscape-scale control of fire by early humans, organic matter enrichment from additions of manure or other waste, organic matter impoverishment due to continued cultivation and compaction from overgrazing. Human activity also affects pedogenesis indirectly by drift of eroded materials or pollutants. Anthropogenic soils are those markedly affected by human activities, such as repeated ploughing, the addition of fertilisers, contamination, sealing, or enrichment with artefacts (in the World Reference Base for Soil Resources they are classified as Anthrosols and Technosols). An example from archaeology would be dark earth phenomena when long-term human habitation enriches the soil with black carbon.
Anthropogenic soils are recalcitrant repositories of artefacts and properties that testify to the dominance of the human impact, and hence appear to be reliable markers for the Anthropocene. Some anthropogenic soils may be viewed as the 'golden spikes' of geologists (Global Boundary Stratotype Section and Point), which are locations where there are strata successions with clear evidences of a worldwide event, including the appearance of distinctive fossils. Drilling for fossil fuels has also created holes and tubes which are expected to be detectable for millions of years. The astrobiologist David Grinspoon has proposed that the site of the Apollo 11 Lunar landing, with the disturbances and artifacts that are so uniquely characteristic of our species' technological activity and which will survive over geological time spans could be considered as the 'golden spike' of the Anthropocene.
An October 2020 study coordinated by University of Colorado at Boulder found that distinct physical, chemical and biological changes to Earth's rock layers began around the year 1950. The research revealed that since about 1950, humans have doubled the amount of fixed nitrogen on the planet through industrial production for agriculture, created a hole in the ozone layer through the industrial scale release of chlorofluorocarbons (CFCs), released enough greenhouse gasses from fossil fuels to cause planetary level climate change, created tens of thousands of synthetic mineral-like compounds that do not naturally occur on Earth, and caused almost one-fifth of river sediment worldwide to no longer reach the ocean due to dams, reservoirs and diversions. Humans have produced so many millions of tons of plastic each year since the early 1950s that microplastics are "forming a near-ubiquitous and unambiguous marker of Anthropocene". The study highlights a strong correlation between global human population size and growth, global productivity and global energy use and that the "extraordinary outburst of consumption and productivity demonstrates how the Earth System has departed from its Holocene state since c. 1950 CE, forcing abrupt physical, chemical and biological changes to the Earth's stratigraphic record that can be used to justify the proposal for naming a new epoch—the Anthropocene."
A December 2020 study published in Nature found that the total anthropogenic mass, or human-made materials, outweighs all the biomass on earth, and highlighted that "this quantification of the human enterprise gives a mass-based quantitative and symbolic characterization of the human-induced epoch of the Anthropocene."
Debates
Although the validity of Anthropocene as a scientific term remains disputed, its underlying premise, i.e., that humans have become a geological force, or rather, the dominant force shaping the Earth's climate, has found traction among academics and the public. In an opinion piece for Philosophical Transactions of the Royal Society B, Rodolfo Dirzo, Gerardo Ceballos, and Paul R. Ehrlich write that the term is "increasingly penetrating the lexicon of not only the academic socio-sphere, but also society more generally", and is now included as an entry in the Oxford English Dictionary. The University of Cambridge, as another example, offers a degree in Anthropocene Studies. In the public sphere, the term Anthropocene has become increasingly ubiquitous in activist, pundit, and political discourses. Some who are critical of the term Anthropocene nevertheless concede that "For all its problems, [it] carries power." The popularity and currency of the word has led scholars to label the term a "charismatic meta-category" or "charismatic mega-concept." The term, regardless, has been subject to a variety of criticisms from social scientists, philosophers, Indigenous scholars, and others.
The anthropologist John Hartigan has argued that due to its status as a charismatic meta-category, the term Anthropocene marginalizes competing, but less visible, concepts such as that of "multispecies." The more salient charge is that the ready acceptance of Anthropocene is due to its conceptual proximity to the status quo – that is, to notions of human individuality and centrality.
Other scholars appreciate the way in which the term Anthropocene recognizes humanity as a geological force, but take issue with the indiscriminate way in which it does. Not all humans are equally responsible for the climate crisis. To that end, scholars such as the feminist theorist Donna Haraway and sociologist Jason Moore, have suggested naming the Epoch instead as the Capitalocene. Such implies capitalism as the fundamental reason for the ecological crisis, rather than just humans in general. However, according to philosopher Steven Best, humans have created "hierarchical and growth-addicted societies" and have demonstrated "ecocidal proclivities" long before the emergence of capitalism. Hartigan, Bould, and Haraway all critique what Anthropocene does as a term; however, Hartigan and Bould differ from Haraway in that they criticize the utility or validity of a geological framing of the climate crisis, whereas Haraway embraces it.
In addition to "Capitalocene," other terms have also been proposed by scholars to trace the roots of the Epoch to causes other than the human species broadly. Janae Davis, for example, has suggested the "Plantationocene" as a more appropriate term to call attention to the role that plantation agriculture has played in the formation of the Epoch, alongside Kathryn Yusoff's argument that racism as a whole is foundational to the Epoch. The Plantationocene concept traces "the ways that plantation logics organize modern economies, environments, bodies, and social relations." In a similar vein, Indigenous studies scholars such as Métis geographer Zoe Todd have argued that the Epoch must be dated back to the colonization of the Americas, as this "names the problem of colonialism as responsible for contemporary environmental crisis." Potawatomi philosopher Kyle Powys Whyte has further argued that the Anthropocene has been apparent to Indigenous peoples in the Americas since the inception of colonialism because of "colonialism's role in environmental change."
Other critiques of Anthropocene have focused on the genealogy of the concept. Todd also provides a phenomenological account, which draws on the work of the philosopher Sara Ahmed, writing: "When discourses and responses to the Anthropocene are being generated within institutions and disciplines which are embedded in broader systems that act as de facto 'white public space,' the academy and its power dynamics must be challenged." Other aspects which constitute current understandings of the concept of the Anthropocene such as the ontological split between nature and society, the assumption of the centrality and individuality of the human, and the framing of environmental discourse in largely scientific terms have been criticized by scholars as concepts rooted in colonialism and which reinforce systems of postcolonial domination. To that end, Todd makes the case that the concept of Anthropocene must be indigenized and decolonized if it is to become a vehicle of justice as opposed to white thought and domination.
The scholar Daniel Wildcat, a Yuchi member of the Muscogee Nation of Oklahoma, for example, has emphasized spiritual connection to the land as a crucial tenet for any ecological movement. Similarly, in her study of the Ladakhi people in northern India, the anthropologist Karine Gagné, detailed their understanding of the relation between nonhuman and human agency as one that is deeply intimate and mutual. For the Ladakhi, the nonhuman alters the epistemic, ethical, and affective development of humans – it provides a way of "being in the world." The Ladakhi, who live in the Himalayas, for example, have seen the retreat of the glaciers not just as a physical loss, but also as the loss of entities which generate knowledge, compel ethical reflections, and foster intimacy. Other scholars have similarly emphasized the need to return to notions of relatedness and interdependence with nature. The writer Jenny Odell has written about what Robin Wall Kimmerer calls "species loneliness," the loneliness which occurs from the separation of the human and the nonhuman, and the anthropologist Radhika Govindrajan has theorized on the ethics of care, or relatedness, which govern relations between humans and animals. Scholars are divided on whether to do away with the term Anthropocene or co-opt it.
More recently, eco-philosopher David Abram, in a book chapter titled 'Interbreathing in the Humilocene', has proposed adoption of the term ‘Humilocene’ (the Epoch of Humility), which emphasizes an ethical imperative and ecocultural direction that human societies should take. The term plays with the etymological roots of the term ‘human’, thus connecting it back with terms such as humility, humus (the soil), and even a corrective sense of humiliation that some human societies should feel given their collective destructive impact on the earth.
"Early anthropocene" model
William Ruddiman has argued that the Anthropocene began approximately 8,000 years ago with the development of farming and sedentary cultures. At that point, humans were dispersed across all continents except Antarctica, and the Neolithic Revolution was ongoing. During this period, humans developed agriculture and animal husbandry to supplement or replace hunter-gatherer subsistence. Such innovations were followed by a wave of extinctions, beginning with large mammals and terrestrial birds. This wave was driven by both the direct activity of humans (e.g. hunting) and the indirect consequences of land-use change for agriculture. Landscape-scale burning by prehistoric hunter-gathers may have been an additional early source of anthropogenic atmospheric carbon. Ruddiman also claims that the greenhouse gas emissions in-part responsible for the Anthropocene began 8,000 years ago when ancient farmers cleared forests to grow crops.
Ruddiman's work has been challenged with data from an earlier interglaciation ("Stage 11", approximately 400,000 years ago) which suggests that 16,000 more years must elapse before the current Holocene interglaciation comes to an end, and thus the early anthropogenic hypothesis is invalid. Also, the argument that "something" is needed to explain the differences in the Holocene is challenged by more recent research showing that all interglacials are different.
Homogenocene
Homogenocene (from Ancient Greek: homo-, same; geno-, kind; kainos-, new) is a more specific term used to define our current epoch, in which biodiversity is diminishing and biogeography and ecosystems around the globe seem more and more similar to one another, mainly due to invasive species that have been introduced around the globe either on purpose (crops, livestock) or inadvertently. This is a consequence of modern globalisation: transporting species across the world to other regions has never been as easy as it is today.
The term Homogenocene was first used by Michael Samways in his editorial article in the Journal of Insect Conservation from 1999 titled "Translocating fauna to foreign lands: Here comes the Homogenocene."
The term was used again by John L. Curnutt in the year 2000 in Ecology, in a short list titled "A Guide to the Homogenocene", which reviewed Alien species in North America and Hawaii: impacts on natural ecosystems by George Cox. Charles C. Mann, in his acclaimed book 1493: Uncovering the New World Columbus Created, gives a bird's-eye view of the mechanisms and ongoing implications of the homogenocene.
Society and culture
Humanities
The concept of the Anthropocene has also been approached via humanities such as philosophy, literature and art. In the scholarly world, it has been the subject of increasing attention through special journals, conferences, and disciplinary reports. The Anthropocene, its attendant timescale, and ecological implications prompt questions about death and the end of civilisation, memory and archives, the scope and methods of humanistic inquiry, and emotional responses to the "end of nature". Some scholars have posited that the realities of the Anthropocene, including "human-induced biodiversity loss, exponential increases in per-capita resource consumption, and global climate change," have made the goal of environmental sustainability largely unattainable and obsolete.
Historians have actively engaged the Anthropocene. In 2000, the same year that Paul Crutzen coined the term, world historian John McNeill published Something New Under the Sun, tracing the rise of human societies' unprecedented impact on the planet in the twentieth century. In 2001, historian of science Naomi Oreskes revealed the systematic efforts to undermine trust in climate change science and went on to detail the corporate interests delaying action on the environmental challenge. Both McNeill and Oreskes became members of the Anthropocene Working Group because of their work correlating human activities and planetary transformation.
Popular culture
In 2019, the English musician Nick Mulvey released a music video on YouTube named "In the Anthropocene". In cooperation with Sharp's Brewery, the song was recorded on 105 vinyl records made of washed-up plastic from the Cornish coast.
The Anthropocene Reviewed is a podcast and book by author John Green, where he "reviews different facets of the human-centered planet on a five-star scale".
Photographer Edward Burtynsky created "The Anthropocene Project" with Jennifer Baichwal and Nicholas de Pencier, which is a collection of photographs, exhibitions, a film, and a book. His photographs focus on landscape photography that captures the effects human beings have had on the earth.
In 2015, the American death metal band Cattle Decapitation released its seventh studio album titled The Anthropocene Extinction.
In 2020, Canadian musician Grimes released her fifth studio album titled Miss Anthropocene. The name is also a pun on the feminine title "Miss" and the words "misanthrope" and "Anthropocene."
| Physical sciences | Geological timescale | Earth science |
374407 | https://en.wikipedia.org/wiki/Finless%20porpoise | Finless porpoise | Neophocaena is a genus of porpoise native to the Indian and Pacific oceans, as well as the freshwater habitats of the Yangtze River basin in China. They are commonly known as finless porpoises. Genetic studies indicate that Neophocaena is the most basal living member of the porpoise family.
There are three species in this genus:
Description
The finless porpoises are the only porpoises to lack a true dorsal fin. Instead there is a low ridge covered in thick skin bearing several lines of tiny tubercles. In addition, the forehead is unusually steep compared with those of other porpoises. With fifteen to twenty-one teeth in each jaw, they also have, on average, fewer teeth than other porpoises, although there is some overlap, and this is not a reliable means of distinguishing them. Finless porpoises in Ariake Sound-Tachibana Bay showed ontogenetic and seasonal variations in diet. The mean length at weaning was estimated to be 101 cm, corresponding to approximately 6 months of age. Calves fed on small-sized demersal fish and cephalopods.
| Biology and health sciences | Toothed whale | Animals |
374662 | https://en.wikipedia.org/wiki/Cupressaceae | Cupressaceae | Cupressaceae or the cypress family is a family of conifers. The family includes 27–30 genera (17 monotypic), among them the junipers and redwoods, with about 130–140 species in total. They are monoecious, subdioecious or (rarely) dioecious trees and shrubs up to tall. The bark of mature trees is commonly orange- to red-brown and of stringy texture, often flaking or peeling in vertical strips, but smooth, scaly or hard and square-cracked in some species. The family reached its peak of diversity during the Mesozoic Era.
Description
The leaves are arranged either spirally, in decussate pairs (opposite pairs, each pair at 90° to the previous pair) or in decussate whorls of three or four, depending on the genus. On young plants, the leaves are needle-like, becoming small and scale-like on mature plants of many genera; some genera and species retain needle-like leaves throughout their lives. Old leaves are mostly not shed individually, but in small sprays of foliage (cladoptosis); exceptions are leaves on the shoots that develop into branches. These leaves eventually fall off individually when the bark starts to flake. Most are evergreen with the leaves persisting 2–10 years, but three genera (Glyptostrobus, Metasequoia and Taxodium) are deciduous or include deciduous species.
The seed cones are either woody, leathery, or (in Juniperus) berry-like and fleshy, with one to several ovules per scale. The bract scale and ovuliferous scale are fused together except at the apex, where the bract scale is often visible as a short spine (often called an umbo) on the ovuliferous scale. As with the foliage, the cone scales are arranged spirally, decussate (opposite) or whorled, depending on the genus. The seeds are mostly small and somewhat flattened, with two narrow wings, one down each side of the seed; rarely (e.g. Actinostrobus) triangular in section with three wings; in some genera (e.g. Glyptostrobus and Libocedrus), one of the wings is significantly larger than the other, and in some others (e.g. Juniperus, Microbiota, Platycladus, and Taxodium) the seed is larger and wingless. The seedlings usually have two cotyledons, but in some species up to six. The pollen cones are more uniform in structure across the family, 1–20 mm long, with the scales again arranged spirally, decussate (opposite) or whorled, depending on the genus; they may be borne singly at the apex of a shoot (most genera), in the leaf axils (Cryptomeria), in dense clusters (Cunninghamia and Juniperus drupacea), or on discrete long pendulous panicle-like shoots (Metasequoia and Taxodium).
Cupressaceae is a widely distributed conifer family, with a near-global range on all continents except Antarctica, stretching from 70°N in Arctic Norway (Juniperus communis) to 55°S in southernmost Chile (Pilgerodendron uviferum), further south than any other conifer species. Juniperus indica reaches 4930 m altitude in Tibet. Most habitats on land are occupied, with the exceptions of polar tundra and tropical lowland rainforest (though several species are important components of temperate rainforests and tropical highland cloud forests); they are also rare in deserts, with only a few species able to tolerate severe drought, notably Cupressus dupreziana in the central Sahara. Despite the wide overall distribution, many genera and species show very restricted relictual distributions, and many are endangered species.
The world's largest (Sequoiadendron giganteum) and current tallest (Sequoia sempervirens) trees belong to the Cupressaceae, as do six of the ten longest-lived tree species.
Classification
Molecular and morphological studies have expanded Cupressaceae to include the genera of Taxodiaceae, previously treated as a distinct family, but now shown not to differ from the Cupressaceae in any consistent characteristics. The member genera have been placed into five distinct subfamilies of Cupressaceae, Athrotaxidoideae, Cunninghamioideae, Sequoioideae, Taiwanioideae, and Taxodioideae, which form a grade basal to Cupressaceae sensu stricto, containing Callitroideae and Cupressoideae. The former Taxodiaceae genus, Sciadopitys, has been moved to a separate monotypic family Sciadopityaceae due to being genetically distinct from the rest of the Cupressaceae. In some classifications Cupressaceae is raised to an order, Cupressales. Molecular evidence supports Cupressaceae being the sister group to the yews (family Taxaceae), from which it diverged during the early-mid Triassic. The clade comprising both is sister to Sciadopityaceae, which diverged from them during the early-mid Permian. The oldest definitive record of Cupressaceae is Austrohamia minuta from the Early Jurassic (Pliensbachian) of Patagonia, known from many parts of the plant. The reproductive structures of Austrohamia have strong similarities to those of the primitive living cypress genera Taiwania and Cunninghamia. By the Middle to Late Jurassic Cupressaceae were abundant in warm temperate–tropical regions of the Northern Hemisphere. The diversity of the group continued to increase during the Cretaceous period. The earliest appearance of the non-taxodiaceous Cupressaceae (the clade containing Callitroideae and Cupressoideae) is in the mid-Cretaceous, represented by "Widdringtonia" americana from the Cenomanian of North America, and they subsequently diversified during the Late Cretaceous and early Cenozoic.
The family is divided into seven subfamilies, based on genetic and morphological analysis as follows:
Subfamily Cunninghamioideae
Cunninghamia
Elatides Middle Jurassic–Early Cretaceous, Eurasia (possibly North America)
Hughmillerites Late Jurassic–Early Cretaceous, Europe, North America
Sewardiodendron Middle Jurassic, Asia
Scitistrobus Middle Jurassic, Europe
Pentakonos Early Cretaceous, Asia
Acanthostrobus Late Cretaceous, North America
Mikasostrobus Late Cretaceous, Japan
Parataiwania Late Cretaceous, Japan
Ohanastrobus Late Cretaceous, Japan
Nishidastrobus Late Cretaceous, Japan
Cunninghamiostrobus Early Cretaceous–Oligocene, Japan, North America
Subfamily Taiwanioideae
Taiwania
Subfamily Athrotaxidoideae
Athrotaxis – Tasmanian cedar
Subfamily Sequoioideae
Metasequoia – dawn redwood
Sequoia – coast redwood
Sequoiadendron – giant sequoia
Subfamily Taxodioideae
Cryptomeria – sugi
Glyptostrobus – Chinese swamp cypress
Taxodium – bald cypress
Subfamily Callitroideae
Actinostrobus – cypress-pine
Austrocedrus
Callitris – cypress-pine
Diselma
Fitzroya – alerce
Libocedrus
Neocallitropsis
Papuacedrus
Pilgerodendron
Widdringtonia
Subfamily Cupressoideae
Callitropsis – Nootka cypress
Calocedrus – incense-cedar
Chamaecyparis – cypress
Cupressus – cypress
Fokienia – Fujian cypress
Hesperocyparis
Juniperus – juniper
Microbiota
Platycladus – Chinese arborvitae
Tetraclinis
Thuja – thuja or arborvitae
Thujopsis – hiba
Xanthocyparis – cypress
A 2010 study of Actinostrobus and Callitris places the three species of Actinostrobus within an expanded Callitris based on analysis of 42 morphological and anatomical characters.
Phylogeny based on a 2000 study of morphological and molecular data. Several later papers have suggested segregating Cupressus species into four genera in total.
A 2021 molecular study supported a very similar phylogeny but with some slight differences, along with the splitting of Cupressus (found to be paraphyletic):
Uses
Many of the species are important timber sources, especially in the genera Calocedrus, Chamaecyparis, Cryptomeria, Cunninghamia, Cupressus, Sequoia, Taxodium, and Thuja. Calocedrus decurrens is the main wood used to make wooden pencils, and is also used in chests, paneling, and flooring. In China, cypress wood, known as baimu or bomu, was carved into furniture, notably that of Cupressus funebris and, particularly in tropical areas, Fujian cypress and the aromatic wood of Glyptostrobus pensilis. Juniperus virginiana has been used by Native Americans for waymarking. Its heartwood is fragrant and used in clothes chests, drawers and closets to repel moths. It is a source of juniper oil used in perfumes and medicines. The wood is also used for long-lasting fence posts and for bows.
Several genera are important in horticulture. Junipers are planted as evergreen trees, shrubs, and groundcovers. Hundreds of cultivars have been developed, including plants with blue, grey, or yellow foliage. Chamaecyparis and Thuja also provide hundreds of dwarf cultivars as well as trees, including Lawson's cypress. Dawn redwood is widely planted as an ornamental tree because of its excellent horticultural qualities, rapid growth and status as a living fossil. Giant sequoia is a popular ornamental tree and is occasionally grown for timber. Giant sequoia, Leyland cypress, and Arizona cypress are grown to a small extent as Christmas trees.
Some species have significant cultural importance. The ahuehuete (Taxodium mucronatum) is the national tree of Mexico. Coast redwood and giant sequoia were jointly designated the state tree of California, and are major tourist attractions where they grow naturally. Parks such as Redwood National and State Parks and Giant Sequoia National Monument protect almost half the remaining stands of coast redwoods and giant sequoias. Bald cypress is the state tree of Louisiana. The bald cypresses of southern swamps, often festooned with Spanish moss, are another tourist attraction; they can be seen at Big Cypress National Preserve in Florida. Bald cypress "knees" are often sold as souvenirs, made into lamps, or carved to make folk art. Monterey cypresses are often visited by tourists and photographers, particularly a tree known as the Lone Cypress.
The fleshy cones of Juniperus communis are used to flavour gin.
Native Americans and early European explorers used Thuja leaves as a cure for scurvy. Distillation of Fokienia roots produces an essential oil called pemou oil used in medicine and cosmetics.
Recent work on endophyte biology in Cupressaceae by the groups of Jalal Soltani (Bu-Ali Sina University) and Elizabeth Arnold (University of Arizona) has revealed prevalent symbioses of endophytes and endofungal bacteria with the family. Both groups have also described current and potential uses of endophytes of Cupressaceae trees in agroforestry and medicine.
Chemistry
The Cupressaceae trees contain a wide range of extractives, especially terpenes and terpenoids, both of which have strong and often pleasant odors.
The heartwood, bark and leaves are the tree parts richest in terpenes. Some of these compounds are widely distributed in other trees as well, and some are typical of the Cupressaceae family. The best-known terpenoids found in conifers are sesquiterpenoids, diterpenes and tropolones. Diterpenes are commonly found in many types of conifers and are not distinctive of this family. Some sesquiterpenoids (e.g. bisabolanes, cubenanes, guaianes, ylanganes, himachalanes, longifolanes, longibornanes, longipinanes, cedranes, thujopsanes) are also present in Pinaceae, Podocarpaceae and Taxodiaceae. Meanwhile, chamigranes, cuparanes, widdranes and acoranes are more distinctive of Cupressaceae. Tropolone derivatives, such as nootkatin, chanootin, thujaplicinol and hinokitiol, are particularly characteristic of Cupressaceae.
Disease vectors
Several genera are alternate hosts of Gymnosporangium rusts, which damage apples and other related trees in the subfamily Maloideae.
Allergenicity
The pollen of many genera of Cupressaceae is allergenic, causing major hay fever problems in areas where they are abundant, most notably Cryptomeria japonica (sugi) pollen in Japan. Highly allergenic species of cypress with an OPALS allergy scale rating of 8 out of 10 or higher include Taxodium, Cupressus, Callitris, Chamaecyparis, and the males and monoecious variants of Austrocedrus and Widdringtonia. However, the females of some species have a very low potential for causing allergies (an OPALS allergy scale rating of 2 or lower), including Austrocedrus females and Widdringtonia females.
| Biology and health sciences | Cupressaceae | Plants |
374673 | https://en.wikipedia.org/wiki/Mountain%20pass | Mountain pass | A mountain pass is a navigable route through a mountain range or over a ridge. Since mountain ranges can present formidable barriers to travel, passes have played a key role in trade, war, and both human and animal migration throughout history. At lower elevations it may be called a hill pass. A mountain pass is typically formed between two volcanic peaks or created by erosion from water or wind.
Overview
Mountain passes make use of a gap, saddle, col or notch. A topographic saddle is analogous to the mathematical concept of a saddle surface, with the saddle point marking the highest point along the route between two valleys and the lowest point along the ridge connecting the adjacent summits. On a topographic map, passes can be identified by contour lines with an hourglass shape, which indicates a low spot between two higher points. In the high mountains, a difference of between the summit and the pass is defined as a mountain pass.
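The saddle analogy can be made concrete with a small numerical sketch. The surface used below, z = x² − y², is a textbook saddle chosen purely for illustration (not taken from this article): its central point is the lowest point along one axis, playing the role of the ridge, and the highest point along the other, playing the role of the route between the valleys.

```python
import numpy as np

# Toy saddle surface z = x**2 - y**2 as a model of a mountain pass (illustrative only).
x = np.linspace(-1.0, 1.0, 201)

ridge_profile = x**2     # elevation along the "ridge" line (y = 0)
route_profile = -x**2    # elevation along the crossing "route" (x = 0)

# The same point (the origin) is the low point of the ridge and the high
# point of the route -- the hourglass-contour saddle described above.
print("lowest point on the ridge:  x =", round(x[np.argmin(ridge_profile)], 6))
print("highest point on the route: y =", round(x[np.argmax(route_profile)], 6))
```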
Passes are often found just above the source of a river, constituting a drainage divide. A pass may be very short, consisting of steep slopes to the top of the pass, or a valley many kilometers long, whose highest point might only be identifiable by surveying.
Roads and railways have long been built through passes. Some high and rugged passes may have tunnels bored underneath a nearby mountainside, as with the Eisenhower Tunnel bypassing Loveland Pass in the Rockies, to allow faster traffic flow throughout the year.
The top of a pass is frequently the only flat ground in the area, and may be a high vantage point. In some cases this makes it a preferred site for buildings. If a national border follows the ridge of a mountain range, a pass over the mountains is typically on the border, and there may be a border control or customs station, and possibly a military post. For instance, Argentina and Chile share the world's third-longest international border, long, which runs north–south along the Andes mountains and includes 42 mountain passes.
On a road over a pass, it is customary to have a small roadside sign giving the name of the pass and its elevation above mean sea level.
Apart from offering relatively easy travel between valleys, passes also provide a route between two mountain tops with a minimum of descent. As a result, it is common for tracks to meet at a pass; this often makes them convenient routes even when travelling between a summit and the valley floor. Passes traditionally were places for trade routes, communications, cultural exchange, military expeditions etc. A typical example is the Brenner pass in the Alps.
Some mountain passes above the tree line have problems with snow drift in the winter. This might be alleviated by building the road a few meters above the ground, which will make snow blow off the road.
Synonyms
There are many words for pass in the English-speaking world. In the United States, pass is very common in the West, the word gap is common in the southern Appalachians, notch in parts of New England, and saddle in northern Idaho. The term col, derived from Old French, is also used, particularly in Europe.
In the highest mountain range in the world, the Himalayas, passes are denoted by the suffix "La" in Tibetan, Ladakhi, and several other regional languages. Examples are the Taglang La at 5,328 m (17,480 ft) on the Leh-Manali highway, and the Sia La at 5,589 m (18,337 ft) in the Eastern Karakoram range.
Scotland has the Gaelic term bealach (anglicised "balloch"), while Wales has the similar bwlch (both being insular Celtic languages). In the Lake District of north-west England, the term hause is often used, although the term pass is also common—one distinction is that a pass can refer to a route, as well as the highest part thereof, while a hause is simply that highest part, often flattened somewhat into a high-level plateau.
In Japan they are known as tōge, which means "pass" in Japanese. The word can also refer to narrow, winding roads that can be found in and around mountains and geographically similar areas, or specifically to a style of street racing which may take place on these roads.
Around the world
There are thousands of named passes around the world, some of which are well-known, such as the Khyber Pass close to the present-day Afghanistan-Pakistan border on the ancient Silk Road, the Great St. Bernard Pass at in the Alps, the Chang La at , the Khardung La at in Ladakh, India and the Palakkad Gap at in Palakkad, Kerala, India. The roads at Mana Pass at and Marsimik La at , on and near the China–India border respectively, appear to be the world's two highest motorable passes. Khunjerab Pass between Pakistan and China at is also a high-altitude motorable mountain pass. One of the famous but non-motorable mountain passes is Thorong La at in Annapurna Conservation Area, Nepal.
Gallery
| Physical sciences | Montane landforms | Earth science |
374977 | https://en.wikipedia.org/wiki/Disc%20tumbler%20lock | Disc tumbler lock | A disc tumbler or disc detainer lock is a lock composed of slotted rotating detainer discs. The lock was invented by the Finnish founder of Abloy, Emil Henriksson (1886–1959), in 1907 and first manufactured under the Abloy brand in 1918.
Design
Disc tumbler locks are composed of slotted rotating detainer discs. A specially cut key rotates these discs like the tumblers of a safe to align the slots, allowing the sidebar to drop into the slots, thus opening the lock. Unlike a wafer tumbler lock or a pin tumbler lock, this mechanism does not use springs. Because they do not contain springs, they are better suited for areas with harsh conditions and are often used at outdoor locations like railroad and public utility installations.
The original Abloy Classic design consists of a notched semi-cylindrical key, and a lock with detainer discs with holes ranging from a semicircle (180°) to a 3/4 circle (270°). The key is inserted and rotated 90°. Notches, machined to an angle, correspond to complementary angles in the holes of the discs. Thus, the misalignment of the slots is "corrected" by a rotation to the correct angle. For example, if the hole is 270°, the key is 180°, and if the hole is 240° (270° minus 30°), the key is 150° (180° with 30° notch) of the circle. In addition, there is a notch in the perimeter of each disc. A sidebar inside an opening in the lock cylinder around the discs and an edge in the casing obstruct the movement of the cylinder beyond the 90°.
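A minimal sketch of the angular relationship described above, assuming the 90° key rotation of the classic design; disc hole values other than the 270° and 240° quoted in the text are hypothetical examples, not manufacturer specifications.

```python
# Illustrative sketch of the angle arithmetic for the classic 90-degree-turn
# disc detainer design described above.
ROTATION_DEG = 90  # the key is turned 90 degrees to operate the lock

def key_cut_angle(hole_angle_deg: int) -> int:
    """Angle of the key's remaining semicircular section for a given disc hole."""
    return hole_angle_deg - ROTATION_DEG

for hole in (270, 240, 210):  # 210 is a hypothetical extra value
    cut = key_cut_angle(hole)
    notch = 180 - cut  # depth of the notch machined into the 180-degree blank
    print(f"hole {hole} deg -> key section {cut} deg (notch {notch} deg)")
```

Running it reproduces the pairings given above: a 270° hole matches a 180° (uncut) key section, and a 240° hole matches a 150° section, i.e. a 30° notch.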
When a correct key is inserted and turned, all the discs will rotate so that notches in the perimeters line up. This allows a sidebar to drop from the cylinder into the groove made by the lined-up notches in the discs, so that it does not obstruct the cylinder, allowing the cylinder to rotate and open the lock. If a key with one wrong notch is used, one disc will be rotated to an incorrect angle, thus its notch will not line up with the rest, and the lock cannot be unlocked.
The lock is locked again by rotating it into the other direction, sliding the sidebar back into the cylinder opening and allowing the straight edge of the key to return the discs to the scrambled "zero" position.
The mechanism makes it easy to construct locks that can be opened with multiple different keys: "blank" discs with a circular hole are used, and only notches shared by the keys are employed in the lock mechanism. This is commonly used for locks of common areas such as garages in apartment houses.
Lock picking
Disc tumbler locks tend to be more difficult to pick than competitively priced pin tumbler locks, and are often sold as "high security" locks for that reason. Picking them is not impossible but typically requires dedicated tools and more time; they are similar in picking difficulty to curtained 5-lever locks. The disc tumbler lock cannot be bumped. This level of difficulty tends to drive attention to alternative methods of gaining entry. More expensive locks have false gates, similar to security pins in pin tumbler locks, which catch the sidebar and make the picker think a disc has been set when it has not.
The locking mechanism can be disabled destructively by drilling into the lock to destroy the sidebar. Anti-drilling plates can be installed to prevent this.
Alternatively, for cheaper locks, the discs can be bypassed by inserting a screwdriver or similar device to apply torque, causing the retaining clip to pop off and the core of the lock to spill out. The same effect can be created by turning a torque tool in the lock until the end cap pops off. After this, cheap plastic locks can be turned by hand, pulled out with pliers, or deformed by heating, causing the lock to open, or completely fall apart. More expensive disc tumbler locks use metal parts that can defeat this technique or make it more difficult.
| Technology | Mechanisms | null |
375096 | https://en.wikipedia.org/wiki/Retriever | Retriever | A retriever is a type of gun dog that retrieves game for a hunter. Generally gun dogs are divided into three major classifications: retrievers, flushing spaniels, and pointing breeds. Retrievers were bred primarily to retrieve birds or other prey and return them to the hunter without damage; retrievers are distinguished in that nonslip retrieval is their primary function. As a result, retriever breeds are bred for soft mouths and a great willingness to please, learn, and obey. A soft mouth refers to the willingness of the dog to carry game in its mouth without biting into it. "Hard mouth" is a serious fault in a hunting dog and is very difficult to correct. A hard-mouthed dog renders game unpresentable or at worst inedible.
The retriever's willingness to please, patient nature and trainability have made breeds such as the Labrador Retriever and Golden Retriever popular as disability assistance dogs. The outstanding reputation of the retriever has placed both the Labrador and the Golden Retriever among the top ten dogs for children and families around the world.
Skills
To carry out the duties of a gun dog, a retriever should be trained to perform these tasks:
Remain under control: Retrievers are typically used for waterfowl hunting. Since a majority of waterfowl hunting employs the use of small boats in winter conditions, retrievers are trained to remain under control sitting calmly and quietly until sent to retrieve. This is often referred to as "steadiness". Steadiness helps to avoid an accidental capsizing, disrupting the hunter's aim or the possible accidental discharge of a firearm which could cause serious harm or death to others in the hunting party or to the dog itself. A steady dog is also better able to “mark” downed game.
Mark downed game: Marking is the process of watching for a falling bird or multiple birds. When the command "mark" is given, the dog should look up for incoming birds and remember where each bird falls. Well-trained retrievers are taught to follow the direction the gun barrel is pointing to mark where the birds fall. Once the game is downed, the handler will command the dog to retrieve the game. The dog's ability to remember multiple “marks” is extremely important, and trainers use techniques to improve a dog's marking and memory ability.
Perform a blind retrieve: When hunting waterfowl, a retriever's primary job is to retrieve downed birds. At times, a dog will not see the game fall, so retrievers are trained to take hand, voice, and whistle commands from the handler directing the dog to the downed game for retrieval. This is called a “blind retrieve”. Precision between the dog and handler is extremely useful and desired so as to minimize retrieval time and limit the disturbance of surrounding cover. The majority of blind retrieves in the field are made within 30–80 yards of the gun, but a good retriever/handler team can perform precise blind retrieves out to 100 yards and beyond.
Retrieve to hand: Although some hunters prefer to have a bird dropped at their feet, the majority of handlers require the dog to deliver the game to hand, meaning once the dog has completed the retrieve, it will gently but firmly hold the bird until commanded to release it to the handler's hand. Delivery to hand reduces the risk of a crippled bird escaping, as the bird remains in the dog's mouth until the handler takes hold of it.
Honoring: When hunting with multiple dogs, a retriever should remain under control while other dogs work, and wait its turn. This is important because having multiple dogs retrieving game simultaneously can cause confusion. This is one reason why many handlers use the dog's name as the command to retrieve.
Shake on command: Following a retrieve, a well-trained dog will not shake off excess water from its fur until after the delivery is complete. A dog shaking water from its fur in a small boat at worst risks capsizing the craft in cold winter conditions and at best will most likely shower hunters and equipment. Also, a dog shaking while still holding the game in its mouth could damage the bird to the point of making it unfit for the table. To avoid these mishaps, trainers use a distinct command releasing a dog to shake.
Quarter: Retrievers are often used in a secondary role as an upland flushing dog. Dogs must work in a pattern in front of the hunter seeking upland game birds. The retriever must be taught to stay within gun range to avoid flushing a bird outside of shooting distance.
Remain steady to wing and shot: When hunting upland birds, the flushing dog should be steady to wing and shot, meaning it sits when a bird rises or a gun is fired. It does this to mark the fall and to avoid flushing other birds by unnecessarily pursuing a missed bird.
Although most individual retrievers have the raw capacity to be trained to perform as a gun dog, a significant amount of thought and effort is given to breeding in specific desired traits into dogs from field bred lines that greatly enhance the training process. When breeding retrievers for field work, extensive consideration is given to:
Biddableness: Because producing a well-trained retriever capable of performing the tasks outlined above requires a significant amount of time and effort, an intelligent, controllable, and open-to-learning (biddable) retriever is of utmost importance.
Desire and drive: These traits cover a broad range of behaviors exhibited by the “good retriever”. Most notably, such dogs demonstrate a desire to retrieve almost to the point of manic behavior and will take on significant obstacles to make a retrieve. They also demonstrate an exceptional interest in birds, bird feathers, and bird scent, which is termed “birdiness”.
Marking and memory: Eyesight and depth perception are of paramount importance to a dog's ability to mark downed game. Remembering each fall is also critical. While retriever trainers use special techniques to help a dog to mark and remember downed game, a good retriever is born with these “raw tools”.
Nose: Dogs are led primarily by their nose. A good retriever uses its nose to find downed game in heavy cover and while quartering a field to locate and flush upland game birds.
Soft mouth: A soft-mouthed dog is needed to ensure retrieved game is fit for the table. A soft-mouthed dog picks up and holds game softly but firmly on the retrieve. Dogs that unnecessarily drop birds, crunch on, chew, or even eat the bird before delivery to the handler are considered “hard-mouthed” or are described as having “mouth problems”. While training can overcome most “mouth problems”, a dog with an inherently soft mouth is more desirable when starting the training process.
Hardiness: Waterfowl hunting is a cold-weather sport undertaken across a wide variety of locations and conditions, from thick, flooded timber in the southern US, to icy and ice-covered ponds in the Midwest, to frigid seas along the upper New England coast. A good retriever willingly re-enters the water and makes multiple retrieves under these and other extreme conditions.
Lifespan
The average lifespan of a retriever is about 10–12 years. Some may live up to 15 years.
Retriever breeds
Chesapeake Bay Retriever
Curly Coated Retriever
Flat Coated Retriever
Golden Retriever
Labrador Retriever
Nova Scotia Duck Tolling Retriever
Murray River Retriever
Other dogs with retrieving skill
American Cocker Spaniel
American Water Spaniel
Barbet
Boykin Spaniel
Blackmouth Cur
Blue Lacy
Brittany
Clumber Spaniel
Dutch Partridge Dog
English Cocker Spaniel
English Setter
English Springer Spaniel
Épagneul Bleu de Picardie
Epagneul Pont-Audemer
Frisian Pointer (stabyhoun/stabij)
German Longhaired Pointer
German Shorthaired Pointer
German Wirehaired Pointer
German Water Spaniel
Gordon Setter
Hungarian Vizsla
Hungarian Wirehaired Vizsla
Italian Spinone
Irish Setter
Irish Water Spaniel
Newfoundland
Pointer
Poodle
Portuguese Water Dog
Spanish Water Dog
Sussex Spaniel
Tibetan Terrier
Weimaraner
Welsh Springer Spaniel
Wire-haired Pointing Griffon
| Biology and health sciences | Dogs | Animals |
375130 | https://en.wikipedia.org/wiki/Aldosterone | Aldosterone | Aldosterone is the main mineralocorticoid steroid hormone produced by the zona glomerulosa of the adrenal cortex in the adrenal gland. It is essential for sodium conservation in the kidney, salivary glands, sweat glands, and colon. It plays a central role in the homeostatic regulation of blood pressure, plasma sodium (Na+), and potassium (K+) levels. It does so primarily by acting on the mineralocorticoid receptors in the distal tubules and collecting ducts of the nephron. It influences the reabsorption of sodium and excretion of potassium (from and into the tubular fluids, respectively) of the kidney, thereby indirectly influencing water retention or loss, blood pressure, and blood volume. When dysregulated, aldosterone is pathogenic and contributes to the development and progression of cardiovascular and kidney disease. Aldosterone has exactly the opposite function of the atrial natriuretic hormone secreted by the heart.
Aldosterone is part of the renin–angiotensin–aldosterone system. It has a plasma half-life of less than 20 minutes. Drugs that interfere with the secretion or action of aldosterone are in use as antihypertensives, like lisinopril, which lowers blood pressure by blocking the angiotensin-converting enzyme (ACE), leading to lower aldosterone secretion. The net effect of these drugs is to reduce sodium and water retention but increase the retention of potassium. In other words, these drugs stimulate the excretion of sodium and water in urine, while they block the excretion of potassium.
Another example is spironolactone, a potassium-sparing diuretic of the steroidal spirolactone group, which interferes with the aldosterone receptor (among others) leading to lower blood pressure by the mechanism described above.
Aldosterone was first isolated by Sylvia Tait (Simpson) and Jim Tait in 1953, in collaboration with Tadeusz Reichstein.
Biosynthesis
The corticosteroids are synthesized from cholesterol within the zona glomerulosa and zona fasciculata of adrenal cortex. Most steroidogenic reactions are catalysed by enzymes of the cytochrome P450 family. They are located within the mitochondria and require adrenodoxin as a cofactor (except 21-hydroxylase and 17α-hydroxylase).
Aldosterone and corticosterone share the first part of their biosynthetic pathways. The last parts are mediated either by the aldosterone synthase (for aldosterone) or by the 11β-hydroxylase (for corticosterone). These enzymes are nearly identical (they share 11β-hydroxylation and 18-hydroxylation functions), but aldosterone synthase is also able to perform an 18-oxidation. Moreover, aldosterone synthase is found within the zona glomerulosa at the outer edge of the adrenal cortex; 11β-hydroxylase is found in the zona glomerulosa and zona fasciculata.
Aldosterone synthase is normally absent in other sections of the adrenal gland.
Stimulation
Aldosterone synthesis is stimulated by several factors:
increase in the plasma concentration of angiotensin III, a metabolite of angiotensin II
increase in plasma angiotensin II, ACTH, or potassium levels, which are present in proportion to plasma sodium deficiencies. (The increased potassium level works to regulate aldosterone synthesis by depolarizing the cells in the zona glomerulosa, which opens the voltage-dependent calcium channels.) The level of angiotensin II is regulated by angiotensin I, which is in turn regulated by renin, a hormone secreted in the kidneys.
Serum potassium concentrations are the most potent stimulator of aldosterone secretion.
the ACTH stimulation test, which is sometimes used to stimulate the production of aldosterone along with cortisol to determine whether primary or secondary adrenal insufficiency is present. However, ACTH has only a minor role in regulating aldosterone production; with hypopituitarism there is no atrophy of the zona glomerulosa.
plasma acidosis
the stretch receptors located in the atria of the heart. If decreased blood pressure is detected, the adrenal gland is stimulated by these stretch receptors to release aldosterone, which increases sodium reabsorption from the urine, sweat, and the gut. This causes increased osmolarity in the extracellular fluid, which will eventually return blood pressure toward normal.
adrenoglomerulotropin, a lipid factor, obtained from pineal extracts. It selectively stimulates secretion of aldosterone.
The secretion of aldosterone has a diurnal rhythm.
Biological function
Aldosterone is the primary endogenous mineralocorticoid in humans; deoxycorticosterone is another important member of this class. Aldosterone tends to promote Na+ and water retention, and to lower plasma K+ concentration, by the following mechanisms:
Acting on the nuclear mineralocorticoid receptors (MR) within the principal cells of the distal tubule and the collecting duct of the kidney nephron, it upregulates and activates the basolateral Na+/K+ pumps, which pumps three sodium ions out of the cell, into the interstitial fluid and two potassium ions into the cell from the interstitial fluid. This creates a concentration gradient which results in reabsorption of sodium (Na+) ions and water (which follows sodium) into the blood, and secreting potassium (K+) ions into the urine (lumen of collecting duct).
Aldosterone upregulates epithelial sodium channels (ENaCs) in the collecting duct and the colon, increasing apical membrane permeability for Na+ and thus absorption.
Cl− is reabsorbed in conjunction with sodium cations to maintain the system's electrochemical balance.
Aldosterone stimulates the secretion of K+ into the tubular lumen.
Aldosterone stimulates Na+ and water reabsorption from the gut, salivary and sweat glands in exchange for K+.
Aldosterone stimulates secretion of H+ via the H+-ATPase in the intercalated cells of the cortical collecting tubules.
Aldosterone upregulates expression of NCC in the distal convoluted tubule chronically and its activity acutely.
Aldosterone is responsible for the reabsorption of about 2% of filtered sodium in the kidneys, which is nearly equal to the entire sodium content in human blood under normal glomerular filtration rates.
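A rough order-of-magnitude check of that figure, using commonly cited textbook values that are assumptions for this sketch rather than numbers taken from the article:

```python
# Back-of-the-envelope check of the "about 2% of filtered sodium" figure,
# using commonly cited textbook values (assumptions, not from the source).
GFR_L_PER_DAY = 180          # typical glomerular filtration rate
PLASMA_NA_MMOL_PER_L = 140   # typical plasma sodium concentration
BLOOD_VOLUME_L = 5           # typical adult blood volume

filtered_na = GFR_L_PER_DAY * PLASMA_NA_MMOL_PER_L   # ~25,200 mmol/day filtered
aldo_dependent = 0.02 * filtered_na                  # ~500 mmol/day under aldosterone control

# Rough sodium content of the blood itself (treating whole blood at plasma
# concentration slightly overstates it, since red cells hold little sodium).
blood_na = BLOOD_VOLUME_L * PLASMA_NA_MMOL_PER_L     # ~700 mmol

print(f"aldosterone-dependent reabsorption: ~{aldo_dependent:.0f} mmol/day")
print(f"sodium content of blood:            ~{blood_na:.0f} mmol")
```

With these assumed values the two quantities come out within a factor of about 1.5 of each other, consistent with the comparison made above.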
Aldosterone, probably acting through mineralocorticoid receptors, may positively influence neurogenesis in the dentate gyrus.
Mineralocorticoid receptors
Steroid receptors are intracellular since steroid hormones are able to cross the cell membrane without a specific transporter. The aldosterone mineralocorticoid receptor (MR) complex binds on the DNA to specific hormone response element, which leads to gene specific transcription.
Some of the transcribed genes are crucial for transepithelial sodium transport, including the three subunits of the epithelial sodium channel (ENaC), the Na+/K+ pumps and their regulatory proteins serum and glucocorticoid-induced kinase, and channel-inducing factor, respectively.
The MR is stimulated by both aldosterone and cortisol, but a mechanism protects the body from excess aldosterone receptor stimulation by glucocorticoids (such as cortisol), which happen to be present at much higher concentrations than mineralocorticoids in the healthy individual. The mechanism consists of an enzyme called 11 β-hydroxysteroid dehydrogenase (11β-HSD). This enzyme co-localizes with intracellular adrenal steroid receptors and converts cortisol into cortisone, a relatively inactive metabolite with little affinity for the MR. Liquorice, which contains glycyrrhetinic acid, can inhibit 11β-HSD and lead to a mineralocorticoid excess syndrome.
Control of aldosterone release from the adrenal cortex
Major regulators
The role of the renin–angiotensin system
Angiotensin is involved in regulating aldosterone and is central to its regulation. Angiotensin II acts synergistically with potassium, and the potassium feedback is virtually inoperative when no angiotensin II is present. A small portion of the regulation resulting from angiotensin II takes place indirectly, through decreased blood flow in the liver due to constriction of capillaries: when blood flow decreases, so does the destruction of aldosterone by liver enzymes.
Although sustained production of aldosterone requires persistent calcium entry through low-voltage-activated Ca2+ channels, isolated zona glomerulosa cells are considered nonexcitable, with recorded membrane voltages that are too hyperpolarized to permit Ca2+ entry through these channels. However, mouse zona glomerulosa cells within adrenal slices spontaneously generate membrane potential oscillations of low periodicity; this innate electrical excitability provides a platform for the production of a recurrent Ca2+ signal that can be controlled by angiotensin II and extracellular potassium, the two major regulators of aldosterone production. Voltage-gated Ca2+ channels have been detected in the zona glomerulosa of the human adrenal, which suggests that Ca2+ channel blockers may directly influence the adrenocortical biosynthesis of aldosterone in vivo.
The plasma concentration of potassium
The amount of plasma renin secreted is an indirect function of the serum potassium as probably determined by sensors in the carotid artery.
Adrenocorticotropic hormone
Adrenocorticotropic hormone (ACTH), a pituitary peptide, also has some stimulating effect on aldosterone, probably by stimulating the formation of deoxycorticosterone, a precursor of aldosterone. Aldosterone is increased by blood loss, pregnancy, and possibly by further circumstances such as physical exertion, endotoxin shock, and burns.
Miscellaneous regulators
The role of sympathetic nerves
Aldosterone production is also affected to some extent by nervous control, which integrates the inverse of carotid artery pressure, pain, posture, and probably emotion (anxiety, fear, and hostility), including surgical stress. Anxiety increases aldosterone, a response that may have evolved because of the time delay involved in the migration of aldosterone into the cell nucleus: there is an advantage in an animal anticipating a future need before an interaction with a predator, since too high a serum potassium concentration has very adverse effects on nervous transmission.
The role of baroreceptors
Pressure-sensitive baroreceptors are found in the vessel walls of nearly all large arteries in the thorax and neck, but are particularly plentiful in the sinuses of the carotid arteries and in the arch of the aorta. These specialized receptors are sensitive to changes in mean arterial pressure. An increase in sensed pressure results in an increased rate of firing by the baroreceptors and a negative feedback response, lowering systemic arterial pressure. Aldosterone release causes sodium and water retention, which causes increased blood volume, and a subsequent increase in blood pressure, which is sensed by the baroreceptors. To maintain normal homeostasis these receptors also detect low blood pressure or low blood volume, causing aldosterone to be released. This results in sodium retention in the kidney, leading to water retention and increased blood volume.
The plasma concentration of sodium
Aldosterone levels vary as an inverse function of sodium intake as sensed via osmotic pressure. The slope of the response of aldosterone to serum potassium is almost independent of sodium intake. Aldosterone is increased at low sodium intakes, but the rate of increase of plasma aldosterone as potassium rises in the serum is not much lower at high sodium intakes than it is at low. Thus, potassium is strongly regulated at all sodium intakes by aldosterone when the supply of potassium is adequate, which it usually is in hunter-gatherer diets.
Aldosterone feedback
Feedback by the aldosterone concentration itself is nonmorphological in character (that is, it does not involve changes in cell number or structure) and is weak, so in the short term the electrolyte feedbacks predominate.
Associated clinical conditions
Hyperaldosteronism is abnormally increased levels of aldosterone, while hypoaldosteronism is abnormally decreased levels of aldosterone.
A measurement of aldosterone in blood may be termed a plasma aldosterone concentration (PAC), which may be compared to plasma renin activity (PRA) as an aldosterone-to-renin ratio.
Hyperaldosteronism
Primary aldosteronism, also known as primary hyperaldosteronism, is characterized by the overproduction of aldosterone by the adrenal glands, when not a result of excessive renin secretion. It leads to arterial hypertension (high blood pressure) associated with hypokalemia, usually a diagnostic clue. Secondary hyperaldosteronism, on the other hand, is due to overactivity of the renin–angiotensin system.
Conn's syndrome is primary hyperaldosteronism caused by an aldosterone-producing adenoma.
Depending on cause and other factors, hyperaldosteronism can be treated by surgery and/or medically, such as by aldosterone antagonists.
The aldosterone-to-renin ratio is an effective screening test for primary hyperaldosteronism related to adrenal adenomas. It is the most sensitive serum blood test for differentiating primary from secondary causes of hyperaldosteronism. Blood samples obtained after the patient has been standing for more than two hours are more sensitive than those drawn while the patient is lying down. Before the test, individuals should not restrict salt intake, and low potassium should be corrected, because hypokalemia can suppress aldosterone secretion.
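As a sketch of how the screening ratio is computed, the values below are hypothetical placeholders; real-world units and decision thresholds vary between laboratories and guidelines and are not specified here.

```python
# Minimal sketch of the aldosterone-to-renin ratio (ARR) screening calculation.
# The input values are hypothetical placeholders; units and cut-offs depend on
# the laboratory and guideline in use.
def aldosterone_renin_ratio(pac: float, pra: float) -> float:
    """ARR = plasma aldosterone concentration / plasma renin activity."""
    if pra <= 0:
        raise ValueError("plasma renin activity must be positive")
    return pac / pra

# Hypothetical example: a high PAC combined with suppressed PRA gives a high
# ratio, the pattern that prompts confirmatory testing for primary hyperaldosteronism.
print(aldosterone_renin_ratio(pac=25.0, pra=0.5))  # -> 50.0
```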
Hypoaldosteronism
An ACTH stimulation test for aldosterone can help in determining the cause of hypoaldosteronism: a low aldosterone response indicates a primary hypoaldosteronism of the adrenals, while a large response indicates a secondary hypoaldosteronism.
The most common cause of this condition (and related symptoms) is Addison's disease; it is typically treated with fludrocortisone, which persists much longer in the bloodstream (about one day) than aldosterone itself.
Additional images
| Biology and health sciences | Animal hormones | Biology |
375136 | https://en.wikipedia.org/wiki/Glyceraldehyde | Glyceraldehyde | Glyceraldehyde (glyceral) is a triose monosaccharide with chemical formula C3H6O3. It is the simplest of all common aldoses. It is a sweet, colorless, crystalline solid that is an intermediate compound in carbohydrate metabolism. The word comes from combining glycerol and aldehyde, as glyceraldehyde is glycerol with one alcohol group oxidized to an aldehyde.
Structure
Glyceraldehyde has one chiral center and therefore exists as two different enantiomers with opposite optical rotation:
In the D/L nomenclature, either D, from the Latin dexter meaning "right", or L, from the Latin laevus meaning "left"
In the R/S nomenclature, either R, from the Latin rectus meaning "right", or S, from the Latin sinister meaning "left"
While the optical rotation of glyceraldehyde is (+) for R and (−) for S, this is not true for all monosaccharides. The stereochemical configuration can only be determined from the chemical structure, whereas the optical rotation can only be determined empirically (by experiment).
It was by a lucky guess that the correct absolute configuration was assigned to (+)-glyceraldehyde in the late 19th century, as confirmed by X-ray crystallography in 1951.
Nomenclature
In the D/L system, glyceraldehyde is used as the configurational standard for carbohydrates. Monosaccharides with an absolute configuration identical to (R)-glyceraldehyde at the last stereocentre, for example C5 in glucose, are assigned the stereo-descriptor D. Those similar to (S)-glyceraldehyde are assigned an L.
Chemical synthesis
Glyceraldehyde can be prepared, along with dihydroxyacetone, by the mild oxidation of glycerol, for example with hydrogen peroxide and a ferrous salt as catalyst.
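Assuming a clean oxidation of one primary alcohol group to the aldehyde (a simplified stoichiometry for illustration; as noted above, the same conditions also yield dihydroxyacetone), the overall reaction with hydrogen peroxide can be written as:

```latex
% Simplified overall stoichiometry (assumes selective oxidation to the aldehyde)
\mathrm{C_3H_8O_3} + \mathrm{H_2O_2}
  \xrightarrow{\ \mathrm{Fe^{2+}}\ (\text{cat.})\ }
  \mathrm{C_3H_6O_3} + 2\,\mathrm{H_2O}
```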
Its cyclohexylidene acetal can also be produced by oxidative cleavage of the bis(acetal) of mannitol.
Biochemistry
The enzyme glycerol dehydrogenase (NADP+) has two substrates, glycerol and NADP+, and three products, D-glyceraldehyde, NADPH and H+.
The interconversion of the phosphates of glyceraldehyde (glyceraldehyde 3-phosphate) and dihydroxyacetone (dihydroxyacetone phosphate), catalyzed by the enzyme triosephosphate isomerase, is an intermediate step in glycolysis.
| Biology and health sciences | Carbohydrates | Biology |
375247 | https://en.wikipedia.org/wiki/Serpentine%20subgroup | Serpentine subgroup | The serpentine subgroup (part of the kaolinite-serpentine group in the category of phyllosilicates) comprises greenish, brownish, or spotted minerals commonly found in serpentinite. They are used as a source of magnesium and asbestos, and as decorative stone. The name comes from the greenish color and smooth or scaly appearance, from the Latin serpentinus, meaning "snake-like".
The serpentine subgroup is a set of common rock-forming hydrous magnesium iron phyllosilicate ((Mg,Fe)3Si2O5(OH)4) minerals, resulting from the metamorphism of the minerals that are contained in mafic to ultramafic rocks. They may contain minor amounts of other elements including chromium, manganese, cobalt or nickel. In mineralogy and gemology, serpentine may refer to any of the 20 varieties belonging to the serpentine subgroup. Owing to admixture, these varieties are not always easy to distinguish, and distinctions are not usually made. There are three important mineral polymorphs of serpentine: antigorite, lizardite and chrysotile.
Serpentine minerals are polymorphous, meaning that they have the same chemical formulae, but the atoms are arranged into different structures, or crystal lattices. Chrysotile, which has a fibrous habit, is one polymorph of serpentine and is one of the more important asbestos minerals. Other polymorphs in the serpentine subgroup may have a platy habit. Antigorite and lizardite are the polymorphs with platy habit.
Many types of serpentine have been used for jewelry and hardstone carving, sometimes under the name "false jade" or "Teton jade".
Properties and structure
Most serpentines are opaque to translucent, light (specific gravity between 2.2 and 2.9), soft (hardness 2.5–4), infusible and susceptible to acids. All are microcrystalline and massive in habit, never being found as single crystals. Lustre may be vitreous, silky or greasy. Colors range from white to grey, yellow to green, and brown to black, and are often splotchy or veined. Many are intergrown with other minerals, such as calcite and dolomite.
The basic structural unit of serpentine is a polar layer 0.72 nm thick. A Mg-rich trioctahedral sheet is tightly linked on one side to a single tetrahedral silicate sheet, despite the 3–5% larger lateral lattice dimensions of the octahedral sheet. The second level of structure, which distinguishes the different serpentine species, arises partly to compensate for the intra-layer stress caused by this dimensional misfit. Good compensation results in a nearly constant layer curvature, with the larger octahedral sheet on the convex side. However, such curvature weakens the H-bonding between the layers. H-bonding tends to keep the layers flat, but this competes with the requirements of misfit compensation. As a result, the layers are locally either curved or flat. Antigorite, lizardite and chrysotile have the same chemical composition, but their different layer curvatures result in lamellar, agglomerated antigorite and lizardite particles and in elongated, fibrous chrysotile particles.
Occurrence
Serpentine minerals are ubiquitous in many geological systems where hydrothermal alteration of ultramafic rocks is possible, in both terrestrial (oceanic hydrothermalism, subduction zones and transform faulting) and extraterrestrial environments. The process of alteration from mafic minerals to serpentine group minerals is called serpentinization. Serpentine minerals are often formed by the hydration of olivine-rich ultramafic rocks at relatively low temperatures (0 to ~600 °C). The chemical reaction turns olivine into serpentine minerals. They may also have their origins in metamorphic alterations of peridotite and pyroxene. Serpentines may also pseudomorphously replace other magnesium silicates. Incomplete alteration causes the physical properties of serpentines to vary widely.
Antigorite is the polymorph of serpentine that most commonly forms during metamorphism of wet ultramafic rocks and is stable at the highest temperatures—to over at depths of or so. In contrast, lizardite and chrysotile typically form near the Earth's surface and break down at relatively low temperatures, probably well below . It has been suggested that chrysotile is never stable relative to either of the other two serpentine polymorphs.
Samples of the oceanic crust and uppermost mantle from ocean basins document that ultramafic rocks there commonly contain abundant serpentine. Antigorite contains water in its structure, about 13 percent by weight. Hence, antigorite may play an important role in the transport of water into the earth in subduction zones and in the subsequent release of water to create magmas in island arcs, and some of the water may be carried to yet greater depths.
Occurrence is worldwide, notable localities include New Caledonia, Canada (Quebec), US (northern California, Rhode Island, Connecticut, Massachusetts, Maryland and southern Pennsylvania), Afghanistan, Britain (the Lizard peninsula in Cornwall), Ireland, Greece (Thessaly), China, Russia (Ural Mountains), France, Korea, Austria (Styria and Carinthia), India (Assam, and Manipur), Myanmar (Burma), New Zealand, Norway and Italy.
Uses
Serpentines find use in industry for several purposes, such as railway ballast and building materials, and the asbestiform types find use as thermal and electrical insulation (chrysotile asbestos). The asbestos content can be released into the air when serpentine is excavated or used as a road surface, creating a long-term health hazard from inhalation. Asbestos from serpentine can also appear at low levels in water supplies through normal weathering processes, but there is as yet no fully proven health hazard associated with use or ingestion, although the EPA states that an increased risk of developing benign intestinal polyps can occur. In its natural state, some forms of serpentine react with carbon dioxide and re-release oxygen into the atmosphere.
The more attractive and durable varieties (all of them antigorite) are termed "noble" or "precious" serpentine and are used extensively as gems and in ornamental carvings. The town of Bhera in the historic Punjab province of the Indian subcontinent was known for centuries for finishing a relatively pure form of green serpentine obtained from quarries in Afghanistan into lapidary work, cups, ornamental sword hilts, and dagger handles. This high-grade serpentine ore was known as in Persian, or 'false jade' in English, and was used for generations by Indian craftsmen for lapidary work. It is easily carved, takes a good polish, and is said to have a pleasingly greasy feel. Less valuable serpentine ores of varying hardness and clarity are also sometimes dyed to imitate jade. Misleading synonyms for this material include "Suzhou jade", "Styrian jade", and "New jade".
New Caledonian serpentine is particularly rich in nickel. The Māori of New Zealand once carved beautiful objects from local serpentine, which they called , meaning "tears".
The of the Romans, now known as verde antique, or verde antic, is a serpentinite breccia popular as a decorative facing stone. In classical times it was mined at Casambala, Thessaly, Greece. Serpentinite marbles are also widely used: Green Connemara marble (or 'Irish green marble') from Connemara, Ireland (and many other sources), and red from Italy. Use is limited to indoor settings as serpentinites do not weather well.
Potential harm
Soils derived from serpentine are toxic to many plants, because of high levels of nickel, chromium, and cobalt; growth of many plants is also inhibited by low levels of potassium and phosphorus and a low ratio of calcium/magnesium. The flora is generally very distinctive, with specialized, slow-growing species. Areas of serpentine-derived soil will show as strips of shrubland and open, scattered small trees (often conifers) within otherwise forested areas; these areas are called serpentine barrens.
Antigorite variety
Lamellated antigorite occurs in tough, pleated masses. It is usually dark green, but may also be yellowish, gray, brown or black. It has a hardness of 3.5–4 and its luster is greasy. The monoclinic crystals show micaceous cleavage and fuse with difficulty. Antigorite is named after its type locality, the Geisspfad serpentinite, Valle Antigorio in the border region of Italy/Switzerland.
Bowenite
Bowenite, a variety of antigorite, is an especially hard serpentine (5.5) of light to dark apple green color, often mottled with cloudy white patches and darker veining. It is the serpentine most frequently encountered in carving and jewelry. The name 'retinalite' is sometimes applied to yellow bowenite. The New Zealand material is called .
Although not an official species, bowenite is the state mineral of Rhode Island, United States: this is also the variety's type locality. A bowenite cabochon featured as part of the "Our Mineral Heritage Brooch", was presented to U.S. First Lady Mrs. Lady Bird Johnson in 1967.
Williamsite is an American local varietal name for antigorite that is oil-green with black crystals of chromite or magnetite often included. Somewhat resembling fine jade, williamsite is cut into cabochons and beads. It is found mainly in Maryland and Pennsylvania.
Gymnite
Gymnite is an amorphous form of antigorite. It was originally found in the Bare Hills of Maryland, and is named from the Greek gymnos, meaning "bare" or "naked".
State emblem
In 1965, the California Legislature designated the mineral serpentine as "the official State Rock and lithologic emblem".
Gallery
| Physical sciences | Silicate minerals | Earth science |