Chitin (https://en.wikipedia.org/wiki/Chitin)

Chitin (C8H13O5N)n is a long-chain polymer of N-acetylglucosamine, an amide derivative of glucose. Chitin is the second most abundant polysaccharide in nature (behind only cellulose); an estimated 1 billion tons of chitin are produced each year in the biosphere. It is a primary component of the cell walls of fungi (especially filamentous and mushroom-forming fungi), of the exoskeletons of arthropods such as crustaceans and insects, and of the radulae, cephalopod beaks and gladii of molluscs; it also occurs in some nematodes and diatoms.
It is also synthesised by at least some fish and lissamphibians. Commercially, chitin is extracted from the shells of crabs, shrimps, shellfish and lobsters, which are major by-products of the seafood industry. The structure of chitin is comparable to cellulose, forming crystalline nanofibrils or whiskers. It is functionally comparable to the protein keratin. Chitin has proved useful for several medicinal, industrial and biotechnological purposes.
Etymology
The English word "chitin" comes from the French word chitine, which was derived in 1821 from the Greek word χιτών (khitōn) meaning covering.
A similar word, "chiton", refers to a marine animal with a protective shell.
Chemistry, physical properties and biological function
The structure of chitin was determined by Albert Hofmann in 1929. Hofmann hydrolyzed chitin using a crude preparation of the enzyme chitinase, which he obtained from the snail Helix pomatia.
Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength.
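As a quick arithmetic check on the formula (C8H13O5N)n above, the molar mass of a single repeat unit can be computed from standard atomic masses. A minimal Python sketch (my own illustration; the atomic-mass constants are standard values, not figures from this article):

```python
# Molar mass of one chitin repeat unit, C8H13O5N (an N-acetylglucosamine
# residue after loss of water in the glycosidic linkage).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}  # g/mol

repeat_unit = {"C": 8, "H": 13, "O": 5, "N": 1}

molar_mass = sum(ATOMIC_MASS[el] * count for el, count in repeat_unit.items())
print(f"Repeat unit C8H13O5N: {molar_mass:.2f} g/mol")  # ~203.19 g/mol
```

Multiplying by the degree of polymerization n then gives the approximate mass of a whole chain.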
In its pure, unmodified form, chitin is translucent, pliable, resilient, and quite tough. In most arthropods, however, it is often modified, occurring largely as a component of composite materials, such as in sclerotin, a tanned proteinaceous matrix, which forms much of the exoskeleton of insects. Combined with calcium carbonate, as in the shells of crustaceans and molluscs, chitin produces a much stronger composite. This composite material is much harder and stiffer than pure chitin, and is tougher and less brittle than pure calcium carbonate. Another difference between pure and composite forms can be seen by comparing the flexible body wall of a caterpillar (mainly chitin) to the stiff, light elytron of a beetle (containing a large proportion of sclerotin).
In butterfly wing scales, chitin is organized into stacks of gyroids constructed of chitin photonic crystals that produce various iridescent colors serving phenotypic signaling and communication for mating and foraging. The elaborate chitin gyroid construction in butterfly wings creates a model of optical devices having potential for innovations in biomimicry. Scarab beetles in the genus Cyphochilus also utilize chitin to form extremely thin scales (five to fifteen micrometres thick) that diffusely reflect white light. These scales are networks of randomly ordered filaments of chitin with diameters on the scale of hundreds of nanometres, which serve to scatter light. The multiple scattering of light is thought to play a role in the unusual whiteness of the scales. In addition, some social wasps, such as Protopolybia chartergoides, orally secrete material containing predominantly chitin to reinforce the outer nest envelopes, composed of paper.
Chitosan is produced commercially by deacetylation of chitin by treatment with sodium hydroxide. Chitosan has a wide range of biomedical applications including wound healing, drug delivery and tissue engineering. Due to its dense intermolecular hydrogen-bonding network, chitin is very difficult to dissolve in water. Chitosan (with a degree of deacetylation of more than ~28%), on the other hand, can be dissolved in dilute acidic aqueous solutions below a pH of 6.0, such as acetic, formic and lactic acids. Chitosan with a degree of deacetylation greater than ~49% is soluble in water.
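The solubility thresholds quoted above can be restated as a small, purely illustrative Python function; the cutoffs simply echo the approximate figures in this paragraph (~28% and ~49% degrees of deacetylation, pH 6.0) and are not sharp chemical boundaries:

```python
def chitosan_solubility(deacetylation_pct: float, ph: float) -> str:
    """Illustrative classifier restating the approximate thresholds above."""
    if deacetylation_pct > 49:
        return "soluble in water"
    if deacetylation_pct > 28 and ph < 6.0:
        return "soluble in dilute acid (e.g. acetic, formic, lactic)"
    return "poorly soluble (chitin-like hydrogen bonding dominates)"

print(chitosan_solubility(85, 7.0))   # soluble in water
print(chitosan_solubility(40, 4.5))   # soluble in dilute acid
print(chitosan_solubility(10, 7.0))   # poorly soluble
```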
Humans and other mammals
Humans and other mammals have chitinase and chitinase-like proteins that can degrade chitin; they also possess several immune receptors that can recognize chitin and its degradation products, initiating an immune response.
Chitin is sensed mostly in the lungs or gastrointestinal tract where it can activate the innate immune system through eosinophils or macrophages, as well as an adaptive immune response through T helper cells. Keratinocytes in skin can also react to chitin or chitin fragments.
Plants
Plants also have receptors that can cause a response to chitin, namely chitin elicitor receptor kinase 1 and chitin elicitor-binding protein. The first chitin receptor was cloned in 2006. When the receptors are activated by chitin, genes related to plant defense are expressed, and jasmonate hormones are activated, which in turn activate systemic defenses. Commensal fungi have ways to interact with the host immune response that are not yet well understood.
Some pathogens produce chitin-binding proteins that mask the chitin they shed from these receptors. Zymoseptoria tritici is an example of a fungal pathogen that has such blocking proteins; it is a major pest in wheat crops.
Fossil record
Chitin was probably present in the exoskeletons of Cambrian arthropods such as trilobites. The oldest intact chitin samples reported thus far date to the Oligocene; they come from specimens encased in amber in which the chitin has not completely degraded.
Uses
Agriculture
Chitin is a good inducer of plant defense mechanisms for controlling diseases. It has potential for use as a soil fertilizer or conditioner, improving soil fertility and plant resilience in ways that may enhance crop yields.
Industrial
Chitin is used in many industrial processes. Examples of the potential uses of chemically modified chitin in food processing include the formation of edible films and as an additive to thicken and stabilize foods and food emulsions. Processes to size and strengthen paper employ chitin and chitosan.
Research
How chitin interacts with the immune systems of plants and animals has been an active area of research, including the identity of key receptors with which chitin interacts, whether the size of chitin particles is relevant to the kind of immune response triggered, and the mechanisms by which immune systems respond. Chitin is deacetylated chemically or enzymatically to produce chitosan, a highly biocompatible polymer which has found a wide range of applications in the biomedical industry. Chitin and chitosan have been explored as vaccine adjuvants due to their ability to stimulate an immune response.
Chitin and chitosan are under development as scaffolds in studies of how tissue grows and how wounds heal, and in efforts to invent better bandages, surgical thread, and materials for allotransplantation. Sutures made of chitin have been experimentally developed, but their lack of elasticity and problems making thread have prevented commercial success so far.
Chitosan has been demonstrated, and proposed, as the basis for a reproducible form of biodegradable plastic. Chitin nanofibers are extracted from crustacean waste and mushrooms for possible development of products in tissue engineering, drug delivery and medicine.
Chitin has been proposed for use in building structures, tools, and other solid objects from a composite material combining chitin with Martian regolith. In this proposal, the biopolymers in the chitin serve as the binder for the regolith aggregate, forming a concrete-like composite material. The authors believe that waste materials from food production (e.g. scales from fish, exoskeletons from crustaceans and insects, etc.) could be put to use as feedstock for manufacturing processes.
Guava (https://en.wikipedia.org/wiki/Guava)

Guava is a common tropical fruit cultivated in many tropical and subtropical regions. The common guava Psidium guajava (lemon guava, apple guava) is a small tree in the myrtle family (Myrtaceae), native to Mexico, Central America, the Caribbean and northern South America. The name guava is also given to some other species in the genus Psidium such as strawberry guava (Psidium cattleyanum) and to the pineapple guava, Feijoa sellowiana. In 2019, 55 million tonnes of guavas were produced worldwide, led by India with 45% of the total. Botanically, guavas are berries.
Etymology
The term guava appears to have been in use since the mid-16th century. The name derives from a Taíno (Arawakan) word for the guava tree, which entered English via the Spanish guayaba. It has been adapted in many European and Asian languages in a similar form.
Origin and distribution
Guavas originated from an area thought to extend from Mexico, Central America or northern South America throughout the Caribbean region. Archaeological sites in Peru yielded evidence of guava cultivation as early as 2500 BC.
Guava was adopted as a crop in subtropical and tropical Asia, parts of the United States (from Tennessee and North Carolina southward, as well as the West and Hawaii), tropical Africa, and Oceania. Guavas were introduced to Florida in the 19th century and are grown there as far north as Sarasota, Chipley, Waldo and Fort Pierce. However, they are a primary host of the Caribbean fruit fly and must be protected against infestation in areas of Florida where this pest is present.
Guavas are cultivated in several tropical and subtropical countries. Several species are grown commercially; apple guava and its cultivars are those most commonly traded internationally. Guavas also grow in southwestern Europe, specifically on the Costa del Sol in Málaga, Spain, and in Greece, where guavas have been commercially grown since the middle of the 20th century and proliferate as cultivars. Mature trees of most species are fairly cold-hardy and can survive temperatures slightly below freezing for short periods of time, but younger plants will likely freeze to the ground.
Guavas are of interest to home growers in subtropical areas as one of the few tropical fruits that can grow to fruiting size in pots indoors. When grown from seed, guava trees can bear fruit in two years, and can continue to do so for forty years.
Types
The most frequently eaten species, and the one often simply referred to as "the guava", is the apple guava (Psidium guajava). Guavas are typical Myrtoideae, with tough, dark, heavy leaves that are opposite, simple, and elliptic to ovate. The flowers are white, with five petals and numerous stamens. The fruits are many-seeded berries.
Ecology
Psidium species are eaten by the caterpillars of some Lepidoptera, mainly moths like the Ello Sphinx (Erinnyis ello), Eupseudosoma aberrans, E. involutum, and Hypercompe icasia. Mites, like Pronematus pruni and Tydeus munsteri, are known to be crop pests of the apple guava (P. guajava) and perhaps other species. The bacterium Erwinia psidii causes rot diseases of the apple guava.
The fruit is cultivated and favored by humans, and many other animals such as birds consume it, readily dispersing the seeds in their droppings. In Hawaii, strawberry guava (P. littorale) has become an aggressive invasive species threatening extinction to more than 100 other plant species. By contrast, several guava species have become rare due to habitat destruction and at least one (Jamaican guava, P. dumetorum), is already extinct.
Guava wood is used for meat smoking in Hawaii, and is used at barbecue competitions across the United States. In Cuba and Mexico, the leaves are used in barbecues.
Fruit
Guava fruits are round or oval depending on the species. They have a pronounced and typical fragrance, similar to lemon rind but less sharp. The outer skin may be rough, often with a bitter taste, or soft and sweet. Varying between species, the skin can be any thickness, is usually green before maturity, but may be yellow, maroon, or green when ripe. The pulp inside may be sweet or sour and off-white ("white" guavas) to deep pink ("red" guavas). The seeds in the central pulp vary in number and hardness, depending on species.
Production
In 2022, world production of guavas was 59 million tonnes, led by India with 44% of the total (a figure that includes mangoes and mangosteens in the same reporting category). Secondary producers were Indonesia and China.
Uses
Culinary
In Mexico and other Latin American countries, the beverage agua fresca is often made with guava. The entire fruit is a key ingredient in punch, and the juice is often used in culinary sauces (hot or cold), ales, candies, dried snacks, fruit bars, and desserts, or dipped in chamoy. Pulque de guayaba ("guayaba" is Spanish for guava) is a common alcoholic beverage in these regions.
In many countries, guava is eaten raw, typically cut into quarters or eaten like an apple; it is also eaten with a pinch of salt and pepper, cayenne powder or a mix of spices (masala). In the Philippines, ripe guava is used in cooking sinigang. Guava is a snack in Cuba as pastelitos de guayaba; and in Taiwan, sold on many street corners and night markets during hot weather, accompanied by packets of dried plum powder mixed with sugar and salt for dipping. In east Asia, guava is commonly eaten with sweet and sour dried plum powder mixtures. Guava juice is consumed in many countries. The fruit is also often included in fruit salads.
Because of their high level of pectin, guavas are extensively used to make candies, preserves, jellies, jams, and marmalades (such as Brazilian goiabada and Colombian and Venezuelan bocadillo), and guava jam is served on toast.
Red guavas can be used as the base of salted products such as sauces, substituting for tomatoes, especially to minimize the acidity. A drink may be made from an infusion of guava fruits and leaves, which in Brazil is called chá-de-goiabeira, i.e., "tea" of guava tree leaves.
Nutrition
A raw common guava is 81% water, 14% carbohydrates, 3% protein, and 0.5% fat. In a reference amount of 100 g, raw guava supplies 68 calories and is a rich source of dietary fiber and vitamin C (275% of the Daily Value, DV), with moderate levels of folic acid (12% DV) and potassium (14% DV). Raw guava also contains lycopene.
Phytochemicals
Guava leaves contain both carotenoids and polyphenols, such as (+)-gallocatechin and leucocyanidin. Because some of these phytochemicals give the fruit its skin and flesh color, red-orange guavas tend to have more polyphenol and carotenoid content than yellow-green ones.
Seed oil
Guava seed oil may be used for culinary or cosmetics products. It is rich in linoleic acid.
Folk medicine
Since the 1950s, guavas – particularly the leaves – have been studied for their constituents, potential biological properties and history in folk medicine.
Parasites
Guavas are one of the most common hosts for fruit flies like Anastrepha suspensa, which lay their eggs in overripe or spoiled guavas. The larvae of these flies then consume the fruit until they can proceed into the pupal stage. This parasitism has led to millions of dollars in economic losses for nations in Central America.
The fungal pathogens Neopestalotiopsis and Pestalotiopsis species are causal agents of guava scab in Colombia.
Propagation
Air layering is an effective method for propagating guava plants. It allows for the production of new plants while maintaining the parent plant's characteristics. This technique involves selecting a healthy branch, making a small incision on the branch, and applying rooting hormone to encourage root development. The branch is then wrapped in moist peat moss and covered with plastic to help retain moisture. After several weeks, roots form, and the new plant can be severed from the parent and transplanted into soil. This method is particularly beneficial for guava due to its high success rate and ability to produce fruit-bearing plants quickly.
Carding (https://en.wikipedia.org/wiki/Carding)

In textile production, carding is a mechanical process that disentangles, cleans and intermixes fibres to produce a continuous web or sliver suitable for subsequent processing. This is achieved by passing the fibres between differentially moving surfaces covered with "card clothing", a firm flexible material embedded with metal pins. It breaks up locks and unorganised clumps of fibre and then aligns the individual fibres to be parallel with each other. In preparing wool fibre for spinning, carding is the step that comes after teasing.
The word is derived from the Latin carduus, meaning thistle or teasel, as dried vegetable teasels were first used to comb the raw wool before technological advances led to the use of machines.
Overview
These ordered fibres can then be passed on to other processes that are specific to the desired end use of the fibre: cotton, batting, felt, woollen or worsted yarn, etc. Carding can also be used to create blends of different fibres or different colours. When blending, the carding process combines the different fibres into a homogeneous mix. Commercial cards also have rollers and systems designed to remove some vegetable matter contaminants from the wool.
Common to all carders is card clothing. Card clothing is made from a sturdy flexible backing in which closely spaced wire pins are embedded. The shape, length, diameter, and spacing of these wire pins are dictated by the card designer and the particular requirements of the application where the card cloth will be used. A later version of card clothing, developed during the latter half of the 19th century and found only on commercial carding machines, wrapped a single piece of serrated wire around a roller; it became known as metallic card clothing.
Carding machines are known as cards. Fibre may be carded by hand for hand spinning.
History
Science historian Joseph Needham ascribes the invention of bow-instruments used in textile technology to India. The earliest evidence for using bow-instruments for carding comes from India (2nd century CE). These carding devices, called kaman (bow) and dhunaki, would loosen the texture of the fibre by the means of a vibrating string.
At the turn of the eighteenth century, wool in England was being carded using pairs of hand cards, in a two-stage process: 'working' with the cards opposed and 'stripping' where they are in parallel.
In 1748 Lewis Paul of Birmingham, England, invented two hand driven carding machines. The first used a coat of wires on a flat table moved by foot pedals. This failed. On the second, a coat of wire slips was placed around a card which was then wrapped around a cylinder.
Daniel Bourn obtained a similar patent in the same year, and probably used it in his spinning mill at Leominster, but this burnt down in 1754. The invention was later developed and improved by Richard Arkwright and Samuel Crompton. Arkwright's second patent (of 1775) for his carding machine was subsequently declared invalid (1785) because it lacked originality.
From the 1780s, the carding machines were set up in mills in the north of England and mid-Wales. Priority was given to cotton but woollen fibres were being carded in Yorkshire in 1780. With woollen, two carding machines were used: the first or the scribbler opened and mixed the fibres, the second or the condenser mixed and formed the web. The first in Wales was in a factory at Dolobran near Meifod in 1789. These carding mills produced yarn particularly for the Welsh flannel industry.
In 1834 James Walton invented the first practical machines to use a wire card. He patented this machine and also a new form of card with layers of cloth and rubber. The combination of these two inventions became the standard for the carding industry, using machines first built by Parr, Curtis and Walton in Ancoats, and from 1857 by James Walton & Sons at Haughton Dale.
By 1838, the Spen Valley, centred on Cleckheaton, had at least 11 card clothing factories and by 1893, it was generally accepted as the card cloth capital of the world, though by 2008 only two manufacturers of metallic and flexible card clothing remained in England: Garnett Wire Ltd., dating from 1851, and Joseph Sellers & Son Ltd., established in 1840.
Baird from Scotland took carding to Leicester, Massachusetts, in the 1780s. In the 1890s, the town produced one-third of all hand and machine cards in North America. John and Arthur Slater, from Saddleworth, went over to work with Slater in 1793.
A 1780s scribbling mill would be driven by a water wheel. There were 170 scribbling mills around Leeds at that time. Each scribbler required considerable power to operate. Modern machines are driven by belting from an electric motor or an overhead shaft via two pulleys.
Cotton manufacturing processes
Carding: the fibres are separated and then assembled into a loose strand (sliver or tow) at the conclusion of this stage.
In a wider sense, carding can refer to the four processes of willowing, lapping, carding and drawing. In willowing the fibres are loosened. In lapping the dust is removed to create a flat sheet or lap of fibres. Carding itself is the combing of the tangled lap into a thick rope or sliver about 1/2 inch in diameter; the sliver can then optionally be combed, which removes the shorter fibres and creates a stronger yarn.
In drawing, a drawing frame combines four slivers into one. Repeated drawing increases the quality of the sliver, allowing finer counts to be spun. Each sliver will have thin and thick spots, and by combining several slivers together a more consistent size can be reached (see the sketch below). Since combining several slivers produces a very thick rope of cotton fibres, directly after being combined the slivers are separated into rovings. These rovings (or slubbings) are then what are used in the spinning process.
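Why combining slivers evens out thickness can be seen with a toy statistical model: if thickness variations in different slivers are independent, combining n slivers reduces the relative variation by roughly 1/sqrt(n). A minimal Python simulation (the numbers are invented for illustration, not measurements of real slivers):

```python
import random
import statistics

def cv(values):
    """Coefficient of variation: relative unevenness along a sliver."""
    return statistics.stdev(values) / statistics.mean(values)

random.seed(1)
points = 10_000
# Model each sliver's thickness at successive points as independent noise.
slivers = [[random.gauss(100, 10) for _ in range(points)] for _ in range(4)]

combined = [sum(s[i] for s in slivers) for i in range(points)]

print(f"CV of a single sliver:       {cv(slivers[0]):.3f}")  # ~0.100
print(f"CV of four slivers combined: {cv(combined):.3f}")    # ~0.050, i.e. halved
```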
For machine processing, a roving is about the width of a pencil. The rovings are collected in a drum and proceed to the slubbing frame, which adds twist and winds onto bobbins. Intermediate frames are used to repeat the slubbing process to produce a finer yarn, and then the roving frame reduces it to a finer thread, gives it more twist, makes it more regular and even in thickness, and winds it onto a smaller tube.
The carders used currently in woollen mills differ very little from machines used 20 to 50 years ago, and in some cases, the machines are from that era.
Machine carders vary in size from one that easily fits on the kitchen table to one that takes up a full room.
A carder that takes up a full room works very similarly, the main difference being that the fibre goes through many more drums often with intervening cross laying to even out the load on the subsequent cards, which normally get finer as the fibre progresses through the system.
When the fibre comes off the drum, it is in the form of a bat – a flat, orderly mass of fibres. If a small drum carder is being used, the bat is the length of the circumference of the big drum and is often the finished product. A big drum carder, though, will then take that bat and turn it into roving, by stretching it thinner and thinner, until it is the desired thickness (often rovings are the thickness of a wrist). (A rolag differs from a roving because it is not a continuous strand, and because the fibres end up going across instead of along the strand.) Cotton fibres are fed into the machine, picked up and brushed onto flats when carded.
Some hand-spinners have a small drum carder at home, especially for the purpose of mixing together different coloured fibres that are bought already carded.
Tools
Predating mechanised weaving, hand loom weaving was a cottage industry that used the same processes but on a smaller scale. These skills have survived as an artisan craft or as an art form and hobby.
Hand carders
Hand cards are typically square or rectangular paddles manufactured in a variety of sizes. The working face of each paddle can be flat or cylindrically curved and wears the card cloth. Small cards, called flick cards, are used to flick the ends of a lock of fibre, or to tease out some strands for spinning off.
A pair of cards is used to brush the wool between them until the fibres are more or less aligned in the same direction. The aligned fibre is then peeled from the card as a rolag. Carding is an activity normally done outside or over a drop cloth, depending on the wool's cleanliness.
This product (rovings, rolags, and batts) can be used for spinning.
Carding of wool can either be done "in the grease" or not, depending on the type of machine and on the spinner's preference. "In the grease" means that the lanolin that naturally comes with the wool has not been washed out, leaving the wool with a slightly greasy feel. The large drum carders do not tend to get along well with lanolin, so most commercial worsted and woollen mills wash the wool before carding. Hand carders (and small drum carders too, though the directions may not recommend it) can be used to card lanolin-rich wool.
Drum carders
The simplest machine carder is the drum carder. Most drum carders are hand-cranked, but some are powered by an electric motor. These machines generally have two rollers, or drums, covered with card clothing. The licker-in, or smaller roller, meters fibre from the infeed tray onto the larger storage drum. The two rollers are connected to each other by a belt or chain drive so that their relative speeds cause the storage drum to gently pull fibres from the licker-in (see the illustrative calculation below). This pulling straightens the fibres and lays them between the wire pins of the storage drum's card cloth. Fibre is added until the storage drum's card cloth is full. A gap in the card cloth facilitates removal of the batt when the card cloth is full.
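To make the role of relative speeds concrete: a drum's surface speed is v = π × diameter × rotation rate, and fibre is pulled toward the faster-moving surface. A small illustrative Python calculation; the diameters and rotation rates here are invented example values, not the specifications of any particular carder:

```python
import math

def surface_speed(diameter_m: float, rpm: float) -> float:
    """Surface speed of a rotating drum in metres per second."""
    return math.pi * diameter_m * rpm / 60.0

licker_in = surface_speed(diameter_m=0.07, rpm=200)  # hypothetical values
storage   = surface_speed(diameter_m=0.19, rpm=150)  # hypothetical values

print(f"licker-in surface speed:    {licker_in:.2f} m/s")        # ~0.73 m/s
print(f"storage drum surface speed: {storage:.2f} m/s")          # ~1.49 m/s
print(f"speed ratio (draft):        {storage / licker_in:.1f}")  # ~2.0
```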
Some drum carders have a soft-bristled brush attachment that presses the fibre into the storage drum. This attachment serves to condense the fibres already in the card cloth and adds a small amount of additional straightening to the condensed fibre.
Cottage carders
Cottage carding machines differ significantly from the simple drum card. These carders do not store fibre in the card cloth as the drum carder does; rather, fibre passes through the workings of the carder for storage or for additional processing by other machines.
A typical cottage carder has a single large drum (the swift) accompanied by a pair of in-feed rollers (nippers), one or more pairs of worker and stripper rollers, a fancy, and a doffer. In-feed to the carder is usually accomplished by hand or by conveyor belt, and the output of the cottage carder is often stored as a batt or further processed into roving and wound into bumps with an accessory bump winder; some cottage carders support both outputs.
Raw fibre, placed on the in-feed table or conveyor, is moved to the nippers, which restrain and meter the fibre onto the swift. As they are transferred to the swift, many of the fibres are straightened and laid into the swift's card cloth. These fibres will be carried past the worker and stripper rollers to the fancy.
As the swift carries the fibres forward, from the nippers, those fibres that are not yet straightened are picked up by a worker and carried over the top to its paired stripper. Relative to the surface speed of the swift, the worker turns quite slowly. This has the effect of reversing the fibre. The stripper, which turns at a higher speed than the worker, pulls fibres from the worker and passes them to the swift. The stripper's relative surface speed is slower than the swift's so the swift pulls the fibres from the stripper for additional straightening.
Straightened fibres are carried by the swift to the fancy. The fancy's card cloth is designed to engage with the swift's card cloth so that the fibres are lifted to the tips of the swift's card cloth and carried by the swift to the doffer. The fancy and the swift are the only rollers in the carding process that actually touch.
The slowly turning doffer removes the fibres from the swift and carries them to the fly comb where they are stripped from the doffer. A fine web of more or less parallel fibre, a few fibres thick and as wide as the carder's rollers, exits the carder at the fly comb by gravity or other mechanical means for storage or further processing.
Subaru Telescope (https://en.wikipedia.org/wiki/Subaru%20Telescope)

The Subaru Telescope is the 8.2-metre flagship telescope of the National Astronomical Observatory of Japan, located at the Mauna Kea Observatory on the island of Hawaii. It is named after the open star cluster known in English as the Pleiades. It had the largest monolithic primary mirror in the world from its commissioning until the Large Binocular Telescope opened in 2005.
Overview
The Subaru Telescope is a Ritchey–Chrétien reflecting telescope. Instruments can be mounted at a Cassegrain focus below the primary mirror; at either of two Nasmyth focal points in enclosures on the sides of the telescope mount, to which light can be directed with a tertiary mirror; or at the prime focus in lieu of a secondary mirror, an arrangement rare on large telescopes, to provide a wide field of view suited to deep wide-field surveys.
In 1984, the University of Tokyo formed an engineering working group to develop and study the concept of a telescope. In 1985, the astronomy committee of Japan's science council gave top priority to the development of a "Japan National Large Telescope" (JNLT), and in 1986, the University of Tokyo signed an agreement with the University of Hawaii to build the telescope in Hawaii. In 1988, the National Astronomical Observatory of Japan was formed through a reorganization of the University's Tokyo Astronomical Observatory, to oversee the JNLT and other large national astronomy projects.
Construction of the Subaru Telescope began in April 1991, and later that year, a public contest gave the telescope its official name, "Subaru Telescope". Construction was completed in 1998, and the first scientific images were taken in January 1999. In September 1999, Princess Sayako of Japan dedicated the telescope.
A number of state-of-the-art technologies were worked into the telescope design. For example, 261 computer-controlled actuators press the main mirror from underneath, which corrects for primary mirror distortion caused by changes in the telescope orientation. The telescope enclosure building is also shaped to improve the quality of astronomical images by minimizing the effects caused by atmospheric turbulence.
Subaru is one of the few state-of-the-art telescopes to have been used with the naked eye. For the dedication, an eyepiece was constructed so that Princess Sayako could look through it directly. It was enjoyed by the staff for a few nights until it was replaced with the much more sensitive working instruments.
Subaru is the primary tool in the search for Planet Nine. Its large field of view, 75 times that of the Keck telescopes, and strong light-gathering power are suited for deep wide-field sky surveys. The search, split between a research group led by Konstantin Batygin and Michael Brown and another led by Scott Sheppard and Chad Trujillo, is expected to take up to five years.
Accidents during construction
Two separate incidents claimed the lives of four workers during the construction of the telescope. On October 13, 1993, 42-year-old Paul F. Lawrence was fatally injured when a forklift tipped over onto him. On January 16, 1996, sparks from a welder ignited insulation which smoldered, generating noxious smoke that killed Marvin Arruda, 52, Ricky Del Rosario, 38, and Warren K. "Kip" Kaleo, 36, and sent twenty-six other workers to the hospital in Hilo. All four workers are memorialized by a plaque outside the base of the telescope dome and a sign posted temporarily each January along the Mauna Kea access road.
Mishap in 2011
On July 2, 2011, the telescope operator in Hilo noted an anomaly from the top unit of the telescope. Upon further examination, coolant from the top unit was found to have leaked over the primary mirror and other parts of the telescope.
Observation using the Nasmyth foci resumed on July 22, and observation at the Cassegrain focus resumed on August 26.
Mishap in 2023
On September 15, 2023, an abnormal reading from a load sensor at one of the primary mirror's fixed points was observed during a maintenance operational test. Later, a part fell onto the primary mirror during repair work on the mirror cover. Science observations were suspended.
After the sensor was replaced and the damage to the primary mirror repaired, the telescope returned to observation on 3 March 2024.
Instruments
Several cameras and spectrographs can be mounted at Subaru Telescope's four focal points for observations in visible and infrared wavelengths.
Multi-Object Infrared Camera and Spectrograph (MOIRCS) Wide-field camera and spectrograph with the ability to take spectra of multiple objects simultaneously, mounts at the Cassegrain focus.
Infrared Camera and Spectrograph (IRCS) Used in conjunction with the new 188-element adaptive optics unit (AO188), mounted at the infrared Nasmyth focus.
Cooled Mid Infrared Camera and Spectrometer (COMICS) Mid-infrared camera and spectrometer with the ability to study cool interstellar dust, mounts on the Cassegrain focus. Decommissioned in 2020.
Faint Object Camera And Spectrograph (FOCAS) Visible-light camera and spectrograph with the ability to take spectra of up to 100 objects simultaneously, mounts on the Cassegrain focus.
Subaru Prime Focus Camera (Suprime-Cam) 80-megapixel wide-field visible-light camera, mounts at the prime focus. Superseded by the Hyper Suprime-Cam in 2012, decommissioned in May 2017.
High Dispersion Spectrograph (HDS) Visible-light spectrograph mounted at the optical Nasmyth focus.
Fiber Multi Object Spectrograph (FMOS) Infrared spectrograph using movable fiber optics to take spectra of up to 400 objects simultaneously. Mounts at the prime focus.
High-Contrast Coronographic Imager for Adaptive Optics (HiCIAO) Infrared camera for hunting planets around other stars. Used with AO188, mounted at the infrared Nasmyth focus.
Hyper Suprime-Cam (HSC) This 900-megapixel ultra-wide-field (1.5° field of view) camera saw first light in 2012 and was offered for open use in 2014. Its extremely large wide-field correction optics (a seven-element lens with some elements up to a meter in diameter) were manufactured by Canon and delivered March 29, 2011. It is used for surveys of weak lensing to determine the distribution of dark matter.
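As a back-of-envelope consistency check (my own arithmetic, not a figure from this article), the quoted 1.5° field diameter and 900-megapixel detector together imply an average pixel scale of roughly 0.16 arcseconds per pixel, the right order for a seeing-limited survey camera:

```python
import math

# Back-of-envelope average pixel scale for Hyper Suprime-Cam, using only
# the figures quoted above: a 1.5-degree field-of-view diameter and
# roughly 900 million pixels, assuming a circular, fully tiled field.
fov_diameter_arcsec = 1.5 * 3600
n_pixels = 900e6

fov_area_arcsec2 = math.pi * (fov_diameter_arcsec / 2) ** 2
pixel_scale = math.sqrt(fov_area_arcsec2 / n_pixels)  # arcsec per pixel

print(f"Average pixel scale: {pixel_scale:.2f} arcsec/pixel")  # ~0.16
```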
Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument is a high-contrast imaging system for directly imaging exoplanets. The coronagraph uses a Phase Induced Amplitude Apodization (PIAA) design, which means it is able to image planets closer to their stars than conventional Lyot-type coronagraph designs. For example, at a distance of 100 pc, the PIAA coronagraph on SCExAO would be able to image from 4 AU outwards, while Gemini Planet Imager and VLT-SPHERE image from 12 AU outwards. The system also has several other types of coronagraph: Vortex, Four-Quadrant Phase Mask and 8-Octant Phase Mask versions, and a shaped pupil coronagraph. Phase I of construction is complete, and phase II construction was to be complete by the end of 2014 for science operations in 2015. SCExAO was to use the HiCIAO camera initially, with a planned replacement by CHARIS, an integral field spectrograph, around 2016.
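The 4 AU and 12 AU comparison above maps onto on-sky angles through the small-angle relation θ[arcsec] = a[AU] / d[pc], since 1 AU at 1 pc subtends exactly 1 arcsecond by the definition of the parsec. A minimal sketch converting those separations at the 100 pc distance used above (the resulting angle values are my own conversion, not figures from the article):

```python
def angular_separation_arcsec(separation_au: float, distance_pc: float) -> float:
    """Small-angle rule: 1 AU at 1 pc subtends exactly 1 arcsecond."""
    return separation_au / distance_pc

d = 100.0  # parsecs, the distance used in the comparison above
print(f"SCExAO PIAA inner limit:      {angular_separation_arcsec(4, d):.2f} arcsec")   # 0.04
print(f"GPI / VLT-SPHERE inner limit: {angular_separation_arcsec(12, d):.2f} arcsec")  # 0.12
```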
Bobcat (https://en.wikipedia.org/wiki/Bobcat)

The bobcat (Lynx rufus), also known as the wildcat, bay lynx, or red lynx, is one of the four extant species within the medium-sized wild cat genus Lynx. Native to North America, it ranges from southern Canada through most of the contiguous United States to Oaxaca in Mexico. It has been listed as Least Concern on the IUCN Red List since 2002, due to its wide distribution and large population. Although it has been hunted extensively both for sport and fur, populations have proven stable, though declining in some areas.
It has distinctive black bars on its forelegs and a black-tipped, stubby (or "bobbed") tail, from which it derives its name. It is an adaptable predator inhabiting wooded areas, semidesert, urban edge, forest edge, and swampland environments. It remains in some of its original range, but populations are vulnerable to extirpation by coyotes and domestic animals. Though the bobcat prefers rabbits and hares, it hunts insects, chickens, geese and other birds, small rodents, and deer. Prey selection depends on location and habitat, season, and abundance. Like most cats, the bobcat is territorial and largely solitary, although with some overlap in home ranges. It uses several methods to mark its territorial boundaries, including claw marks and deposits of urine or feces. The bobcat breeds from winter into spring and has a gestation period of about two months.
Two subspecies are recognized: one east of the Great Plains, and the other west of the Great Plains. It is featured in some stories of the indigenous peoples of North and Central America, and in the folklore of European-descended inhabitants of the Americas.
Taxonomy and evolution
Felis rufa was the scientific name proposed by Johann Christian Daniel von Schreber in 1777. In the 19th and 20th centuries, the following zoological specimens were described:
Lynx floridanus proposed by Constantine Samuel Rafinesque in 1817 was a greyish lynx with yellowish brown spots from Florida.
Lynx fasciatus also proposed by Rafinesque in 1817 was a reddish brown lynx with a thick fur from the northwest coast.
Lynx baileyi proposed by Clinton Hart Merriam in 1890 was a female lynx that was shot in the San Francisco Mountains.
Lynx texensis proposed by Joel Asaph Allen in 1895 to replace the earlier name Lynx rufus var. maculatus.
Lynx gigas proposed by Outram Bangs in 1897 was a skin of an adult male lynx shot near Bear River, Nova Scotia.
Lynx rufus eremicus and Lynx rufus californicus proposed by Edgar Alexander Mearns in 1898 were skins and skulls of two adult lynxes killed in San Diego County, California.
Lynx rufus peninsularis proposed by Oldfield Thomas in 1898 was a skull and a pale rufous skin of a male lynx from Baja California Peninsula.
Lynx fasciatus pallescens proposed by Merriam in 1899, was a skin of a gray lynx that was killed near Trout Lake, Washington.
Lynx ruffus escuinapae proposed by Allen in 1903 was a skull and a pale rufous skin of an adult female from Escuinapa Municipality in Mexico.
Lynx rufus superiorensis by Randolph Lee Peterson and Stuart C. Downing in 1952 was a skeleton and skin of a male lynx killed near Port Arthur, Ontario.
Lynx rufus oaxacensis proposed by George Goodwin in 1963 was based on three skulls and six skins of lynxes killed in the Mexican Tehuantepec District.
The validity of these subspecies was challenged in 1981 because of the minor differences between specimens from the various geographic regions in North America.
Since the revision of cat taxonomy in 2017, only two subspecies are recognized as valid taxa:
L. r. rufus – east of the Great Plains
L. r. fasciatus – west of the Great Plains
Phylogeny
The genus Lynx shares a clade with the genera Puma, Prionailurus and Felis.
The bobcat is thought to have evolved from the Eurasian lynx (L. lynx), which crossed into North America by way of the Bering Land Bridge during the Pleistocene, with progenitors arriving as early as 2.6 million years ago. It first appeared during the Irvingtonian stage. The first bobcat wave moved into the southern portion of North America, which was soon cut off from the north by glaciers; the population evolved into the modern bobcat around 20,000 years ago. A second population arrived from Asia and settled in the north, developing into the modern Canada lynx (L. canadensis). Hybridization between the bobcat and the Canada lynx may sometimes occur.
The populations east and west of the Great Plains were probably separated during Pleistocene interglacial periods due to the aridification of the region.
Description
The bobcat resembles other species of the midsize genus Lynx, but is on average the smallest of the four. Its coat is variable, though generally tan to grayish-brown, with black streaks on the body and dark bars on the forelegs and tail. Its spotted patterning acts as camouflage. The ears are black-tipped and pointed, with short, black tufts. Generally, an off-white color is seen on the lips, chin, and underparts. Bobcats in the desert regions of the southwest have the lightest-colored coats, while those in the northern, forested regions are darkest. Kittens are born well-furred and already have their spots. A few melanistic bobcats have been sighted and captured in Florida, USA and New Brunswick, Canada. They appear black, but may still exhibit a spot pattern.
The face appears wide due to ruffs of extended hair beneath the ears. Bobcat eyes are yellow with round, black pupils. The nose of the bobcat is pinkish-red, and it has a base color of gray or yellowish- or brownish-red on its face, sides, and back. The pupils widen during nocturnal activity to maximize light reception. The bobcat has sharp hearing and vision, and a good sense of smell. It is an excellent climber and swims when it needs to, but normally avoids water.
The adult bobcat's length is measured from the head to the base of its distinctive stubby tail, whose "bobbed" appearance gives the species its name.
Adult males are on average larger and heavier than females. Unverified reports, including a June 20, 2012, account of a New Hampshire roadkill specimen, describe individuals larger than the heaviest accurately measured bobcat on record. The largest-bodied bobcats were recorded in eastern Canada and northern New England, and the smallest in the southern Appalachian Mountains.
Consistent with Bergmann's rule, the bobcat is larger in its northern range and in open habitats. A morphological size comparison study in the eastern United States found a divergence in the location of the largest male and female specimens, suggesting differing selection constraints for the sexes.
Skeletal muscles make up 58.5% of the bobcat's body weight.
Tracks
Bobcat tracks show four toes without claw marks, due to their retractile claws. The bobcat can make great strides when running.
Like all cats, the bobcat 'directly registers', meaning its hind prints usually fall exactly on top of its fore prints. Bobcat tracks can generally be distinguished from feral or house cat tracks by their larger size.
Distribution and habitat
The bobcat is an adaptable species. It prefers woodlands—deciduous, coniferous, or mixed—but does not depend exclusively on the deep forest. It ranges from the humid swamps of Florida to desert lands of Texas or rugged mountain areas. It makes its home near agricultural areas, if rocky ledges, swamps, or forested tracts are present; its spotted coat serves as camouflage. The population of the bobcat depends primarily on the population of its prey; other principal factors in the selection of habitat type include protection from severe weather, availability of resting and den sites, dense cover for hunting and escape, and freedom from disturbance.
The bobcat's range does not seem to be limited by human populations, but by availability of suitable habitat; only large, intensively cultivated tracts are unsuitable for the species. The animal may appear in back yards in "urban edge" environments, where human development intersects with natural habitats. If chased by a dog, it usually climbs up a tree.
The historical range of the bobcat was from southern Canada, throughout the United States, and as far south as the Mexican state of Oaxaca, and it still persists across much of this area. In the 20th century, it was thought to have lost territory in the US Midwest and parts of the Northeast, including southern Minnesota, eastern South Dakota, and much of Missouri, mostly due to habitat changes from modern agricultural practices. While thought to no longer exist in western New York and Pennsylvania, multiple confirmed sightings of bobcats (including dead specimens) have been recently reported in New York's Southern Tier and in central New York, and a bobcat was captured in 2018 on a tourist boat in Downtown Pittsburgh, Pennsylvania. In addition, bobcat sightings have been confirmed in northern Indiana, and one was killed near Albion, Michigan, in 2008. In early March 2010, a bobcat was sighted (and later captured by animal control authorities) in a parking garage in downtown Houston. By 2010, bobcats appear to have recolonized many states, occurring in every state in the contiguous 48 except Delaware.
The bobcat population in Canada is limited due to both snow depth and the presence of the Canada lynx. The bobcat does not tolerate deep snow, and waits out heavy storms in sheltered areas; it lacks the large, padded feet of the Canada lynx and cannot support its weight on snow as efficiently. The bobcat is not entirely at a disadvantage where its range meets that of the larger felid: displacement of the Canada lynx by the aggressive bobcat has been observed where they interact in Nova Scotia, while the clearing of coniferous forests for agriculture has led to a northward retreat of the Canada lynx's range to the advantage of the bobcat. In northern and central Mexico, the cat is found in dry scrubland and forests of pine and oak; its range ends at the tropical southern portion of the country.
Behavior and ecology
The bobcat is crepuscular, and is active mostly during twilight. It keeps on the move from three hours before sunset until about midnight, and then again from before dawn until three hours after sunrise. Each night, it moves along its habitual route. This behavior may vary seasonally, as bobcats become more diurnal during fall and winter in response to the activity of their prey, which are more active during the day in colder weather.
Social structure and home range
Bobcat activities are confined to well-defined territories, which vary in size depending on the sex and the distribution of prey. The home range is marked with feces, urine scent, and by clawing prominent trees in the area. In its territory, the bobcat has numerous places of shelter, usually a main den, and several auxiliary shelters on the outer extent of its range, such as hollow logs, brush piles, thickets, or under rock ledges. Its den smells strongly of the bobcat.
The sizes of bobcats' home ranges vary significantly. One study in Kansas found that resident females' ranges covered less than half the area of resident males'; transient bobcats had less well-defined home ranges, and kittens had the smallest ranges of all. Dispersal from the natal range is most pronounced with males.
Reports on seasonal variation in range size have been equivocal. One study found a large variation in male range sizes, with winter ranges considerably larger than summer ones. Another found that female bobcats, especially those which were reproductively active, expanded their home range in winter, but that males merely shifted their range without expanding it, which was consistent with numerous earlier studies. Other research in various American states has shown little or no seasonal variation.
Like most felines, the bobcat is largely solitary, but ranges often overlap. Unusual for cats, males are more tolerant of overlap, while females rarely wander into others' ranges. Given their smaller range sizes, two or more females may reside within a male's home range. When multiple territories overlap, a dominance hierarchy is often established, resulting in the exclusion of some transients from favored areas.
In line with widely differing estimates of home range size, reported population densities diverge widely; one survey found as few as one and as many as 38 bobcats in areas of the same size. A link has been observed between population density and sex ratio. An unhunted population in California had a sex ratio of 2.1 males per female. When the density decreased, the sex ratio skewed to 0.86 males per female. Another study observed a similar ratio, and suggested the males may be better able to cope with the increased competition, and that this helped limit reproduction until various factors lowered the density.
Hunting and diet
The bobcat is able to survive for long periods without food, but eats heavily when prey is abundant. During lean periods, it often preys on larger animals, which it can kill and return to feed on later. The bobcat hunts by stalking its prey and then ambushing with a short chase or pounce. Its preference is for small to mid-sized mammals. Its main prey varies by region: in the eastern United States, it is the eastern cottontail and New England cottontail, and in the north, it is the snowshoe hare. When these prey species exist together, as in New England, they are the primary food sources of the bobcat. In the far south, the rabbits and hares are sometimes replaced by cotton rats as the primary food source. Birds up to the size of an adult trumpeter swan are also taken in ambushes while nesting, along with their fledglings and eggs. The bobcat is an opportunistic predator that, unlike the more specialized Canada lynx, readily varies its prey selection. Diet diversification positively correlates to a decline in numbers of the bobcat's principal prey; the abundance of its main prey species is the main determinant of overall diet.
The bobcat hunts animals of different sizes, and adjusts its hunting techniques accordingly. It hunts in areas abundant in prey and waits lying or crouching for victims to wander close. It then pounces and grabs the prey with its sharp, retractable claws. For slightly larger animals, such as geese, ducks, rabbits and hares, it stalks from cover and waits until the prey comes within close range before rushing in to attack. Less commonly, it feeds on larger animals, such as young ungulates, and on other carnivores, such as fishers (primarily females), gray foxes, American minks, American martens, skunks, raccoons, small dogs and domestic cats. It also hunts rodents such as squirrels, moles, muskrats and mice, as well as birds, small sharks, and insects. Bobcats occasionally hunt livestock and poultry. While larger species, such as cattle and horses, are not known to be attacked, bobcats do present a threat to smaller ruminants such as sheep and goats, as well as to pigs. According to the National Agricultural Statistics Service, bobcats killed 11,100 sheep in 2004, comprising 4.9% of all sheep predator deaths. However, some amount of bobcat predation may be misidentified, as bobcats have been known to scavenge on the remains of livestock kills by other animals.
It has been known to kill deer or pronghorn, and sometimes to hunt elk in western North America, especially in winter when smaller prey is scarce, or when deer populations become more abundant. One study in the Everglades showed a large majority of kills (33 of 39) were fawns. In Yellowstone a large number of kills (15 of 20) were elk calves, but prey up to eight times the bobcat's weight could be successfully taken. It stalks the deer, often when the deer is lying down, then rushes in and grabs it by the neck before biting the throat, base of the skull, or chest. On the rare occasions a bobcat kills a deer, it eats its fill and then buries the carcass under snow or leaves, often returning to it several times to feed.
The bobcat prey base overlaps with that of other midsized predators of a similar ecological niche. Research in Maine has shown little evidence of competitive relationships between the bobcat and coyote or red fox; separation distances and territory overlap appeared random among simultaneously monitored animals. However, other studies have found bobcat populations may decrease in areas with high coyote populations, with the more social inclination of the canid giving them a possible competitive advantage. With the Canada lynx, however, the interspecific relationship affects distribution patterns; competitive exclusion by the bobcat is likely to have prevented any further southward expansion of the range of its felid relative.
Reproduction and life cycle
The average lifespan of the bobcat is seven years but rarely exceeds 10 years. The oldest wild bobcat on record was 16 years old, and the oldest captive bobcat lived to be 32.
Bobcats generally begin breeding by their second summer, though females may start as early as their first year. Sperm production begins each year by September or October, and the male is fertile into the summer. A dominant male travels with a female and mates with her several times, generally from winter until early spring; this varies by location, but most mating takes place during February and March. The pair may undertake a number of different behaviors, including bumping, chasing, and ambushing. Other males may be in attendance, but remain uninvolved. Once the male recognizes the female is receptive, he grasps her in the typical felid neck grip and mates with her. The female may later go on to mate with other males, and males generally mate with several females. During courtship, the bobcat's vocalizations include screaming and hissing. Research in Texas revealed that establishing a home range is necessary for breeding; studied animals without a home range had no identified offspring. The female has an estrous cycle of 44 days, with the estrus lasting five to ten days. Bobcats remain reproductively active throughout their lives.
The female raises the young alone. One to six, but usually two to four, kittens are born in April or May, after roughly 60 to 70 days of gestation. Sometimes, a second litter is born as late as September. The female generally gives birth in an enclosed space, usually a small cave or hollow log. The young open their eyes by the ninth or tenth day. They start exploring their surroundings at four weeks and are weaned at about two months. Within three to five months, they begin to travel with their mother. They hunt by themselves by fall of their first year, and usually disperse shortly thereafter. In Michigan, however, they have been observed staying with their mother as late as the next spring.
Predators
The adult bobcat has relatively few predators. Rarely, however, it may be killed in interspecific conflict by several larger predators or fall prey to them. Cougars and gray wolves can kill adult bobcats, a behavior repeatedly observed in Yellowstone National Park. Coyotes have killed adult bobcats and kittens. At least one observation of a bobcat and an American black bear (Ursus americanus) fighting over a carcass has been confirmed. Like other Lynx species, bobcats probably avoid encounters with bears, in part because they are likely to lose kills to them or may rarely be attacked by them. Bobcat remains have occasionally been found in the resting sites of male fishers. American alligators (Alligator mississippiensis) have been filmed opportunistically preying on adult bobcats in the southeastern United States. Golden eagles (Aquila chrysaetos) have reportedly been observed preying on bobcats.
Kittens may be taken by several predators, including great horned owls, eagles, foxes, and bears, as well as adult male bobcats. When prey populations are not abundant, fewer kittens are likely to reach adulthood.
Diseases, accidents, hunters, automobiles, and starvation are the other leading causes of death. Juveniles show high mortality shortly after leaving their mothers, while still perfecting their hunting techniques. One study of 15 bobcats showed yearly survival rates for both sexes averaged 0.62, in line with other research suggesting rates of 0.56 to 0.67. Cannibalism has been reported; kittens may be taken when prey levels are low, but this is very rare and does not much influence the population.
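As a purely illustrative aside (it assumes a constant annual survival rate, which is a simplification, not a claim from these studies), a survival rate s implies that the chance of living t more years is s^t and the expected number of additional years is s/(1 − s), the mean of a geometric distribution:

```python
# Illustration only: constant annual survival is a simplifying assumption.
def prob_survive(s: float, years: int) -> float:
    return s ** years

def expected_additional_years(s: float) -> float:
    return s / (1.0 - s)  # mean of a geometric distribution of year of death

for s in (0.56, 0.62, 0.67):  # the survival rates quoted above
    print(f"s={s:.2f}: P(3 more years)={prob_survive(s, 3):.2f}, "
          f"expected additional years={expected_additional_years(s):.1f}")
```

Real survival varies with age and conditions, so figures like these only bound intuition.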
The bobcat may have external parasites, mostly ticks and fleas, and often carries the parasites of its prey, especially those of rabbits and squirrels. Internal parasites (endoparasites) are especially common in bobcats. One study found an average infection rate of 52% from Toxoplasma gondii, but with great regional variation. One mite in particular, Lynxacarus morlani, has to date been found only on the bobcat. The role of parasites and diseases in bobcat mortality is still unclear, but they may account for greater mortality than starvation, accidents, and predation.
Conservation
It is listed in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which means it is not considered threatened with extinction, but that international trade must be closely monitored. The animal is regulated in all three of its range countries, and is found in a number of protected areas of the United States, its principal territory. Estimates from the US Fish and Wildlife Service placed bobcat numbers between 700,000 and 1,500,000 in the US in 1988, with increased range and population density suggesting even greater numbers in subsequent years; for these reasons, the U.S. has petitioned CITES to remove the cat from Appendix II. Populations in Canada and Mexico remain stable and healthy. It is listed as least concern on the IUCN Red List, noting it is relatively widespread and abundant, but information from southern Mexico is poor.
The species is considered endangered in Ohio, Indiana, and New Jersey. It was removed from the threatened list of Illinois in 1999 and of Iowa in 2003. In Pennsylvania, limited hunting and trapping are once again allowed, after having been banned from 1970 to 1999. The bobcat also suffered population decline in New Jersey at the turn of the 19th century, mainly because of commercial and agricultural developments causing habitat fragmentation; by 1972, the bobcat was given full legal protection, and was listed as endangered in the state in 1991. The Mexican bobcat L. r. escuinipae was for a time considered endangered by the US Fish and Wildlife Service, but was delisted in 2005. Between 2003 and 2011, a reduction in bobcat sightings in the Everglades by 87.5% has been attributed to predation by the invasive Burmese python.
The bobcat has long been valued both for fur and sport; it has been hunted and trapped by humans, but has maintained a high population, even in the southern United States, where it is extensively hunted. In the 1970s and 1980s, an unprecedented rise in price for bobcat fur caused further interest in hunting, but by the early 1990s, prices had dropped significantly. Regulated hunting still continues, with half of mortality of some populations being attributed to this cause. As a result, the rate of bobcat deaths is skewed in winter, when hunting season is generally open.
Urbanization can result in the fragmentation of contiguous natural landscapes into patchy habitat within an urban area. Animals that live in these fragmented areas often have reduced movement between the habitat patches, which can lead to reduced gene flow and pathogen transmission between patches. Animals such as the bobcat are particularly sensitive to fragmentation because of their large home ranges. A study in coastal Southern California has shown bobcat populations are affected by urbanization, creation of roads, and other developments. The populations may not be declining as much as predicted, but instead the connectivity of different populations is affected. This leads to a decrease in natural genetic diversity among bobcat populations. For bobcats, preserving open space in sufficient quantities and quality is necessary for population viability. Educating local residents about the animals is critical, as well, for conservation in urban areas.
In bobcats using urban habitats in California, the use of rodenticides has been linked to both secondary poisoning by consuming poisoned rats and mice, and to increased rates of severe mite infestation (known as notoedric mange), as an animal with a poison-weakened immune system is less capable of fighting off mange. Liver autopsies in California bobcats that have succumbed to notoedric mange have revealed chronic rodenticide exposure. Alternative rodent control measures such as vegetation control and use of traps have been suggested to alleviate this issue.
Importance in human culture
Stories featuring the bobcat, in many variations, are found in some Indigenous cultures of North America, with parallels in South America. A story from the Nez Perce, for instance, depicts the bobcat and coyote as opposed, antithetical beings. However, another version presents them as equal and identical. Claude Lévi-Strauss argues that the former concept, that of twins representing opposites, is an inherent theme in New World mythologies, but that they are not equally balanced figures, representing an open-ended dualism rather than the symmetric duality of Old World cultures. The latter notion then, Lévi-Strauss suggests, is the result of regular contact between Europeans and native cultures. Additionally, the version found in the Nez Perce story is of much greater complexity, while the version of equality seems to have lost the tale's original meaning.
In a Shawnee tale, the bobcat is outwitted by a rabbit, which gives rise to its spots. After trapping the rabbit in a tree, the bobcat is persuaded to build a fire, only to have the embers scattered on its fur, leaving it singed with dark brown spots. The Mohave people believed dreaming habitually of beings or objects would afford them their characteristics as supernatural powers. Dreaming of two deities, cougar and lynx, they thought, would grant them the superior hunting skills of other tribes. European-descended inhabitants of the Americas also admired the cat, both for its ferocity and its grace, and in the United States, it "rests prominently in the anthology of ... national folklore."
Grave artifacts from dirt domes excavated in the 1980s along the Illinois River revealed a complete skeleton of a young bobcat along with a collar made of bone pendants and shell beads that had been buried by the Hopewell culture. The type and place of burial indicate a tamed and cherished pet, or possible spiritual significance. The Hopewell normally buried their dogs, so the bones were initially identified as remains of a puppy, but dogs were usually buried close to the village and not in the mounds themselves. This is the only known decorated burial of a wild cat in the archaeological record.
An inhabitant of Appalachia, Lynx rufus is immortalized (along with university founder Rufus Putnam) at Ohio University through its popular college mascot, Rufus the Bobcat.
| Biology and health sciences | Felines | Animals |
171830 | https://en.wikipedia.org/wiki/Hornbeam | Hornbeam | Hornbeams are hardwood trees in the plant genus Carpinus in the family Betulaceae. The species of the genus occur across much of the temperate regions of the Northern Hemisphere.
Common names
The common English name hornbeam derives from the hardness of the wood (likened to horn) and the Old English beam, "tree" (cognate with Dutch boom and German Baum).
The American hornbeam is also occasionally known as blue-beech, ironwood, or musclewood, the first from the resemblance of the bark to that of the American beech Fagus grandifolia, the other two from the hardness of the wood and the muscled appearance of the trunk and limbs.
The botanical name for the genus, Carpinus, is the original Latin name for the European species, although some etymologists derive it from the Celtic for a yoke.
Description
Hornbeams are small, slow-growing, understory trees with a natural, rounded form growing tall and wide; the exemplar species—the European hornbeam—reaches a maximum height of .
Leaves are deciduous, dark-green, alternate and simple with a coarsely-serrated margin, varying from in length. In autumn, leaves turn various shades of yellow, orange and red. Hornbeam saplings, stressed trees, and the lower branches of mature trees may exhibit marcescence—where leaves wither with autumn but abscission (leafdrop) is delayed until spring.
The smooth, gray trunk and larger branches of a mature tree exhibit a distinctive muscle-like fluting.
As with other members of the birch family, hornbeam flowers are wind-pollinated pendulous catkins, produced in spring. Male and female flowers are on separate catkins, but on the same tree (monoecious). Female flowers give way to distinctive clusters of winged seeds that somewhat resemble the hops-like seeds of ironwood.
The fruit is a small nut about long, held in a leafy bract; the bract may be either trilobed or simple oval, and is slightly asymmetrical. The asymmetry of the seedwing makes it spin as it falls, improving wind dispersal. The shape of the wing is important in the identification of different hornbeam species. Typically, 10–30 seeds are on each seed catkin.
Taxonomy
Formerly, some taxonomists segregated them with the genera Corylus (hazels) and Ostrya (hop-hornbeams) in a separate family, Corylaceae. Modern botanists place Carpinus in the subfamily Coryloideae of the family Betulaceae. Species of Carpinus are often grouped into two subgenera: Carpinus subg. Carpinus and Carpinus subg. Distegicarpus.
Phylogenetic analyses have shown that Ostrya likely evolved from a Carpinus ancestor within C. subg. Distegicarpus, making Carpinus paraphyletic. The fossil record of the genus extends back to the Early Eocene (Ypresian) of northwestern North America, with the species Carpinus perryae described from fossil fruits found in the Klondike Mountain Formation of Republic, Washington.
Species
43 species are currently accepted.
Carpinus austrobalcanica N.Kuzmanović, D.Lakušić, I.Stevanoski, P.Schönswetter, B.Frajman – Southern Albania, Northwestern Greece
Carpinus betulus – European hornbeam - Europe to Western Asia; naturalized in North America.
Carpinus caroliniana – American hornbeam - Eastern North America
Carpinus chuniana – Guangdong, Guizhou, Hubei
Carpinus cordata – Sawa hornbeam - Primorye, China, Korea, Japan
Carpinus dayongiana – Hunan
Carpinus faginea – Nepal, Himalayas of northern India
Carpinus fangiana – Sichuan, Guangxi
Carpinus fargesiana – central and east-central China
Carpinus firmifolia – Guizhou: Guiyang Shi
Carpinus gigabracteatus – Yunnan
Carpinus hebestroma – Taiwan
Carpinus henryana – southern China
Carpinus insularis – Hong Kong
Carpinus japonica – Japanese hornbeam - Japan
Carpinus kawakamii – Taiwan, southeastern China
Carpinus kweichowensis – Guizhou, Yunnan
Carpinus langaoensis – Shaanxi, China
Carpinus laxiflora – Aka-shide hornbeam - Japan, Korea
Carpinus lipoensis – Guizhou
Carpinus londoniana – southern China, northern Indochina
Carpinus luochengensis – Guangxi
Carpinus mengshanensis – Shandong
Carpinus microphylla – Guangxi
Carpinus mollicoma – Tibet, Sichuan, Yunnan
Carpinus monbeigiana – Tibet, Yunnan
Carpinus omeiensis – Sichuan, Guizhou
Carpinus orientalis – Oriental hornbeam - Hungary, Balkans, Italy, Crimea, Turkey, Iran, Caucasus
Carpinus paohsingensis – China
†Carpinus perryae - Ypresian, Klondike Mountain Formation
Carpinus polyneura – southern China
Carpinus pubescens – China, Vietnam
Carpinus purpurinervis – Guizhou, Guangxi
Carpinus putoensis – Putuo hornbeam - Zhejiang
Carpinus rankanensis – Taiwan
Carpinus rupestris – Yunnan, Guangxi, Guizhou
Carpinus × schuschaensis (C. betulus × C. orientalis) – Caucasus and northern Iran
Carpinus shensiensis – Gansu, Shaanxi
Carpinus shimenensis – Hunan
†Carpinus tengshongensis – Pliocene Yunnan
Carpinus tibetana – Tibet
Carpinus tientaiensis – Zhejiang: Tianmu Shan
Carpinus tropicalis – Mexico, Central America
Carpinus tsaiana – Yunnan, Guizhou
Carpinus tschonoskii – Asian Hornbeam, Chonosuki's Hornbeam - southern China, Korea, Japan
Carpinus turczaninovii – Korean hornbeam - China, Korea, Japan
Carpinus viminea – China, Korea, Himalayas, northern Indochina
Distribution and habitat
The 43 species occur across much of the temperate regions of the northern hemisphere, with the greatest number of species in east Asia, particularly China. Only three species occur in Europe, only one in eastern North America, and one in Mesoamerica. Carpinus betulus can be found in Europe, Turkey and Ukraine.
Ecology
Hornbeams are used as food plants by the larvae of some Lepidoptera species, including autumnal moth, common emerald, feathered thorn, walnut sphinx, Svensson's copper underwing, and winter moth (recorded on European hornbeam) as well as the Coleophora case-bearers C. currucipennella and C. ostryae.
Uses
Hornbeams yield a very hard timber, giving rise to the name "ironwood". Dried heartwood billets are nearly white and are suitable for decorative use. For general carpentry, hornbeam is rarely used, partly due to the difficulty of working it.
The wood is used to construct carving boards, tool handles, handplane soles, coach wheels, piano actions, shoe lasts, and other products where a very tough, hard wood is required.
The wood can also be used as gear pegs in simple machines, including traditional windmills. It is sometimes coppiced to provide hardwood poles. It is also used in parquet flooring and for making chess pieces.
| Biology and health sciences | Fagales | Plants |
8809081 | https://en.wikipedia.org/wiki/Flock%20%28birds%29 | Flock (birds) | A flock is a gathering of individual birds to forage or travel collectively. Avian flocks are typically associated with migration. Flocking also offers foraging benefits and protection from predators, although flocking can have costs for individual members.
Flocks are often defined as groups consisting of individuals from the same species. However, mixed flocks consisting of two or more species are also common. Avian species that tend to flock together are typically similar in taxonomy and share morphological characteristics such as size and shape. Mixed flocks offer increased protection against predators, which is particularly important in closed habitats such as forests, where early warning calls are of vital importance in the early recognition of danger. The result is the formation of many mixed-species feeding flocks.
Mixed flocks
While mixed flocks are typically thought of as comprising two different species, it is the two different behavioural roles of the member species that define a mixed flock. Within a mixed flock there can be two behavioural types: sallies and gleaners. Sallies are individuals that act as guards of the flock and take prey in the air during flight. Gleaners, on the other hand, are those that consume prey living within vegetation.
Studies have shown that as resources in the aerial environment increase, the flock will contain more sallies than gleaners. This has been shown to occur during forest fires, in which insects are flushed from vegetation; gleaners, however, can also flush prey. When gleaners take meals from vegetation, other prey within the vegetation is flushed out into the air. Through this feeding behaviour among vegetation, the gleaners indirectly increase the foraging rate of the sallies.
Birds that are rarer and therefore less abundant in an environment are more likely to engage in this mixed-flock behaviour. Although such a bird is more likely to be a subordinate, its ability to obtain food increases substantially. It is also less likely to be attacked by a predator, because predators have a lower success rate when attacking large flocks.
Safety from predation
The ability to avoid predation is one of the most important skills necessary to increase one's fitness. Ground squirrels living in colonies, for example, recognize predators rapidly; a squirrel can then use vocalizations to warn conspecifics of the possible threat. This simple example demonstrates that such group behaviour is not confined to bird species or herds of sheep, but is also apparent in other animals such as rodents. The alarm call of the ground squirrel requires the animal first to recognize that danger is present and then to react. This type of behaviour is also seen in some birds. Note that by making an alarm call to signal members of the flock, the caller provides the predator with an acoustical cue to the location of possible prey. The behaviour is nonetheless beneficial when the members of the flock are genetically related to one another: even if the bird that signalled the flock were to die, its inclusive fitness need not decrease, according to Hamilton's rule. However, another study, involving thick-knees, challenged whether an animal must recognize the presence of a predator to gain protection against it.
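Hamilton's rule, invoked above, can be stated compactly: an altruistic act such as alarm calling is favoured when r × b > c, where r is the genetic relatedness between caller and beneficiaries, b the fitness benefit to the beneficiaries, and c the fitness cost to the caller. A minimal sketch in Python, with purely illustrative numbers (none of the values below come from the studies cited here):

```python
def favoured_by_kin_selection(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruism is favoured when r * b > c."""
    return r * b > c

# Illustrative values only: an alarm call that costs the caller c = 1
# (in offspring equivalents) but delivers a summed benefit b = 3 to
# full siblings (relatedness r = 0.5).
print(favoured_by_kin_selection(r=0.5, b=3.0, c=1.0))  # True: 0.5 * 3 > 1
```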
Thick-knees are birds that are seen in large flocks during particular seasons in various regions of the world. During the nonbreeding season, Peruvian thick-knees in Chile are reported to have an average of 22.5 birds — a mixture of adults and youngsters — in their flocks. Young birds were observed learning anti-predator behaviour strategies from adults during this time. Researchers believe that the flocking behaviour may help to decrease a predator's success rate when attacking the flock, rather than increasing the ability of the flock to spot an approaching predator.
By birds co-existing with one another in a flock, less time and energy is spent searching for predators. This mutual protection is one of the benefits of living within a group. However, as flock numbers increase, individuals within the flock become more aggressive towards one another; this is one of the costs of living within a flock. Flocks are therefore dynamic, fluctuating in size according to the needs of individuals so as to maximize benefits without incurring a large amount of costs.
By living in a large flock, birds can attack a predator with a stronger force than a bird on its own could. Flocks of black-capped chickadees have shown the ability to produce a mobbing call when they sight a possible predator. In response, the individual black-capped chickadees surround the predator and attack it in a mob-like fashion to force it to leave. This is known as mobbing. The behaviour is quickly learned by the juveniles within a flock, meaning that these individuals will be better equipped as adults to ward off predators and respond rapidly when a predator is in sight.
Foraging in flocks
A bird living in a flock may capture prey, likely injured, that escapes an unsuccessful bird within its flock. This behavior is known as the beater effect and is one of the benefits of foraging in a flock with other birds.
Birds in a flock may also follow the information-sharing model. In this situation, the entire flock searches for food, and the first bird to find a reliable food source alerts the flock, so the whole group benefits from the finding. While this is an obvious benefit of the information-sharing model, one cost is that the social hierarchy of the flock may result in subordinate birds being denied food by those that are dominant. Another cost is the possibility that some individuals may refuse to contribute to the search for food and instead simply wait for another member to find a food resource. Individuals that search are known as producers; those that wait are known as scroungers.
An intricate hunting system can be seen in the Harris's hawk, in which groups of 2–6 hunt a single prey animal together. The group splits into smaller parties that close in on the prey, such as a rabbit, before attacking it. By hunting as a group, the Harris's hawk can take larger animals and decrease the amount of energy each bird spends hunting, while every hawk in the group is able to eat from the catch.
Black sun
In Denmark, there is a biannual phenomenon known as sort sol (Danish for "black sun"). This is when flocks of European starlings gather in vast numbers, creating complex shapes against the sky during the spring. It is during this time spent in Denmark that the European starlings spend time gathering food and resting as part of their migration journey. Collecting in groups this large enables the European starlings to decrease their risk of predation by hawks.
| Biology and health sciences | Ethology | Biology |
8809337 | https://en.wikipedia.org/wiki/Trumpetfish | Trumpetfish | The trumpetfishes are three species of highly specialized, tubularly-elongated marine fishes in the genus Aulostomus, of the monogeneric family Aulostomidae. The trumpetfishes are members of the order Syngnathiformes, together with the seahorses and the similarly built, closely related cornetfishes.
The generic name, Aulostomus, is a composite of two Greek words: aulos, meaning flute, and stoma, meaning mouth, because the species appear to have tubular snouts. "Flutemouth" is another less-common name for the members of the family (although this word is more often used to refer to closely related cornetfishes of the family Fistulariidae).
Trumpetfishes are found in tropical waters worldwide, with two species in the Atlantic and one in the Indo-Pacific. They are mostly demersal reef-dwellers, where one species seems to prefer rocky substrate.
They are relatively large for reef fish, reaching almost 1 m in length. Bodies of trumpetfish are elongated, rigid, and pike-shaped. Their dorsal and anal fins sit closely adjacent to the tail, and individual dorsal spines reach midway towards the head region. As in most members of the order Syngnathiformes, the bodies of trumpetfish are inflexible, supported by interwoven struts of bone. A distinct trait of the family is the long, tubular snout ending in somewhat nondescript jaws. Members of the family can rapidly expand their jaws into a circular, gaping hole almost the diameter of the body when feeding.
Aulostomids are highly carnivorous fish. They stalk their prey by hovering almost motionlessly a few inches above the substrate, inching their way towards unsuspecting prey. Once close enough, they dart in and rapidly expand their jaws. Opening their tube-like mouths in this way creates a strong suction force, which draws the prey straight into the mouth. Aulostomids are known to feed almost exclusively on small, schooling reef fishes.
While they have no commercial fisheries value, members of the family have been known to occasionally be found in the aquarium trade. Although not popular aquarium fish, they are common enough to have websites featuring instructions on keeping them in captivity.
Species
Currently, three species in this genus are recognized:
Aulostomus chinensis (Linnaeus, 1766) (Chinese trumpetfish)
Aulostomus maculatus Valenciennes, 1841 (West Atlantic trumpetfish)
Aulostomus strigosus Wheeler, 1955 (Atlantic trumpetfish)
The following fossil species of Aulostomus are also known:
†Aulostomus fractus Daniltshenko, 1960 - Early Oligocene of the North Caucasus, Russia
†Aulostomus medius Weiler, 1920 - Early Oligocene of Germany
Other extinct fossil genera within the Aulostomidae include Eoaulostomus, Macroaulostomus, Jungersenichthys, Synhypuralis, and Tyleria, all from the Early Eocene of Italy, as well as Frauenweilerostomus from the Early Oligocene of Germany.
| Biology and health sciences | Acanthomorpha | Animals |
3071186 | https://en.wikipedia.org/wiki/Gravitational%20acceleration | Gravitational acceleration | In physics, gravitational acceleration is the acceleration of an object in free fall within a vacuum (and thus without experiencing drag). This is the steady gain in speed caused exclusively by gravitational attraction. All bodies accelerate in vacuum at the same rate, regardless of the masses or compositions of the bodies; the measurement and analysis of these rates is known as gravimetry.
At a fixed point on the surface, the magnitude of Earth's gravity results from the combined effect of gravitation and the centrifugal force from Earth's rotation. At different points on Earth's surface, the free fall acceleration ranges from , depending on altitude, latitude, and longitude. A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag.
Relation to the Universal Law
Newton's law of universal gravitation states that there is a gravitational force between any two masses that is equal in magnitude for each mass, and is aligned to draw the two masses toward each other. The formula is:
$F = G \dfrac{m_1 m_2}{r^2}$
where $m_1$ and $m_2$ are any two masses, $G$ is the gravitational constant, and $r$ is the distance between the two point-like masses.
Using the integral form of Gauss's law, this formula can be extended to any pair of objects of which one is far more massive than the other, like a planet relative to any human-scale artifact. The distances between planets and between the planets and the Sun are (by many orders of magnitude) larger than the sizes of the Sun and the planets. In consequence, both the Sun and the planets can be considered as point masses and the same formula applied to planetary motions. (As planets and natural satellites form pairs of comparable mass, the distance $r$ is measured from the common centers of mass of each pair rather than the direct total distance between planet centers.)
If one mass is much larger than the other, it is convenient to take it as an observational reference and define it as the source of a gravitational field of magnitude and orientation given by:
$\mathbf{g} = -\dfrac{G M}{r^2}\,\hat{\mathbf{r}}$
where $M$ is the mass of the field source (larger), and $\hat{\mathbf{r}}$ is a unit vector directed from the field source to the sample (smaller) mass $m$. The negative sign indicates that the force is attractive (points backward, toward the source).
Then the attraction force vector onto a sample mass $m$ can be expressed as:
$\mathbf{F} = m\,\mathbf{g}$
Here $\mathbf{g}$ is the frictionless, free-fall acceleration sustained by the sampling mass under the attraction of the gravitational source.
It is a vector oriented toward the field source, of magnitude measured in acceleration units. The gravitational acceleration vector depends only on how massive the field source $M$ is and on the distance $r$ to the sample mass $m$; it does not depend on the magnitude of the small sample mass.
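As a concrete check on the relation above, a short Python sketch computes the magnitude $g = GM/r^2$ for Earth; the constants are standard published values, and the 400 km orbital altitude is an illustrative round figure:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def grav_acceleration(source_mass_kg: float, r_m: float) -> float:
    """Magnitude of the far-field gravitational acceleration, g = G*M / r**2."""
    return G * source_mass_kg / r_m**2

print(grav_acceleration(M_EARTH, R_EARTH))        # ~9.82 m/s^2 at the surface
print(grav_acceleration(M_EARTH, R_EARTH + 4e5))  # ~8.69 m/s^2 at ~400 km altitude
```

Note that the sample mass never enters the calculation, matching the point made above: the field depends only on the source mass and the distance to it.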
This model represents the "far-field" gravitational acceleration associated with a massive body. When the dimensions of a body are not trivial compared to the distances of interest, the principle of superposition can be used for differential masses for an assumed density distribution throughout the body in order to get a more detailed model of the "near-field" gravitational acceleration. For satellites in orbit, the far-field model is sufficient for rough calculations of altitude versus period, but not for precision estimation of future location after multiple orbits.
The more detailed models include (among other things) the bulging at the equator for the Earth, and irregular mass concentrations (due to meteor impacts) for the Moon. The Gravity Recovery and Climate Experiment (GRACE) mission launched in 2002 consists of two probes, nicknamed "Tom" and "Jerry", in polar orbit around the Earth measuring differences in the distance between the two probes in order to more precisely determine the gravitational field around the Earth, and to track changes that occur over time. Similarly, the Gravity Recovery and Interior Laboratory mission from 2011 to 2012 consisted of two probes ("Ebb" and "Flow") in polar orbit around the Moon to more precisely determine the gravitational field for future navigational purposes, and to infer information about the Moon's physical makeup.
Comparative gravities of the Earth, Sun, Moon, and planets
The table below shows comparative gravitational accelerations at the surface of the Sun, the Earth's moon, each of the planets in the Solar System and their major moons, Ceres, Pluto, and Eris. For gaseous bodies, the "surface" is taken to mean visible surface: the cloud tops of the giant planets (Jupiter, Saturn, Uranus, and Neptune), and the Sun's photosphere. The values in the table have not been de-rated for the centrifugal force effect of planet rotation (and cloud-top wind speeds for the giant planets) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles. For reference, the time it would take an object to fall , the height of a skyscraper, is shown, along with the maximum speed reached. Air resistance is neglected.
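The fall-time and final-speed figures described above follow from elementary free-fall kinematics: from rest, $t = \sqrt{2h/g}$ and $v = gt$. A sketch under stated assumptions (the exact skyscraper height used by the table is not given here, so a 100 m stand-in is used; the surface gravities are rounded published values):

```python
import math

def fall_time_and_speed(height_m: float, g: float) -> tuple[float, float]:
    """Free fall from rest, no air resistance: t = sqrt(2h/g), v = g*t."""
    t = math.sqrt(2.0 * height_m / g)
    return t, g * t

HEIGHT = 100.0  # assumed stand-in for the skyscraper height, m
for body, g in [("Earth", 9.81), ("Moon", 1.62), ("Jupiter", 24.79)]:
    t, v = fall_time_and_speed(HEIGHT, g)
    print(f"{body}: t = {t:5.2f} s, v = {v:5.1f} m/s")
```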
General relativity
In Einstein's theory of general relativity, gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, masses distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. The gravitational force is a fictitious force. There is no gravitational acceleration, in that the proper acceleration and hence four-acceleration of objects in free fall are zero. Rather than undergoing an acceleration, objects in free fall travel along straight lines (geodesics) on the curved spacetime.
Gravitational field
| Physical sciences | Orbital mechanics | null |
3071612 | https://en.wikipedia.org/wiki/Hydrogen%20spectral%20series | Hydrogen spectral series | The emission spectrum of atomic hydrogen has been divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in an atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts.
Physics
A hydrogen atom consists of an electron orbiting its nucleus. The electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the hydrogen atom as being distinct orbits around the nucleus. Each energy level, or electron shell, or orbit, is designated by an integer, as shown in the figure. The Bohr model was later replaced by quantum mechanics in which the electron occupies an atomic orbital rather than an orbit, but the allowed energy levels of the hydrogen atom remained the same as in the earlier theory.
Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To distinguish the two states, the lower energy state is commonly designated as n', and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states. Because the energy of each state is fixed, the energy difference between them is fixed, and the transition will always produce a photon with the same energy.
The spectral lines are grouped into series according to n'. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. For example, the 2 → 1 line is called "Lyman-alpha" (Ly-α), while the 7 → 3 line is called "Paschen-delta" (Pa-δ).
There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line. These emission lines correspond to much rarer atomic events such as hyperfine transitions. The fine structure also results in single spectral lines appearing as two or more closely grouped thinner lines, due to relativistic corrections.
In quantum mechanical theory, the discrete spectrum of atomic emission was based on the Schrödinger equation, which is mainly devoted to the study of energy spectra of hydrogen-like atoms, whereas the time-dependent equivalent Heisenberg equation is convenient when studying an atom driven by an external electromagnetic wave.
In the processes of absorption or emission of photons by an atom, the conservation laws hold for the whole isolated system, such as an atom plus a photon. Therefore the motion of the electron in the process of photon absorption or emission is always accompanied by motion of the nucleus, and, because the mass of the nucleus is always finite, the energy spectra of hydrogen-like atoms must depend on the nuclear mass.
Rydberg formula
The energy differences between levels in the Bohr model, and hence the wavelengths of emitted or absorbed photons, are given by the Rydberg formula:
$\dfrac{1}{\lambda} = R Z^2 \left( \dfrac{1}{n'^2} - \dfrac{1}{n^2} \right)$
where $\lambda$ is the wavelength of the emitted or absorbed light, $R$ is the Rydberg constant, $Z$ is the atomic number, $n'$ is the principal quantum number of the lower energy level, and $n$ is the principal quantum number of the upper energy level.
The wavelength will always be positive because $n'$ is defined as the lower level and so is less than $n$. This equation is valid for all hydrogen-like species, i.e. atoms having only a single electron, and the particular case of hydrogen spectral lines is given by $Z = 1$.
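A short Python sketch of the formula, using a standard value of the Rydberg constant; the line identifications in the comments follow the series names defined below:

```python
R = 1.0973731568e7  # Rydberg constant, m^-1

def transition_wavelength(n_lower: int, n_upper: int, z: int = 1) -> float:
    """Wavelength in metres for the n_upper -> n_lower transition (Rydberg formula)."""
    if not n_upper > n_lower >= 1:
        raise ValueError("need n_upper > n_lower >= 1")
    inv_wavelength = R * z**2 * (1 / n_lower**2 - 1 / n_upper**2)
    return 1.0 / inv_wavelength

print(transition_wavelength(1, 2) * 1e9)  # ~121.5 nm, Lyman-alpha (ultraviolet)
print(transition_wavelength(2, 3) * 1e9)  # ~656.3 nm, Balmer H-alpha (visible red)
print(transition_wavelength(3, 7) * 1e9)  # ~1005 nm, Paschen-delta (infrared)
```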
Series
Lyman series (n' = 1)
In the Bohr model, the Lyman series includes the lines emitted by transitions of the electron from an outer orbit of quantum number n > 1 to the 1st orbit of quantum number n' = 1.
The series is named after its discoverer, Theodore Lyman, who discovered the spectral lines from 1906–1914. All the wavelengths in the Lyman series are in the ultraviolet band.
Balmer series (n' = 2)
The Balmer series includes the lines due to transitions from an outer orbit n > 2 to the orbit n' = 2.
Named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the Balmer series, in 1885. Balmer lines are historically referred to as "H-alpha", "H-beta", "H-gamma" and so on, where H is the element hydrogen. Four of the Balmer lines are in the technically "visible" part of the spectrum, with wavelengths longer than 400 nm and shorter than 700 nm. Parts of the Balmer series can be seen in the solar spectrum. H-alpha is an important line used in astronomy to detect the presence of hydrogen.
Paschen series (Bohr series, n' = 3)
Named after the German physicist Friedrich Paschen who first observed them in 1908. The Paschen lines all lie in the infrared band. This series overlaps with the next (Brackett) series, i.e. the shortest line in the Brackett series has a wavelength that falls among the Paschen series. All subsequent series overlap.
Brackett series (n' = 4)
Named after the American physicist Frederick Sumner Brackett who first observed the spectral lines in 1922. The spectral lines of the Brackett series lie in the far infrared band.
Pfund series (n' = 5)
Experimentally discovered in 1924 by August Herman Pfund.
Humphreys series (n' = 6)
Discovered in 1953 by American physicist Curtis J. Humphreys.
Further series (n' > 6)
Further series are unnamed, but follow the same pattern and equation as dictated by the Rydberg equation. Series are increasingly spread out and occur at increasing wavelengths. The lines are also increasingly faint, corresponding to increasingly rare atomic events. The seventh series of atomic hydrogen was first demonstrated experimentally at infrared wavelengths in 1972 by Peter Hansen and John Strong at the University of Massachusetts Amherst.
Extension to other systems
The concepts of the Rydberg formula can be applied to any system with a single particle orbiting a nucleus, for example a He+ ion or a muonium exotic atom. The equation must be modified based on the system's Bohr radius; emissions will be of a similar character but at a different range of energies. The Pickering–Fowler series was originally attributed to an unknown form of hydrogen with half-integer transition levels by both Pickering and Fowler, but Bohr correctly recognised them as spectral lines arising from the He+ ion.
All other atoms have at least two electrons in their neutral form and the interactions between these electrons makes analysis of the spectrum by such simple methods as described here impractical. The deduction of the Rydberg formula was a major step in physics, but it was long before an extension to the spectra of other elements could be accomplished.
| Physical sciences | Atomic physics | Physics |
3072461 | https://en.wikipedia.org/wiki/Long%20pepper | Long pepper | Long pepper (Piper longum), sometimes called Indian long pepper or pippali, is a flowering vine in the family Piperaceae, cultivated for its fruit, which is usually dried and used as a spice and seasoning. Long pepper has a taste similar to, but sweeter and more pungent than, that of its close relative Piper nigrum – from which black, green and white pepper are obtained.
The fruit of the pepper consists of many minuscule fruits – each about the size of a poppy seed – embedded in the surface of a flower spike that closely resembles a hazel tree catkin. Like Piper nigrum, the fruits contain the compound piperine, which contributes to their pungency. Another species of long pepper, Piper retrofractum, is native to Java, Indonesia. The fruits of this plant are often confused with chili peppers, which belong to the genus Capsicum, originally from the Americas.
History
The oldest known reference to long pepper comes from ancient Indian textbooks of Ayurveda, where its medicinal and dietary uses are described in detail. It reached Greece in the sixth or fifth century BCE, though Hippocrates discussed it as a medicament rather than a spice. Among the Greeks and Romans and prior to the Columbian exchange, long pepper was an important and well-known spice.
The ancient history of long pepper is often interlinked with that of black pepper (Piper nigrum). Theophrastus distinguished the two in his work of botany. The Romans knew of both but their word for pepper usually meant black pepper. Pliny erroneously believed dried black pepper and long pepper came from the same plant.
Round, or black, pepper began to compete with long pepper in Europe from the twelfth century and had displaced it by the fourteenth. The quest for cheaper and more dependable sources of black pepper fueled the Age of Discovery.
After the discovery of the American continents and of the chili pepper (called pimiento by the Spanish, employing their word for long pepper), the popularity of long pepper faded away. Chili peppers, some of which, when dried, are similar in shape and taste to long pepper, were easier to grow in a variety of locations more convenient to Europe. Today, long pepper is a rarity in general commerce.
Etymology
The word pepper itself is derived from the Tamil word for long pepper, pippali. The plant itself is native to India. The word pepper in bell pepper, referring to completely different plants of the genus Capsicum, has the same etymology; that usage began in the 16th century.
Usage
Though often used in medieval times in spice mixes like "strong powder", long pepper is today a very rare ingredient in European cuisines, but it can still be found in Indian and Nepalese vegetable pickles, some North African spice mixtures, and in Indonesian and Malaysian cooking. It is readily available at Indian grocery stores, where it is usually labeled pippali. Pippali is the main spice of nihari, a popular meat stew from India, originating in the Indian metropolis of Lucknow, and one of the national dishes of Pakistan.
| Biology and health sciences | Herbs and spices | Plants |
3074113 | https://en.wikipedia.org/wiki/Antelope%20jackrabbit | Antelope jackrabbit | The antelope jackrabbit (Lepus alleni) is a species of North American hare found in southern Arizona and northwestern Mexico that occupies dry desert areas.
Behaviour
It is most active during twilight (crepuscular) and during the night (nocturnal), but can be active during the day when conditions are favorable (heavy cloud coverage).
Evolutionary history
Fossil evidence places the genus Lepus in North America approximately 2.5 million years ago. A now-extinct jackrabbit species, Lepus giganteus, is thought to have existed in North America during this time. This species shared similar physical traits with the antelope jackrabbit, making it difficult to differentiate fossils of the two species. In a 2014 study, researchers hypothesized that L. giganteus served as a common ancestor to the antelope jackrabbit and the black-tailed jackrabbit. The black-tailed jackrabbit coexists with the antelope jackrabbit, and the two species maintain a sympatric relationship. In the same 2014 study, genetic analysis concluded that three Lepus species share a common white-sided jackrabbit ancestor: L. callotis (white-sided jackrabbit), L. alleni (antelope jackrabbit), and L. flavigularis (Tehuantepec jackrabbit). Based on this evidence, researchers also concluded that the black-tailed jackrabbit, though closely related to white-sided jackrabbits, exists in its own separate subclade.
Geographic range
In the United States, the antelope jackrabbit is found in parts of Arizona; in northwestern Mexico, it occurs in states such as Chihuahua, Nayarit, Sinaloa, and Sonora. Compared to the other hare species present in North America, the antelope jackrabbit's range is limited. The species does not inhabit areas further east than the sky islands in Arizona and the Sierra Madre Occidental in Mexico, nor does it range west of Florence, Arizona. As of July 2017, it had been spotted and photographed by a National Park Ranger in the Lake Mead National Recreation Area in Nevada.
Habitat
The antelope jackrabbit is found in a variety of tropical and subtropical habitats. It can be found in grassy hills or plains, preferring habitats with large desert shrubs above long grass, and also occurs in more barren desert habitats. A 2014 ecological study indicated that the ideal habitat for an antelope jackrabbit includes grassy ground cover and a mesquite overstory. The species does not prefer an arid climate; instead, antelope jackrabbits live in areas with summer precipitation ranging from 90 mm to 360 mm. Unlike the black-tailed jackrabbit, which survives in less humid conditions, the antelope jackrabbit inhabits locations with higher humidity.
Description
The antelope jackrabbit is a large Lepus species; males and females are identical in appearance. It has long, pointed ears and a distinct coat coloration: a white belly, light grey sides, a back peppered with black, and orange coloration on the neck and chest. It is similar to species like the black-tailed jackrabbit and white-sided jackrabbit. Its body length ranges from long and its tail can be long. Its front legs grow to be and the back legs can grow to be long. The antelope jackrabbit's ears grow to be and it can weigh up to . The species has a very large skull and a long rostrum. Its ears are extremely long with white on the points and edges. The bi-colored tail is black on top and pale grey below.
Feeding
The antelope jackrabbit feeds on cacti, grasses, mesquite leaves, and other leafy vegetation. This species has been observed digging and eating soil, apparently to take in minerals and other nutrients. Antelope jackrabbits can be classified as folivores and graminivores.
Reproduction
Antelope jackrabbits breed from December to September and the gestation period is roughly six weeks long. Females have up to four litters per year ranging from one to five individuals. A baby hare, called a leveret, is born precocial; its eyes are open, it is active, and covered with fur. Young are born in shallow dirt nests that are formed by scraping the surface of the ground.
Threats
Known predators of the antelope jackrabbit include bobcats, coyotes, and golden eagles. Since antelope jackrabbits attract predators that are also a threat to livestock, they are hunted by humans to reduce potential problems. This species is also hunted for human consumption or for their valuable pelt.
Habitat loss also poses a threat to antelope jackrabbits because agricultural expansion is interfering with their habitats. Grazing livestock reduce the abundance of grasses and herbaceous plants in areas where antelope jackrabbits reside.
Subspecies
There are three recognized subspecies:
L. a. alleni
L. a. palitans
L. a. tiburonensis
| Biology and health sciences | Lagomorphs | Animals |
1581508 | https://en.wikipedia.org/wiki/Sonchus | Sonchus | Sonchus is a genus of flowering plants in the tribe Cichorieae within the family Asteraceae; its species are commonly known as sow thistles (less commonly hare thistles or hare lettuces). Sow thistles are annual, biennial or perennial herbs, with or without rhizomes, and a few are even woody (subgenus Dendrosonchus, restricted to the Canary Islands and Madeira).
Description
The genus is named after the Ancient Greek for such plants, σόγχος. All are characterized by soft, somewhat irregularly lobed leaves that clasp the stem and, at least initially, form a basal rosette. The stem contains a milky latex. Flower heads are yellow and range in size from half to one inch in diameter; the florets are all of ray type. Sonchus fruits are single-seeded, dry and indehiscent. Sow thistles are common roadside plants, and while native to Eurasia and tropical Africa, they are found almost worldwide in temperate regions.
Mature sow thistle stems can range from 30 cm to 2 m (1 to 6 ft) tall, depending upon species and growing conditions. Coloration ranges from green to purple in older plants. Sow thistles exude a milky latex when any part of the plant is cut or damaged, and it is from this fact that the plants obtained the common name, "sow thistle", as they were fed to lactating sows in the belief that milk production would increase. Sow thistles are known as "milk thistles" in some regions, although milk thistle more commonly refers to the genus Silybum.
Species
The following 106 species are accepted by Plants of the World Online.
Sonchus acaulis
Sonchus x aemulus
Sonchus afromontanus
Sonchus araraticus
Sonchus arboreus
Sonchus arvensis
Sonchus asper
Sonchus x beltraniae
Sonchus berteroanus
Sonchus bipontini
Sonchus bornmuelleri
Sonchus bourgeaui
Sonchus brachylobus
Sonchus brachyotus
Sonchus brassicifolius
Sonchus briquetianus
Sonchus bupleuroides
Sonchus camporum
Sonchus canariensis
Sonchus capillaris
Sonchus cavanillesii
Sonchus congestus
Sonchus crassifolius
Sonchus daltonii
Sonchus dregeanus
Sonchus erzincanicus
Sonchus esperanzae
Sonchus fauces-orci
Sonchus fragilis
Sonchus friesii
Sonchus fruticosus
Sonchus gandogeri
Sonchus gigas
Sonchus gomeraensis
Sonchus grandifolius
Sonchus gummifer
Sonchus heterophyllus
Sonchus hierrensis
Sonchus hotha
Sonchus hydrophilus
Sonchus integrifolius
Sonchus jacottetianus
Sonchus jainii
Sonchus x jaquiniocephalus
Sonchus kirkii
Sonchus laceratus
Sonchus latifolius
Sonchus leptocephalus
Sonchus lidii
Sonchus lobatiflorus
Sonchus luxurians
Sonchus macrocarpus
Sonchus maculigerus
Sonchus malayanus
Sonchus marginatus
Sonchus maritimus
Sonchus masguindalii
Sonchus mauritanicus
Sonchus x maynari
Sonchus megalocarpus
Sonchus melanolepis
Sonchus micranthus
Sonchus microcarpus
Sonchus microcephalus
Sonchus nanus
Sonchus neriifolius
Sonchus novae-zelandiae - also known as Kirkianella novae-zelandiae
Sonchus x novocastellanus
Sonchus obtusilobus
Sonchus oleraceus
Sonchus ortunoi
Sonchus palmensis
Sonchus palustris
Sonchus parathalassius
Sonchus pendulus
Sonchus phoeniciformis
Sonchus pinnatifidus
Sonchus pinnatus
Sonchus pitardii
Sonchus platylepis
Sonchus x prudhommei
Sonchus pruinatus
Sonchus pustulatus
Sonchus radicatus
Sonchus regis-jubae
Sonchus regius
Sonchus x rokosensis
Sonchus x rotundilobus
Sonchus x rupicola
Sonchus saudensis
Sonchus schweinfurthii
Sonchus sinuatus
Sonchus sosnowskyi
Sonchus splendens
Sonchus stenophyllus
Sonchus suberosus
Sonchus sventenii
Sonchus tectifolius
Sonchus tenerrimus
Sonchus transcaspicus
Sonchus tuberifer
Sonchus ustulatus
Sonchus webbii
Sonchus wightianus
Sonchus wildpretii
Sonchus wilmsii
Invasive
In many areas sow thistles are considered noxious weeds, as they grow quickly in a wide range of conditions and their wind-borne seeds allow them to spread rapidly. Sonchus arvensis, the perennial sow thistle, is considered the most economically detrimental, as it can crowd out commercial crops, is a heavy consumer of nitrogen in soils, may deplete the soil water of land left fallow, and can regrow and sprout additional plants from its creeping roots. However, sow thistles are easily uprooted by hand, and their soft stems present little resistance to slashing or mowing.
Most livestock will readily devour sow thistle in preference to grass, and this lettuce-relative is edible and nutritious to humans—in fact this is the meaning of the second part of the Latin name of the common sow thistle, oleraceus. Attempts at weed control by herbicidal use, to the neglect of other methods, may have led to a proliferation of these species in some environments.
Cultivation
Sow thistles are common host plants for aphids. Gardeners may consider this a benefit or a curse; aphids may spread from sow thistle to other plants, but alternatively the sow thistle can encourage the growth of beneficial predators such as hoverflies. In this regard sow thistles make excellent sacrificial plants. Sonchus species are used as food plants by the larvae of some Lepidoptera including Celypha rufana and the broad-barred white, grey chi, nutmeg, and shark moths. The fly Tephritis formosa is known to attack the capitula of this plant.
Sow thistles have been used as fodder, particularly for rabbits, hence the other common names of "hare thistle" and "hare lettuce". They are also edible to humans as a leaf vegetable; old leaves and stalks can be bitter, but young leaves have a flavour similar to lettuce. Going by the name pūhā or rareke (raraki), it is a traditional food eaten in New Zealand by Māori. When cooked, the flavour is reminiscent of chard.
Uses
The greens were eaten by the indigenous people of North America. Edible raw when young, the older greens can also be eaten after cooking briefly.
| Biology and health sciences | Asterales | null |
1581671 | https://en.wikipedia.org/wiki/Healthy%20diet | Healthy diet | A healthy diet is a diet that maintains or improves overall health. A healthy diet provides the body with essential nutrition: fluid, macronutrients such as protein, micronutrients such as vitamins, and adequate fibre and food energy.
A healthy diet may contain fruits, vegetables, and whole grains, and may include little to no ultra-processed foods or sweetened beverages. The requirements for a healthy diet can be met from a variety of plant-based and animal-based foods, although additional sources of vitamin B12 are needed for those following a vegan diet. Various nutrition guides are published by medical and governmental institutions to educate individuals on what they should be eating to be healthy. Advertising may drive preferences towards unhealthy foods. To reverse this trend, consumers should be informed, motivated and empowered to choose healthy diets. Nutrition facts labels are also mandatory in some countries to allow consumers to choose between foods based on the components relevant to health.
It is estimated that in 2023, 40% of the world population could not afford a healthy diet.
The Food and Agriculture Organization and the World Health Organization have formulated four core principles of what constitutes healthy diets. According to these two organizations, healthy diets are:
Adequate, as they meet, without exceeding, our body’s energy and essential nutrient requirements in support of all the many body functions.
Diverse, as they include various nutritious foods within and across food groups to help secure the sufficient nutrients needed by our bodies.
Balanced, as they include energy from the three primary sources (protein, fats, and carbohydrates) in a balanced way, fostering healthy weight, growth and activity, and preventing disease.
Moderate, as they include only small quantities (or none) of foods that may have a negative impact on health, such as highly salty and sugary foods.
Recommendations
World Health Organization
The World Health Organization (WHO) makes the following five recommendations with respect to both populations and individuals:
Maintain a healthy weight by eating roughly the same number of calories that your body is using.
Limit intake of fats to no more than 30% of total caloric intake, preferring unsaturated fats to saturated fats. Avoid trans fats.
Eat at least 400 grams of fruits and vegetables per day (not counting potatoes, sweet potatoes, cassava, and other starchy roots). A healthy diet also contains legumes (e.g. lentils, beans), whole grains, and nuts.
Limit the intake of simple sugars to less than 10% of caloric intake (below 5% of calories or 25 grams may be even better).
Limit salt/sodium from all sources and ensure that salt is iodized. Less than 5 grams of salt per day can reduce the risk of cardiovascular disease. (A short sketch after this list turns these limits into a simple daily check.)
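A minimal sketch converting the WHO recommendations above into a daily check; the 9 and 4 kcal-per-gram energy factors for fat and sugar are standard, while the example intake figures are purely illustrative:

```python
def who_daily_check(kcal: float, fat_g: float, sugar_g: float,
                    salt_g: float, fruit_veg_g: float) -> dict[str, bool]:
    """Compare one day's intake against the WHO limits quoted above."""
    fat_share = fat_g * 9.0 / kcal      # fat supplies ~9 kcal per gram
    sugar_share = sugar_g * 4.0 / kcal  # sugar supplies ~4 kcal per gram
    return {
        "fat_below_30_percent": fat_share < 0.30,
        "sugar_below_10_percent": sugar_share < 0.10,
        "salt_below_5_g": salt_g < 5.0,
        "fruit_veg_at_least_400_g": fruit_veg_g >= 400.0,
    }

# Illustrative day: 2000 kcal, 60 g fat, 45 g free sugars, 4 g salt, 450 g fruit/veg
print(who_daily_check(2000, 60, 45, 4, 450))  # all True for this example
```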
The WHO has stated that insufficient fruit and vegetable intake is the cause of 2.8% of deaths worldwide.
Other WHO recommendations include:
ensuring that the foods chosen have sufficient vitamins and certain minerals;
avoiding directly poisonous (e.g. heavy metals) and carcinogenic (e.g. benzene) substances;
avoiding foods contaminated by human pathogens (e.g. E. coli, tapeworm eggs);
and replacing saturated fats with polyunsaturated fats in the diet, which can reduce the risk of coronary artery disease and diabetes.
United States Department of Agriculture
The Dietary Guidelines for Americans by the United States Department of Agriculture (USDA) recommends three healthy patterns of diet, summarized in the table below, for a 2000 kcal diet. These guidelines are increasingly adopted by various groups and institutions for recipe and meal plan development.
The guidelines emphasize both health and environmental sustainability and a flexible approach. The committee that drafted it wrote: "The major findings regarding sustainable diets were that a diet higher in plant-based foods, such as vegetables, fruits, whole grains, legumes, nuts, and seeds, and lower in calories and animal-based foods is more health promoting and is associated with less environmental impact than is the current U.S. diet." This pattern of eating can be achieved through a variety of dietary patterns, including the "Healthy U.S.-style Pattern", the "Healthy Vegetarian Pattern", and the "Healthy Mediterranean-style Pattern". Food group amounts are per day, unless noted per week.
American Heart Association / World Cancer Research Fund / American Institute for Cancer Research
The American Heart Association, World Cancer Research Fund, and American Institute for Cancer Research recommend a diet that consists mostly of unprocessed plant foods, with emphasis on a wide range of whole grains, legumes, and non-starchy vegetables and fruits. This healthy diet includes a wide range of non-starchy vegetables and fruits which provide different colors including red, green, yellow, white, purple, and orange. The recommendations note that tomato cooked with oil, allium vegetables like garlic, and cruciferous vegetables like cauliflower, provide some protection against cancer. This healthy diet is low in energy density, which may protect against weight gain and associated diseases. Finally, limiting consumption of sugary drinks, limiting energy-rich foods, including "fast foods" and red meat, and avoiding processed meats improves health and longevity. Overall, researchers and medical policymakers conclude that this healthy diet can reduce the risk of chronic disease and cancer.
It is recommended that children consume 25 grams or less of added sugar (100 calories) per day. Other recommendations include no extra sugars in those under two years old and less than one soft drink per week. As of 2017, decreasing total fat is no longer recommended, but instead, the recommendation to lower risk of cardiovascular disease is to increase consumption of monounsaturated fats and polyunsaturated fats, while decreasing consumption of saturated fats.
Harvard School of Public Health
The Nutrition Source of Harvard School of Public Health (HSPH) makes the following dietary recommendations:
Eat healthy fats: healthy fats are necessary and beneficial for health. HSPH "recommends the opposite of the low-fat message promoted for decades by the USDA" and "does not set a maximum on the percentage of calories people should get each day from healthy sources of fat." Healthy fats include polyunsaturated and monounsaturated fats, found in vegetable oils, nuts, seeds, and fish. Foods containing trans fats are to be avoided, while foods high in saturated fats like red meat, butter, cheese, ice cream, coconut and palm oil negatively impact health and should be limited.
Eat healthy protein: the majority of protein should come from plant sources when possible: lentils, beans, nuts, seeds, whole grains; avoid processed meats like bacon.
Eat mostly vegetables, fruit, and whole grains.
Drink water. Consume sugary beverages, juices, and milk only in moderation. Artificially sweetened beverages may contribute to weight gain, as sweet drinks encourage cravings for sweetness. 100% fruit juice is high in calories. The ideal intake of milk and calcium has not been established.
Pay attention to salt intake from commercially prepared foods: most of the dietary salt comes from processed foods, "not from salt added to cooking at home or even from salt added at the table before eating."
Vitamins and minerals: most must be obtained from food because the body does not produce them. They are provided by a diet containing healthy fats, healthy protein, vegetables, fruit, milk, and whole grains.
Pay attention to the carbohydrate package: the type of carbohydrate in the diet is more important than the amount. Good sources of carbohydrate are vegetables, fruits, beans, and whole grains. Avoid sugared sodas, 100% fruit juice, artificially sweetened drinks, and other highly processed foods.
Other than nutrition, the guide recommends staying active and maintaining a healthy body weight.
Others
David L. Katz, who reviewed the most prevalent popular diets in 2014, noted:
The weight of evidence strongly supports a theme of healthful eating while allowing for variations on that theme. A diet of minimally processed foods close to nature, predominantly plants, is decisively associated with health promotion and disease prevention and is consistent with the salient components of seemingly distinct dietary approaches.
Efforts to improve public health through diet are forestalled not for want of knowledge about the optimal feeding of Homo sapiens but for distractions associated with exaggerated claims, and our failure to convert what we reliably know into what we routinely do. Knowledge in this case is not, as of yet, power; would that it were so.
Marion Nestle expresses the mainstream view among scientists who study nutrition:
The basic principles of good diets are so simple that I can summarize them in just ten words: eat less, move more, eat lots of fruits and vegetables. For additional clarification, a five-word modifier helps: go easy on junk foods. Follow these precepts and you will go a long way toward preventing the major diseases of our overfed society—coronary heart disease, certain cancers, diabetes, stroke, osteoporosis, and a host of others.... These precepts constitute the bottom line of what seem to be the far more complicated dietary recommendations of many health organizations and national and international governments—the forty-one "key recommendations" of the 2005 Dietary Guidelines, for example. ... Although you may feel as though advice about nutrition is constantly changing, the basic ideas behind my four precepts have not changed in half a century. And they leave plenty of room for enjoying the pleasures of food.
Historically, a healthy diet was defined as one in which more than 55% of energy comes from carbohydrates, less than 30% from fat, and about 15% from protein. This view is shifting toward a more comprehensive framing of dietary needs as a global need for various nutrients with complex interactions, rather than needs defined per nutrient type.
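To make the historical percentages concrete, they can be converted into daily gram targets with the standard Atwater factors of 4, 9, and 4 kcal per gram for carbohydrate, fat, and protein. A small sketch (the script itself is illustrative, not from the source):

```python
# Converting the historical macronutrient percentages into grams for a
# 2000 kcal diet, using the standard Atwater factors (4/9/4 kcal per gram).
ATWATER = {"carbohydrate": 4.0, "fat": 9.0, "protein": 4.0}
shares = {"carbohydrate": 0.55, "fat": 0.30, "protein": 0.15}
total_kcal = 2000.0

for nutrient, share in shares.items():
    grams = total_kcal * share / ATWATER[nutrient]
    print(f"{nutrient}: {total_kcal * share:.0f} kcal = {grams:.0f} g/day")
# carbohydrate: 1100 kcal = 275 g; fat: 600 kcal = 67 g; protein: 300 kcal = 75 g
```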
Specific conditions
Diabetes
A healthy diet in combination with being active can help those with diabetes keep their blood sugar in check. The US CDC advises individuals with diabetes to plan for regular, balanced meals and to include more nonstarchy vegetables, reduce added sugars and refined grains, and focus on whole foods instead of highly processed foods. Generally, people with diabetes and those at risk are encouraged to increase their fiber intake.
Hypertension
A low-sodium diet is beneficial for people with high blood pressure. A 2008 Cochrane review concluded that a long-term (more than four weeks) low-sodium diet lowers blood pressure, both in people with hypertension (high blood pressure) and in those with normal blood pressure.
The DASH diet (Dietary Approaches to Stop Hypertension) is a diet promoted by the National Heart, Lung, and Blood Institute (part of the NIH, a United States government organization) to control hypertension. A major feature of the plan is limiting intake of sodium, and the diet also generally encourages the consumption of nuts, whole grains, fish, poultry, fruits, and vegetables while lowering the consumption of red meats, sweets, and sugar. It is also "rich in potassium, magnesium, and calcium, as well as protein".
The Mediterranean diet, which includes limiting consumption of red meat and using olive oil in cooking, has also been shown to improve cardiovascular outcomes.
Obesity
It is estimated that more than 675 million adults are obese. Healthy diets combined with physical exercise can help people who are overweight or obese lose weight, although this approach is not by itself an effective long-term treatment for obesity: it is primarily effective for only a short period (up to one year), after which some of the weight is typically regained. A meta-analysis found no difference between diet types (low-fat, low-carbohydrate, and low-calorie), all producing similarly modest weight loss. Such weight loss is by itself insufficient to move a person from an 'obese' body mass index (BMI) category to a 'normal' BMI.
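The BMI categories referred to here follow the standard WHO adult cut-offs, with BMI computed as weight in kilograms divided by height in metres squared. A minimal sketch, with a hypothetical example person, of why a few kilograms of loss rarely changes category:

```python
# Sketch of the BMI arithmetic behind the 'obese' vs 'normal' categories
# mentioned above (WHO adult cut-offs; the example person is hypothetical).
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    if b < 18.5: return "underweight"
    if b < 25.0: return "normal"
    if b < 30.0: return "overweight"
    return "obese"

# A 100 kg, 1.75 m adult has BMI ~32.7 ('obese'); losing 4 kg only brings
# BMI to ~31.3, still 'obese', illustrating why modest diet-driven weight
# loss rarely changes BMI category on its own.
for w in (100.0, 96.0):
    b = bmi(w, 1.75)
    print(f"{w:.0f} kg -> BMI {b:.1f} ({category(b)})")
```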
Gluten-related disorders
Gluten, a mixture of proteins found in wheat and related grains including barley, rye, and oats, and in all their species and hybrids (such as spelt, kamut, and triticale), causes health problems for those with gluten-related disorders, including celiac disease, non-celiac gluten sensitivity, gluten ataxia, dermatitis herpetiformis, and wheat allergy. For these people, the gluten-free diet is the only available treatment.
Epilepsy
The ketogenic diet is a treatment to reduce epileptic seizures for adults and children when managed by a health care team.
Research
Preliminary research indicated that a diet high in fruit and vegetables may decrease the risk of cardiovascular disease and death, but not cancer. Eating a healthy diet and getting enough exercise can maintain body weight within the normal range and reduce the risk of obesity in most people. A 2021 scientific review of evidence on diets for lowering the risk of atherosclerosis found that:
low consumption of salt and foods of animal origin, and increased intake of plant-based foods—whole grains, fruits, vegetables, legumes, and nuts—are linked with reduced atherosclerosis risk. The same applies for the replacement of butter and other animal/tropical fats with olive oil and other unsaturated-fat-rich oil. [...] With regard to meat, new evidence differentiates processed and red meat—both associated with increased CVD risk—from poultry, showing a neutral relationship with CVD for moderate intakes. [...] New data endorse the replacement of most high glycemic index (GI) foods with both whole grain and low GI cereal foods.
Scientific research is also investigating the impact of nutrition on healthspan and lifespan beyond any specific range of diseases.
Moreover, not only do the components of a diet matter: total caloric content and eating patterns may also affect health. Dietary restriction, such as caloric restriction, is being studied as a potentially beneficial element of eating patterns for both healthspan and lifespan.
Affordability
The UN Food and Agriculture Organization estimates that in 2023, 40% of the world population could not afford a healthy diet.
Unhealthy diets
An unhealthy diet is a major risk factor for a number of chronic diseases, including high blood pressure, high cholesterol, diabetes, abnormal blood lipids, overweight/obesity, cardiovascular diseases, and cancer. Estimates indicate that non-communicable diseases (NCDs) such as diabetes and cardiovascular disease are responsible for 41 million deaths each year, about three-quarters of all deaths globally. The World Health Organization has estimated that 2.7 million deaths each year are attributable to diets low in fruit and vegetables. At least 1.2 billion women are deficient in vitamins and minerals, which increases their risk of chronic fatigue and low resistance to infection, and of birth defects in their offspring.
Globally, unhealthy diets are estimated to cause about 19% of gastrointestinal cancers, 31% of ischaemic heart disease, and 11% of strokes, making them one of the leading preventable causes of death worldwide and the fourth leading risk factor for any disease. The Western pattern diet, for example, is "rich in red meat, dairy products, processed and artificially sweetened foods, and salt, with minimal intake of fruits, vegetables, fish, legumes, and whole grains," in contrast to the Mediterranean diet, which is associated with less morbidity and mortality.
Dietary patterns that lead to non-communicable diseases also generate productivity losses. A true cost accounting (TCA) assessment of the hidden impacts of agrifood systems estimated that unhealthy dietary patterns generated more than USD 9 trillion in health-related hidden costs in 2020, 73 percent of the total quantified hidden costs of global agrifood systems (USD 12.7 trillion). Globally, average productivity losses per person from dietary intake were equivalent to 7 percent of GDP at purchasing power parity (PPP) in 2020; low-income countries report the lowest value (4 percent), while other income categories report 7 percent or higher.
Fad diet
Some publicized diets, often referred to as fad diets, make exaggerated claims of fast weight loss or other health advantages, such as longer life or detoxification without clinical evidence; many fad diets are based on highly restrictive or unusual food choices. Celebrity endorsements (including celebrity doctors) are frequently associated with such diets, and the individuals who develop and promote these programs often profit considerably.
Public health
Consumers are generally aware of the elements of a healthy diet, but find nutrition labels and diet advice in popular media confusing.
Vending machines are criticized for being avenues of entry into schools for junk food promoters, but there is little in the way of regulation and it is difficult for most people to properly analyze the real merits of a company referring to itself as "healthy." The Committee of Advertising Practice in the United Kingdom launched a proposal to limit media advertising for food and soft drink products high in fat, salt, or sugar. The British Heart Foundation released its own government-funded advertisements, labeled "Food4Thought", which were targeted at children and adults to discourage unhealthy habits of consuming junk food.
From a psychological and cultural perspective, a healthier diet may be difficult to achieve for people with poor eating habits. This may be due to tastes acquired in childhood and preferences for sugary, salty, and fatty foods. In 2018, the UK chief medical officer recommended that sugar and salt be taxed to discourage consumption. The UK government 2020 Obesity Strategy encourages healthier choices by restricting point-of-sale promotions of less-healthy foods and drinks.
Population-level health interventions whose effectiveness has been studied include food pricing strategies, mass media campaigns, and worksite wellness programs. A one-peso-per-liter price intervention on sugar-sweetened beverages (SSBs) implemented in Mexico produced a 12% reduction in SSB purchases. Mass media campaigns in Pakistan and the USA aimed at increasing vegetable and fruit consumption found positive changes in dietary behavior. Reviews of worksite wellness interventions found evidence linking the programs to weight loss and increased fruit and vegetable consumption.
Other animals
Animals that are kept by humans also benefit from a healthy diet, but the requirements of such diets may be very different from the ideal human diet.
| Biology and health sciences | Health and fitness: General | Health |
1584031 | https://en.wikipedia.org/wiki/Molten-salt%20reactor | Molten-salt reactor | A molten-salt reactor (MSR) is a class of nuclear fission reactor in which the primary nuclear reactor coolant and/or the fuel is a mixture of molten salt with a fissile material.
Two research MSRs operated in the United States in the mid-20th century. The 1950s Aircraft Reactor Experiment (ARE) was primarily motivated by the technology's compact size, while the 1960s Molten-Salt Reactor Experiment (MSRE) aimed to demonstrate a nuclear power plant using a thorium fuel cycle in a breeder reactor.
Renewed research into Generation IV reactor designs revived interest in the technology in the 21st century, with multiple nations starting projects. As of June 2023, China has been operating its TMSR-LF1 thorium unit.
Properties
MSRs eliminate the nuclear meltdown scenario present in water-cooled reactors because the fuel mixture is already kept in a molten state. In emergency scenarios, the fuel mixture is designed to drain without pumping from the core into a containment vessel, where it solidifies, quenching the reaction. In addition, hydrogen evolution does not occur, which eliminates the risk of hydrogen explosions (as in the Fukushima nuclear disaster). MSRs operate at or close to atmospheric pressure, rather than the 75–150 times atmospheric pressure of a typical light-water reactor (LWR), reducing the need for, and cost of, reactor pressure vessels. The gaseous fission products (Xe and Kr) have little solubility in the fuel salt and can be safely captured as they bubble out of the fuel, rather than increasing the pressure inside fuel tubes as happens in conventional reactors. MSRs can be refueled while operating (essentially online nuclear reprocessing), whereas conventional reactors must shut down for refueling (notable exceptions include pressure-tube reactors such as the heavy-water CANDU and the Atucha-class PHWRs, the light-water-cooled, graphite-moderated RBMK, and British gas-cooled reactors such as Magnox and the AGR). MSR operating temperatures are around 700 °C, significantly higher than traditional LWRs at around 300 °C. This increases electricity-generation efficiency and process-heat opportunities.
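The pressure advantage can be made concrete with the thin-wall hoop-stress formula sigma = P*r/t, under which the required wall thickness grows linearly with operating pressure. A rough sketch; the vessel radius and allowable stress below are assumed, representative values, not from the source:

```python
# Illustration of why low operating pressure shrinks the pressure boundary:
# thin-wall hoop stress sigma = P * r / t, so required wall thickness scales
# linearly with pressure. All numbers below are assumed, representative values.
def wall_thickness(pressure_pa: float, radius_m: float, allowable_pa: float) -> float:
    """Minimum wall thickness for a thin-walled cylinder (hoop stress only)."""
    return pressure_pa * radius_m / allowable_pa

radius = 2.0        # vessel radius, m (assumed)
allowable = 130e6   # allowable steel stress, Pa (assumed)

for name, p in [("LWR (~155 bar)", 15.5e6), ("MSR (~1 bar)", 0.1e6)]:
    t = wall_thickness(p, radius, allowable)
    print(f"{name}: wall thickness ~ {t * 100:.1f} cm from pressure alone")
# Roughly 24 cm of steel for the LWR versus millimetres for the MSR, whose
# vessel is instead sized by other loads (weight, seismic, corrosion allowance).
```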
Relevant design challenges include the corrosivity of hot salts and the changing chemical composition of the salt as it is transmuted by the neutron flux.
MSRs, especially those with fuel dissolved in the salt, offer lower operating pressures and higher temperatures. In this respect an MSR is more similar to a liquid-metal-cooled reactor than to a conventional light-water-cooled reactor. MSR designs are often breeder reactors with a closed fuel cycle, as opposed to the once-through fuel cycle currently used in conventional nuclear power generators.
MSRs exploit a negative temperature coefficient of reactivity and a large allowable temperature rise to prevent criticality accidents. For designs with the fuel in the salt, the salt thermally expands immediately with power excursions. In conventional reactors the negative reactivity is delayed since the heat from the fuel must be transferred to the moderator. An additional method is to place a separate, passively cooled container below the reactor. Fuel drains into the container during malfunctions or maintenance, which stops the reaction.
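A toy point-kinetics model illustrates this feedback: if reactivity falls linearly as the salt heats, a small reactivity insertion raises power and temperature only until the feedback cancels it. Every number below is an illustrative assumption, not a design value, and delayed neutrons are deliberately omitted:

```python
# Toy model of salt temperature feedback on reactivity (all values assumed).
alpha = -5e-5    # reactivity per kelvin of salt heating (assumed)
Lam = 1e-4       # effective neutron generation time, s (assumed)
C = 500.0        # salt heat capacity, MJ/K (assumed)
P_cool = 100.0   # constant heat removal, MW (assumed)
T0 = 700.0       # reference salt temperature, deg C

P, T, dt = 100.0, 700.0, 1e-3
rho_in = 5e-4    # small step reactivity insertion (assumed)
for _ in range(int(10 / dt)):            # simulate 10 s
    rho = rho_in + alpha * (T - T0)      # hotter salt -> less reactive
    P += P * (rho / Lam) * dt            # prompt point kinetics
    T += (P - P_cool) / C * dt           # salt stores any excess heat

# The feedback cancels the insertion once the salt is rho_in/|alpha| = 10 K
# hotter; this undamped toy model then oscillates around (100 MW, 710 deg C)
# rather than running away.
print(f"after 10 s: P = {P:.0f} MW, T = {T:.1f} deg C")
```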
The temperatures of some designs are high enough to produce process heat, which led them to be included on the GEN-IV roadmap.
Advantages
MSRs offer many potential advantages over light water reactors:
Passive decay heat removal is achieved in MSRs. In some designs, the fuel and the coolant are a single fluid, so a loss of coolant carries the fuel with it. Fluoride salts dissolve poorly in water, and do not form burnable hydrogen. The molten salt coolant is not damaged by neutron bombardment, though the reactor vessel is.
A low-pressure MSR does not require an expensive, steel core containment vessel, piping, and safety equipment. However, most MSR designs place radioactive fluid in direct contact with pumps and heat exchangers.
MSRs enable cheaper closed nuclear fuel cycles, because they can operate with slow neutrons. Closed fuel cycles can reduce environmental impacts: chemical separation turns long-lived actinides into reactor fuel. Discharged wastes are mostly fission products with shorter half-lives. This can reduce the needed containment to 300 years, versus the tens of thousands of years needed for light-water reactor spent fuel (see the decay sketch after this list).
The fuel's liquid phase can be pyroprocessed to separate fission products from fuels. This may have advantages over conventional reprocessing.
Fuel rod fabrication is replaced with salt synthesis.
Some designs are compatible with fast neutrons, which can "burn" transuranic elements such as 240Pu and 241Pu (reactor-grade plutonium) from LWRs.
An MSR can react to load changes in under 60 seconds (unlike LWRs that suffer from xenon poisoning).
Molten-salt reactors can run at high temperatures, yielding high thermal efficiency. This reduces size, expense, and environmental impacts.
MSRs can offer a high "specific power" (high power at low mass), as demonstrated by ARE.
A good potential neutron economy suggests that MSRs may be able to exploit the neutron-poor thorium fuel cycle.
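As a rough illustration of the 300-year figure in the waste item above, the dominant medium-lived fission products Sr-90 and Cs-137 have half-lives near 30 years, so 300 years spans roughly ten half-lives. A sketch (half-lives are standard nuclear data; the script itself is illustrative):

```python
# Activity remaining after 300 years for two key fission products.
half_lives = {"Sr-90": 28.8, "Cs-137": 30.1}  # years

for nuclide, t_half in half_lives.items():
    fraction = 0.5 ** (300.0 / t_half)
    print(f"{nuclide}: {fraction:.2e} of initial activity after 300 years")
# Activity falls by a factor of ~1000 over ten half-lives, whereas the
# long-lived actinides in LWR spent fuel decay over tens of millennia --
# hence the much shorter containment requirement quoted above.
```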
Disadvantages
In circulating-fuel-salt designs, radionuclides dissolved in fuel contact equipment such as pumps and heat exchangers, potentially requiring fully remote maintenance.
Some MSRs require onsite chemical processing to manage core mixture and remove fission products.
Regulatory frameworks must evolve to accommodate these reactors' non-traditional design features.
Some MSR designs rely on expensive nickel alloys to contain the molten salt. Such alloys are prone to embrittlement under high neutron flux.
Corrosion risk. Molten salts require careful management of their oxidation state to manage corrosion risks. This is particularly challenging for circulating designs, in which a mix of isotopes and their decay products circulates through the reactor. Static designs benefit from modularising the problem: the fuel salt is contained within fuel pins whose regular replacement, primarily due to neutron irradiation, is normalized, while the coolant salt has a simpler chemical composition and poses a corrosion risk neither to the fuel pins nor to the reactor vessel. MSRs developed at ORNL in the 1960s were safe to operate only for a few years and operated at only about 650 °C. Corrosion risks include dissolution of chromium by liquid fluoride thorium salts at temperatures greater than 700 °C, endangering stainless steel components. Neutron radiation can also transmute common alloying agents such as Co and Ni, shortening lifespan. Lithium salts such as FLiBe warrant the use of lithium-7 to reduce tritium generation (tritium can permeate stainless steels, cause embrittlement, and escape into the environment); the relevant neutron reactions are sketched after this list. ORNL developed Hastelloy N to help address these issues, while other structural steels, such as 316H, 800H, and Inconel 617, may also be acceptable.
Some MSR designs can be turned into a breeder reactor to produce weapons-grade nuclear material.
MSRE and ARE used highly enriched uranium approaching weapons grade. Such enrichment levels would be illegal under most modern power-plant regulatory regimes; most modern designs employ lower-enriched fuels.
Neutron damage to solid moderator materials can limit the core lifetime. For example, MSRE was designed so that its graphite moderator had loose tolerances, so neutron damage could change them without consequences. "Two fluid" MSR designs do not use graphite piping because graphite changes size when bombarded with neutrons. MSRs using fast neutrons cannot use graphite, because it moderates neutrons.
Thermal MSRs have lower breeding ratios than fast-neutron breeders, though their doubling time may be shorter.
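For reference, the lithium reactions behind the tritium concern in the corrosion item above are standard nuclear physics; a sketch, not taken from the source text:

$$ {}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} \qquad (Q \approx +4.8\ \mathrm{MeV}) $$
$$ {}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n' \qquad (\text{threshold} \approx 2.5\ \mathrm{MeV}) $$

The first reaction proceeds readily with thermal neutrons, which is why enrichment in lithium-7 is emphasized; the second requires fast neutrons, so some tritium production persists even with pure lithium-7.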
Coolant
MSRs can be cooled in various ways, including using molten salts.
Molten-salt-cooled solid-fuel reactors are variously called "molten-salt reactor system" in the Generation IV proposal, molten-salt converter reactors (MSCR), advanced high-temperature reactors (AHTRs), or fluoride high-temperature reactors (FHR, preferred DOE designation).
FHRs cannot reprocess fuel easily and have fuel rods that need to be fabricated and validated, requiring up to twenty years from project inception. The FHR retains the safety and cost advantages of a low-pressure, high-temperature coolant, also shared by liquid-metal-cooled reactors. Notably, steam is not created in the core (as it is in boiling water reactors), and no large, expensive steel pressure vessel (as required for pressurized water reactors) is needed. Since it can operate at high temperatures, the conversion of heat to electricity can use an efficient, lightweight Brayton-cycle gas turbine.
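The efficiency gain from higher coolant temperature follows directly from the Carnot bound, eta = 1 - T_cold/T_hot. A small sketch with assumed representative temperatures (illustrative upper bounds, not actual plant data):

```python
# Illustrative thermodynamic upper bounds (Carnot), not actual plant figures.
def carnot(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal Carnot efficiency for hot/cold temperatures given in Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Assumed representative temperatures: LWR ~300 C, FHR/MSR ~700 C,
# heat rejection at ~35 C in both cases.
for name, t_hot in [("LWR", 300.0), ("FHR", 700.0)]:
    print(f"{name}: Carnot limit = {carnot(t_hot, 35.0):.0%}")
# Real steam/Brayton cycles reach only a fraction of the Carnot limit, but
# the ordering holds: higher coolant temperature means higher efficiency.
```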
Much of the current research on FHRs is focused on small, compact heat exchangers that reduce molten salt volumes and associated costs.
Molten salts can be highly corrosive, and corrosivity increases with temperature. For the primary cooling loop, a material is needed that can withstand corrosion at high temperatures and intense radiation. Experiments show that Hastelloy-N and similar alloys are suited to these tasks at operating temperatures up to about 700 °C, although operating experience is limited. Still higher operating temperatures are desirable: at 850 °C, thermochemical production of hydrogen becomes possible. Materials for this temperature range have not been validated, though carbon composites, molybdenum alloys (e.g. TZM), carbides, and refractory-metal or ODS alloys might be feasible.
Fused salt selection
The salt mixtures are chosen to make the reactor safer and more practical.
Fluorine
Fluorine has only one stable isotope (19F), and does not easily become radioactive under neutron bombardment. Compared to chlorine and other halides, fluorine also absorbs fewer neutrons and slows ("moderates") neutrons better. Low-valence fluorides boil at high temperatures, though many pentafluorides and hexafluorides boil at low temperatures. They must be very hot before they break down into their constituent elements; such molten salts are "chemically stable" when maintained well below their boiling points. Fluoride salts dissolve poorly in water and do not form burnable hydrogen.
Chlorine
Chlorine has two stable isotopes (35Cl and 37Cl), as well as a slow-decaying isotope between them, 36Cl, which is formed by neutron absorption in 35Cl.
Chlorides permit fast breeder reactors to be constructed, though much less research has been done on reactor designs using chloride salts. Chlorine, unlike fluorine, must be purified to isolate the heavier stable isotope, 37Cl, thus reducing the production of sulfur tetrachloride that occurs when 35Cl absorbs a neutron to become 36Cl, which then degrades by beta decay to 36S.
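The activation chain at issue can be written out explicitly (standard nuclear data, not from the source; note that the decay branch of 36Cl to 36S is the minor one, with most decays going to 36Ar):

$$ {}^{35}\mathrm{Cl}(n,\gamma)\,{}^{36}\mathrm{Cl}, \qquad t_{1/2}\!\left({}^{36}\mathrm{Cl}\right) \approx 3\times10^{5}\ \mathrm{y} $$
$$ {}^{36}\mathrm{Cl} \;\xrightarrow{\beta^-}\; {}^{36}\mathrm{Ar}\ (\approx 98\%), \qquad {}^{36}\mathrm{Cl} \;\xrightarrow{\varepsilon/\beta^+}\; {}^{36}\mathrm{S}\ (\approx 2\%) $$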
Lithium
Lithium must be in the form of purified lithium-7 (7Li), because lithium-6 (6Li) effectively captures neutrons and produces tritium. Even if pure 7Li is used, salts containing lithium cause significant tritium production, comparable with heavy water reactors.
Mixtures
Reactor salts are usually close to eutectic mixtures to reduce their melting point. A low melting point simplifies melting the salt at startup and reduces the risk of the salt freezing as it is cooled in the heat exchanger.
Due to the high "redox window" of fused fluoride salts, the redox potential of the fused salt system can be changed. Fluorine-lithium-beryllium ("FLiBe") can be used with beryllium additions to lower the redox potential and nearly eliminate corrosion. However, since beryllium is extremely toxic, special precautions must be engineered into the design to prevent its release into the environment. Many other salts can cause plumbing corrosion, especially if the reactor is hot enough to make highly reactive hydrogen.
To date, most research has focused on FLiBe, because lithium and beryllium are reasonably effective moderators and form a eutectic salt mixture with a lower melting point than either constituent salt. Beryllium also performs neutron doubling, improving the neutron economy: the beryllium nucleus emits two neutrons after absorbing a single neutron. For the fuel-carrying salts, generally 1% or 2% (by mole) of UF4 is added. Thorium and plutonium fluorides have also been used.
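The neutron-doubling process mentioned above can be sketched as the fast-neutron (n,2n) reaction on beryllium-9, followed by the prompt breakup of beryllium-8 (standard nuclear physics, not given explicitly in the source):

$$ {}^{9}\mathrm{Be} + n \;\rightarrow\; {}^{8}\mathrm{Be} + 2n, \qquad {}^{8}\mathrm{Be} \;\rightarrow\; 2\,{}^{4}\mathrm{He} $$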
Fused salt purification
Techniques for preparing and handling molten salt were first developed at ORNL. The purpose of salt purification is to eliminate oxides, sulfur and metal impurities. Oxides could result in the deposition of solid particles in reactor operation. Sulfur must be removed because of its corrosive attack on nickel-based alloys at operational temperature. Structural metals such as chromium, nickel, and iron must be removed for corrosion control.
A water-content-reduction purification stage using HF and a helium sweep gas was specified to run at 400 °C. Oxide and sulfur contamination in the salt mixtures was removed by gas sparging with an HF/H2 mixture, with the salt heated to 600 °C. Structural-metal contamination was removed by hydrogen gas sparging at 700 °C. Solid ammonium hydrofluoride has been proposed as a safer alternative for oxide removal.
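Representative reactions for these purification stages might be written as follows; these are illustrative of the chemistry described above rather than equations given in the source:

$$ \mathrm{O^{2-} + 2\,HF \;\rightarrow\; H_2O\!\uparrow + 2\,F^-} \qquad \text{(oxide removal)} $$
$$ \mathrm{S^{2-} + 2\,HF \;\rightarrow\; H_2S\!\uparrow + 2\,F^-} \qquad \text{(sulfur removal)} $$
$$ \mathrm{FeF_2 + H_2 \;\rightarrow\; Fe\!\downarrow + 2\,HF} \qquad \text{(structural-metal reduction)} $$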
Fused salt processing
The possibility of online processing can be an MSR advantage. Continuous processing would reduce the inventory of fission products, control corrosion and improve neutron economy by removing fission products with high neutron absorption cross-section, especially xenon. This makes the MSR particularly suited to the neutron-poor thorium fuel cycle. Online fuel processing can introduce risks of fuel processing accidents, which can trigger release of radio isotopes.
In some thorium breeding scenarios, the intermediate product protactinium-233 would be removed from the reactor and allowed to decay into highly pure 233U, an attractive bomb-making material. More modern designs propose to use a lower specific power or a separate thorium breeding blanket. This dilutes the protactinium to such an extent that few protactinium atoms absorb a second neutron or, via an (n,2n) reaction (in which an incident neutron is not absorbed but instead knocks a neutron out of the nucleus), generate 232U. Because 232U has a short half-life and its decay chain contains hard gamma emitters, it makes the isotopic mix of uranium less attractive for bomb-making. This benefit would come at the added expense of a larger fissile inventory, or a two-fluid design with a large quantity of blanket salt.
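The breeding chain and the proliferation-resistance mechanism can be summarized as follows (half-lives are standard nuclear data; the second line shows one route to 232U consistent with the (n,2n) mechanism described above):

$$ {}^{232}\mathrm{Th}(n,\gamma)\,{}^{233}\mathrm{Th} \;\xrightarrow{\beta^-,\ \sim 22\ \mathrm{min}}\; {}^{233}\mathrm{Pa} \;\xrightarrow{\beta^-,\ \sim 27\ \mathrm{d}}\; {}^{233}\mathrm{U} $$
$$ {}^{233}\mathrm{Pa}(n,2n)\,{}^{232}\mathrm{Pa} \;\xrightarrow{\beta^-,\ \sim 1.3\ \mathrm{d}}\; {}^{232}\mathrm{U} $$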
The necessary fuel salt reprocessing technology has been demonstrated, but only at laboratory scale. A prerequisite to full-scale commercial reactor design is the R&D to engineer an economically competitive fuel salt cleaning system.
Fuel reprocessing
Reprocessing refers to the chemical separation of fissionable uranium and plutonium from spent fuel. Such recovery could increase the risk of nuclear proliferation. In the United States the regulatory regime has varied dramatically across administrations.
Costs and economics
A systematic literature review from 2020 concluded that information on the economics and financing of MSRs is very limited and of low quality, and that cost estimates are therefore uncertain.
In the specific case of the stable salt reactor (SSR) where the radioactive fuel is contained as a molten salt within fuel pins and the primary circuit is not radioactive, operating costs are likely to be lower.
Types of molten-salt reactors
While many design variants have been proposed, there are three main categories regarding the role of molten salt: as both fuel and coolant, as fuel only, or as coolant only.
The use of molten salt as fuel and as coolant are independent design choices – the original circulating-fuel-salt MSRE and the more recent static-fuel-salt SSR use salt as fuel and salt as coolant; the DFR uses salt as fuel but metal as coolant; and the FHR has solid fuel but salt as coolant.
Designs
MSRs can be burners or breeders, with fast, epithermal, or thermal neutron spectra. Thermal designs employ a moderator (usually graphite) to slow the neutrons down. They can accept a variety of fuels (low-enriched uranium, thorium, depleted uranium, waste products) and salt constituents (fluoride, chloride, lithium, beryllium, mixed). The fuel cycle can be either closed or once-through, and the plant can be monolithic or modular, large or small, with a loop, modular or integral configuration. Variations include:
Molten salt fast reactor
The molten-salt fast reactor (MSFR) is a proposed design in which the fuel is dissolved in a fluoride salt coolant. The MSFR is one of the two MSR variants selected by the Generation IV International Forum (GIF) for further development, the other being the FHR/AHTR. It is based on a fast neutron spectrum and is regarded as a possible long-term substitute for solid-fueled fast reactors. MSFRs have been studied for roughly a decade, mainly through calculations and the determination of basic physical and chemical properties, in the European Union and the Russian Federation. An MSFR is regarded as sustainable because there is no shortage of fuel; in theory, its operation neither generates nor requires large amounts of transuranic (TRU) elements, and once steady state is achieved there is no longer a need for uranium enrichment facilities.
MSFRs may be breeder reactors. They operate without a moderator such as graphite in the core, so graphite lifespan is no longer a limitation; the result is a breeder with a fast neutron spectrum operating on the thorium fuel cycle. MSFRs contain relatively small initial inventories of 233U. Because they run on liquid fuel with no solid matter inside the core, they can reach a specific power much higher than that of reactors using solid fuel, and the heat produced goes directly into the heat-transfer fluid. In the MSFR, a small amount of molten salt is set aside, processed to remove fission products, and then returned to the reactor, giving MSFRs the capability of reprocessing the fuel without stopping the reactor. This differs from solid-fueled reactors, which require separate facilities to produce solid fuel and to process spent nuclear fuel. The MSFR can operate with a large variety of fuel compositions owing to its online fuel control and flexible fuel processing.
The standard MSFR would be a 3,000 MWth reactor with a total fuel-salt volume of 18 m3 and a mean fuel temperature of 750 °C. The core is a compact cylinder with a height-to-diameter ratio of 1, in which liquid fluoride fuel salt flows from bottom to top. The return circulation of the salt, from top to bottom, is divided among 16 groups of pumps and heat exchangers located around the core. The fuel salt takes approximately 3 to 4 seconds to complete a full cycle. At any given time during operation, half of the total fuel-salt volume is in the core; the rest is in the external fuel circuit (salt collectors, salt-bubble separators, fuel heat exchangers, pumps, salt injectors and pipes).

During operation, the fuel-salt circulation speed can be adjusted by controlling the power of the pumps in each sector, and the intermediate-fluid circulation speed by controlling the power of the intermediate-circuit pumps. The temperature of the intermediate fluid in the intermediate exchangers can be managed with a double bypass, which holds the intermediate-fluid temperature constant at the conversion-exchanger inlet while raising it in a controlled way at the inlet of the intermediate exchangers. The core temperature can be adjusted by varying the proportion of bubbles injected into the core, which lowers the salt density and hence the mean fuel-salt temperature; a 3% bubble fraction typically lowers the fuel-salt temperature by about 100 °C.

MSFRs have two draining modes: controlled routine draining and emergency draining. During controlled routine draining, fuel salt is transferred to actively cooled storage tanks; the fuel temperature can be lowered beforehand, although this may slow the process. This type of draining could be done every 1 to 5 years when the sectors are replaced. Emergency draining is performed when an irregularity occurs during operation: the fuel salt is drained directly into the emergency draining tank, by active devices or by passive means, and must be fast to limit fuel-salt heating in a loss-of-heat-removal event. The emergency draining system is triggered by redundant and reliable detection and opening devices.
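The quoted figures can be cross-checked with simple arithmetic. In the sketch below, the salt density and heat capacity are assumed values for a LiF-ThF4 melt, not numbers from the source:

```python
# Back-of-envelope check on the MSFR figures quoted above.
P_th = 3000e6          # thermal power, W (from text)
V_total = 18.0         # total fuel-salt volume, m^3 (from text)
t_loop = 3.5           # full circulation time, s (text: 3-4 s)
rho = 4100.0           # salt density, kg/m^3 (assumed)
cp = 1300.0            # salt specific heat, J/(kg K) (assumed)

V_core = V_total / 2                   # half the salt is in-core (from text)
power_density = P_th / V_core / 1e6    # MW per m^3 of in-core salt
m_dot = rho * V_total / t_loop         # mass flow through the core, kg/s
delta_T = P_th / (m_dot * cp)          # temperature rise per pass, K

print(f"core power density ~ {power_density:.0f} MW/m^3")
print(f"salt heats by ~ {delta_T:.0f} K per pass through the core")
# ~330 MW/m^3 and a ~110 K temperature rise per pass, consistent with the
# high specific power claimed for liquid-fueled cores.
```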
Fluoride salt-cooled high-temperature reactor
The fluoride salt-cooled high-temperature reactor (FHR), also called the advanced high-temperature reactor (AHTR), is the other proposed Generation IV molten-salt reactor variant, regarded as promising for the long term. The FHR/AHTR uses solid fuel together with a molten fluoride salt as coolant.
One version of the very-high-temperature reactor (VHTR) under study was the liquid-salt very-high-temperature reactor (LS-VHTR). It uses liquid salt as coolant in the primary loop, rather than a single helium loop, and relies on "TRISO" fuel dispersed in graphite. Early AHTR research focused on graphite rods inserted into hexagonal moderating graphite blocks, but current studies focus primarily on pebble-type fuel. The LS-VHTR offers operation at very high temperatures (the boiling point of most candidate molten salts exceeds 1400 °C); low-pressure cooling that can match the conditions of hydrogen-production facilities (most thermochemical cycles require temperatures in excess of 750 °C); better electric conversion efficiency than a helium-cooled VHTR operating in similar conditions; passive safety systems; and better retention of fission products in the event of an accident.
Liquid-fluoride thorium reactor
Reactors containing molten thorium salt, called liquid fluoride thorium reactors (LFTR), would tap the thorium fuel cycle. Private companies from Japan, Russia, Australia and the United States, and the Chinese government, have expressed interest in developing this technology.
Advocates estimate that five hundred metric tons of thorium could supply U.S. energy needs for one year. The U.S. Geological Survey estimates that the largest-known U.S. thorium deposit, the Lemhi Pass district on the Montana-Idaho border, contains thorium reserves of 64,000 metric tons.
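The advocates' claim can be examined with a back-of-envelope calculation: complete fission of one kilogram of heavy metal releases roughly 8e13 J of heat (about 200 MeV per fission). In the sketch below, the US electricity figure and the 45% conversion efficiency are round assumed numbers, not from the source:

```python
# Rough check of the thorium claim using standard physical constants.
MEV_PER_FISSION = 200.0                  # typical energy release, MeV
J_PER_MEV = 1.602e-13
ATOMS_PER_KG = 6.022e23 * 1000 / 232     # thorium-232 atoms per kilogram

energy_per_kg = ATOMS_PER_KG * MEV_PER_FISSION * J_PER_MEV   # ~8e13 J thermal
total_heat = 500_000 * energy_per_kg                         # 500 t fully fissioned

us_electricity = 4000 * 3.6e15           # ~4,000 TWh/year, assumed, in joules
print(f"heat from 500 t of thorium ~ {total_heat:.1e} J")
print(f"~ {0.45 * total_heat / us_electricity:.1f} years of US electricity "
      f"at an assumed 45% conversion efficiency")
# This supports the order of magnitude for electricity; 'total energy needs'
# depends on how non-electric consumption is counted.
```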
Traditionally, these reactors were known as molten salt breeder reactors (MSBRs) or thorium molten-salt reactors (TMSRs), but the name LFTR was promoted as a rebrand in the early 2000s by Kirk Sorensen.
Stable salt reactor
The stable salt reactor is a relatively recent concept which holds the molten salt fuel statically in traditional LWR fuel pins. Pumping of the fuel salt, and all the corrosion/deposition/maintenance/containment issues arising from circulating a highly radioactive, hot and chemically complex fluid, are no longer required. The fuel pins are immersed in a separate, non-fissionable fluoride salt which acts as primary coolant.
Dual-fluid molten-salt reactors
A prototypical example of a dual fluid reactor is the lead-cooled, salt-fueled reactor.
History
1950s
Aircraft Reactor Experiment, US
MSR research started with the U.S. Aircraft Reactor Experiment (ARE) in support of the U.S. Aircraft Nuclear Propulsion program. ARE was a 2.5 MWth nuclear reactor experiment designed to attain a high energy density for use as an engine in a nuclear-powered bomber.
The project included experiments, including high temperature and engine tests collectively called the Heat Transfer Reactor Experiments: HTRE-1, HTRE-2 and HTRE-3 at the National Reactor Test Station (now Idaho National Laboratory) as well as an experimental high-temperature molten-salt reactor at Oak Ridge National Laboratory – the ARE.
ARE used molten fluoride salt NaF-ZrF4-UF4 (53-41-6 mol%) as fuel, was moderated by beryllium oxide (BeO), and used liquid sodium as a secondary coolant.
The experiment had a peak temperature of 860 °C. It produced 100 MWh over nine days in 1954. This experiment used Inconel 600 alloy for the metal structure and piping.
An MSR was operated at the Critical Experiments Facility of Oak Ridge National Laboratory in 1957 as part of the circulating-fuel reactor program of the Pratt & Whitney Aircraft Company (PWAC). Called the Pratt and Whitney Aircraft Reactor-1 (PWAR-1), the experiment ran for a few weeks at essentially zero power, although it reached criticality. The operating temperature was held constant at approximately 675 °C. The PWAR-1 used NaF-ZrF4-UF4 as the primary fuel and coolant. It was one of three critical MSRs ever built.
1960s and 1970s
MSRE at Oak Ridge, US
Oak Ridge National Laboratory (ORNL) took the lead in researching MSRs through the 1960s. Much of their work culminated with the Molten-Salt Reactor Experiment (MSRE). MSRE was a 7.4 MWth test reactor simulating the neutronic "kernel" of a type of epithermal thorium molten salt breeder reactor called the liquid fluoride thorium reactor (LFTR). The large (expensive) breeding blanket of thorium salt was omitted in favor of neutron measurements.
MSRE's piping, core vat and structural components were made from Hastelloy-N, and its moderator was a pyrolytic graphite core. It went critical in 1965 and ran for four years. Its fuel was LiF-BeF2-ZrF4-UF4 (65-29-5-1 mol%); its secondary coolant was FLiBe (2LiF-BeF2). It reached temperatures as high as 650 °C and achieved the equivalent of about 1.5 years of full-power operation.
Theoretical designs at Oak Ridge, US
Molten salt breeder reactor
From 1970 to 1976, ORNL researched a molten-salt breeder reactor (MSBR) design. The fuel was to be LiF-BeF2-ThF4-UF4 (72-16-12-0.4 mol%) with a graphite moderator, and the secondary coolant was to be NaF-NaBF4; the peak operating temperature was to be 705 °C. The graphite would follow a 4-year replacement schedule. The MSR program closed down in the early 1970s in favor of the liquid-metal fast-breeder reactor (LMFBR), after which research stagnated in the United States; for decades, ARE and MSRE remained the only molten-salt reactors ever operated.
The MSBR project received federal funding from 1968 to 1976.
Officially, the program was cancelled because:
The political and technical support for the program in the United States was too thin geographically. Within the United States the technology was well understood only in Oak Ridge.
The MSR program was in competition with the fast breeder program at the time, which got an early start and had copious government development funds with contracts that benefited many parts of the country. When the MSR development program had progressed far enough to justify an expanded program leading to commercial development, the United States Atomic Energy Commission (AEC) could not justify the diversion of substantial funds from the LMFBR to a competing program.
Denatured molten-salt reactor
The denatured molten-salt reactor (DMSR) was an Oak Ridge theoretical design that was never built.
Engel et al. 1980 said the project "examined the conceptual feasibility of a molten-salt power reactor fueled with denatured uranium-235 (i.e. with low-enriched uranium) and operated with a minimum of chemical processing." The main design priority was proliferation resistance. Although the DMSR can theoretically be fueled partially by thorium or plutonium, fueling solely with low enriched uranium (LEU) helps maximize proliferation resistance.
Other goals of the DMSR were to minimize research and development and to maximize feasibility. The Generation IV International Forum (GIF) includes "salt processing" as a technology gap for molten-salt reactors. The DMSR design theoretically requires minimal chemical processing because it is a burner rather than a breeder.
United Kingdom
The UK's Atomic Energy Research Establishment (AERE) developed an alternative MSR design across its national laboratories at Harwell, Culham, Risley and Winfrith. AERE opted to focus on a lead-cooled 2.5 GWe molten-salt fast reactor (MSFR) concept using a chloride salt; it also researched helium gas as a coolant.
The UK MSFR would have been fuelled by plutonium, a fuel considered to be 'free' by the program's research scientists, because of the UK's plutonium stockpile.
Despite their different designs, ORNL and AERE maintained contact during this period with information exchange and expert visits. Theoretical work on the concept was conducted between 1964 and 1966, while experimental work was ongoing between 1968 and 1973. The program received annual government funding of around £100,000–£200,000 (equivalent to £2m–£3m in 2005). This funding came to an end in 1974, partly due to the success of the Prototype Fast Reactor at Dounreay which was considered a priority for funding as it went critical in the same year.
Soviet Union
In the USSR, a molten-salt reactor research program started in the second half of the 1970s at the Kurchatov Institute. It included theoretical and experimental studies, particularly of the mechanical, corrosion and radiation properties of molten-salt container materials. The main findings supported the conclusion that neither physical nor technological obstacles prevented the practical implementation of MSRs.
Twenty-first century
MSR interest resumed in the new millennium due to continuing delays in fusion power and other nuclear power programs and increasing demand for energy sources that would incur minimal greenhouse gas (GHG) emissions.
Commercial/national/international projects
Canada
Terrestrial Energy, a Canadian company, is developing a DMSR design called the Integral Molten Salt Reactor (IMSR). The IMSR is designed to be deployable as a small modular reactor (SMR). The design currently undergoing licensing is 400 MW thermal (190 MW electrical). With its high operating temperatures, the IMSR has applications in industrial heat markets as well as traditional power markets. The main design features include neutron moderation by graphite, fueling with low-enriched uranium, and a compact, replaceable Core-unit. Decay heat is removed passively using nitrogen (with air as an emergency alternative), a feature that permits the operational simplicity necessary for industrial deployment.
Terrestrial completed the first phase of a prelicensing review by the Canadian Nuclear Safety Commission in 2017, which provided a regulatory opinion that the design features are generally safe enough to eventually obtain a license to construct the reactor.
Moltex Energy Canada, a subsidiary of UK-based Moltex Energy Ltd, has obtained support from New Brunswick Power for the development of a pilot plant in Point Lepreau, Canada, and financial backing from IDOM (an international engineering firm) and is currently engaged in the Canadian Vendor Design Review process. The plant will employ the waste-burning version of the company's stable salt reactor design.
China
China initiated a thorium research project in January 2011, and spent about 3 billion yuan (US$500 million) on it by 2021. A 100 MW demonstrator of the solid fuel version (TMSR-SF), based on pebble bed technology, was planned to be ready by 2024. A 10 MW pilot and a larger demonstrator of the liquid fuel (TMSR-LF) variant were targeted for 2024 and 2035, respectively. China then accelerated its program to build two 12 MW reactors underground at Wuwei research facilities by 2020, beginning with the 2 megawatt TMSR-LF1 prototype. The project sought to test new corrosion-resistant materials. In 2017, ANSTO/Shanghai Institute Of Applied Physics announced the creation of a NiMo-SiC alloy for use in MSRs.
In 2021, China stated that the Wuwei prototype could start generating power from thorium in September, providing energy for around 1,000 homes. It would be the first molten-salt reactor to operate anywhere since the Oak Ridge project.
The 100 MW successor was expected to be 3 meters tall and 2.5 meters wide, capable of providing energy to 100,000 homes.
Further work on commercial reactors was announced with a target completion date of 2030. The Chinese government plans to build similar reactors in the deserts and plains of western China, as well as up to 30 in countries involved in China's "Belt and Road" initiative.
In 2022, Shanghai Institute of Applied Physics (SINAP) was given approval by the Ministry of Ecology and Environment to commission an experimental thorium-powered MSR.
Denmark
Copenhagen Atomics is a Danish molten-salt technology company developing mass-manufacturable molten-salt reactors. The Copenhagen Atomics Waste Burner is a single-fluid, heavy-water-moderated, fluoride-based, thermal-spectrum, autonomously controlled molten-salt reactor, designed to fit inside a leak-tight, 40-foot stainless-steel shipping container. The heavy-water moderator is thermally insulated from the salt and continuously drained and cooled. A molten lithium-7 deuteroxide (7LiOD) moderator version is also being researched. The reactor utilizes the thorium fuel cycle, using plutonium separated from spent nuclear fuel as the initial fissile load for the first generation of reactors and eventually transitioning to a thorium breeder. Copenhagen Atomics is actively developing and testing valves, pumps, heat exchangers, measurement systems, salt chemistry and purification systems, and control systems and software for molten-salt applications.
Seaborg Technologies is developing the core for a compact molten-salt reactor (CMSR). The CMSR is a high-temperature, single-salt, thermal-spectrum MSR designed to go critical on commercially available low-enriched uranium. The design is modular and uses a proprietary NaOH moderator. The reactor core is expected to be replaced every 12 years; during operation, the fuel is not replaced and burns for the entire 12-year core lifetime. The first version of the Seaborg core is planned to produce 250 MWth and 100 MWe. As a power plant, the CMSR would be able to deliver electricity, clean water and heating/cooling to around 200,000 households.
France
The CNRS project EVOL (Evaluation and viability of liquid fuel fast reactor system), with the objective of proposing a design for the molten-salt fast reactor (MSFR), released its final report in 2014. Various MSR projects, such as FHR, MOSART, MSFR, and TMSR, share common research and development themes.
The EVOL project was continued by the EU-funded Safety Assessment of the Molten Salt Fast Reactor (SAMOFAR) project, in which several European research institutes and universities collaborated.
Germany
The German Institute for Solid State Nuclear Physics in Berlin has proposed the dual-fluid reactor, a concept for a fast-breeder, lead-cooled MSR. The original MSR concept used the fluid salt both to carry the fission materials and to remove the heat, which imposed conflicting requirements on the flow speed; using two different fluids in separate circuits is intended to resolve this problem.
India
In 2015, Indian researchers published an MSR design as an alternative path to thorium-based reactors, in line with India's three-stage nuclear power programme.
Indonesia
Thorcon is developing the TMSR-500 molten-salt reactor for the Indonesian market. On 29 March 2022, the National Research and Innovation Agency, through its Research Organization for Nuclear Energy, announced renewed interest in MSR research and plans to study and develop MSRs for thorium-fueled nuclear reactors.
Japan
The Fuji molten-salt reactor is a 100 to 200 MWe LFTR using technology similar to the Oak Ridge project. A consortium including members from Japan, the U.S. and Russia is developing the project. The project would likely take 20 years to develop a full-size reactor, but it appears to lack funding.
The UNOMI molten-salt reactor is a small reactor of up to 10 MWe that eliminates the external primary fuel circuit, avoiding the loss of delayed neutrons, mass-transfer phenomena, and corrosion of metallic surfaces associated with circulating fuel.
Russia
In 2020, Rosatom announced plans to build a 10 MWth FLiBe burner MSR. It would be fueled by plutonium from reprocessed VVER spent nuclear fuel and fluorides of minor actinides. It is expected to launch in 2031 at Mining and Chemical Combine.
United Kingdom
The Alvin Weinberg Foundation is a British non-profit organization founded in 2011, dedicated to raising awareness about the potential of thorium energy and LFTR. It was formally launched at the House of Lords on 8 September 2011. It is named after American nuclear physicist Alvin M. Weinberg, who pioneered thorium MSR research.
Moltex Energy's stable-salt reactor design was selected as the most suitable of six MSR designs for UK implementation in a 2015 study commissioned by the UK's innovation agency, Innovate UK. UK government support has been weak, but the company's UK arm, MoltexFLEX, launched its FLEX small modular design in October 2022.
United States
Idaho National Laboratory designed a molten-salt-cooled, molten-salt-fuelled reactor with a prospective output of 1000 MWe.
Kirk Sorensen, former NASA scientist and chief nuclear technologist at Teledyne Brown Engineering, is a long-time promoter of the thorium fuel cycle, coining the term liquid fluoride thorium reactor. In 2011, Sorensen founded Flibe Energy, a company aimed at developing 20–50 MW LFTR reactor designs to power military bases. (It is easier to approve novel military designs than civilian power station designs in the US nuclear regulatory environment).
Transatomic Power pursued what it termed a waste-annihilating molten-salt reactor (WAMSR), intended to consume existing spent nuclear fuel, from 2011 until ceasing operation in 2018 and open-sourcing their research.
In January 2016, the United States Department of Energy announced an $80 million award fund to develop Generation IV reactor designs. One of the two beneficiaries, Southern Company, will use the funding to develop a molten chloride fast reactor (MCFR), a type of MSR developed earlier by British scientists.
In 2021, the Tennessee Valley Authority (TVA) and Kairos Power announced a TRISO-fueled, low-pressure, fluoride-salt-cooled 140 MWe test reactor to be built in Oak Ridge, Tennessee. A construction permit for the project was issued by the US Nuclear Regulatory Commission (NRC) in 2023. The design is expected to operate at 45% efficiency, with a main steam pressure of 19 MPa. The reactor structure is 316 stainless steel, and the fuel is enriched to 19.75%. Loss-of-power cooling is passive. In February 2024, the DOE and Kairos Power signed a $303M Technology Investment Agreement to support the design, construction, and commissioning of the reactor; the company is to receive fixed payments upon completing project milestones.
Also in 2021, Southern Company, in collaboration with TerraPower and the U.S. Department of Energy announced plans to build the Molten Chloride Reactor Experiment, the first fast-spectrum salt reactor at the Idaho National Laboratory.
Abilene Christian University (ACU) has applied to the NRC for a construction license for a 1 MWt molten-salt research reactor (MSRR), to be built on its campus in Abilene, Texas, as part of the Nuclear Energy eXperimental Testing (NEXT) laboratory. ACU plans for the MSRR to achieve criticality by December 2025.
| Technology | Power generation | null |
1584036 | https://en.wikipedia.org/wiki/Kidney%20transplantation | Kidney transplantation | Kidney transplant or renal transplant is the organ transplant of a kidney into a patient with end-stage kidney disease (ESRD). Kidney transplant is typically classified as deceased-donor (formerly known as cadaveric) or living-donor transplantation depending on the source of the donor organ. Living-donor kidney transplants are further characterized as genetically related (living-related) or non-related (living-unrelated) transplants, depending on whether a biological relationship exists between the donor and recipient. The first successful kidney transplant was performed in 1954 by a team including Joseph Murray, the recipient's surgeon, and Hartwell Harrison, surgeon for the donor. Murray was awarded a Nobel Prize in Physiology or Medicine in 1990 for this and other work. In 2018, an estimated 95,479 kidney transplants were performed worldwide, 36% of which came from living donors.
Before receiving a kidney transplant, a person with ESRD must undergo a thorough medical evaluation to make sure that they are healthy enough to undergo transplant surgery. If they are deemed a good candidate, they can be placed on a waiting list to receive a kidney from a deceased donor. Once they are placed on the waiting list, they can receive a new kidney very quickly, or they may have to wait many years; in the United States, the average waiting time is three to five years. During transplant surgery, the new kidney is usually placed in the lower abdomen (belly); the person's two native kidneys are not usually taken out unless there is a medical reason to do so.
People with ESRD who receive a kidney transplant generally live longer than people with ESRD who are on dialysis and may have a better quality of life. However, kidney transplant recipients must remain on immunosuppressants (medications to suppress the immune system) for as long as the new kidney is working to prevent their body from rejecting it. This long-term immunosuppression puts them at higher risk for infections and cancer. Kidney transplant rejection can be classified as cellular rejection or antibody-mediated rejection. Antibody-mediated rejection can be classified as hyperacute, acute, or chronic, depending on how long after the transplant it occurs. If rejection is suspected, a kidney biopsy should be obtained. It is important to regularly monitor the new kidney's function by measuring serum creatinine and other tests; these should be done at least every three months.
History
One of the earliest mentions of the possibility of kidney transplantation was by the American medical researcher Simon Flexner, who declared in a 1907 reading of his paper "Tendencies in Pathology" at the University of Chicago that it would one day be possible to replace diseased human organs, including arteries, the stomach, kidneys, and heart, with healthy ones by surgery.
In 1933, surgeon Yuriy Vorony from Kherson in Ukraine attempted the first human kidney transplant, using a kidney removed six hours earlier from a deceased donor to be reimplanted into the thigh. He measured kidney function using a connection between the kidney and the skin. His first patient died two days later, as the graft was incompatible with the recipient's blood group and was rejected.
It was not until 17 June 1950, when a successful transplant was performed on Ruth Tucker, a 44-year-old woman with polycystic kidney disease, by Dr. Richard Lawler at Little Company of Mary Hospital in Evergreen Park, Illinois. Although the donated kidney was rejected ten months later because no immunosuppressive therapy was available at the time, the intervening time gave Tucker's remaining kidney time to recover and she lived another five years.
A kidney transplant between living patients was undertaken in 1952 at the Necker hospital in Paris by Jean Hamburger, although the kidney failed after three weeks. The first truly successful transplant of this kind occurred in 1954 in Boston. The Boston transplantation, performed on 23 December 1954 at Brigham Hospital, was performed by Joseph Murray, J. Hartwell Harrison, John P. Merrill and others. The procedure was done between identical twins Ronald and Richard Herrick which reduced problems of an immune reaction. For this and later work, Murray received the Nobel Prize for Medicine in 1990. The recipient, Richard Herrick, died eight years after the transplantation due to complications with the donor kidney that were unrelated to the transplant.
In 1955, Charles Rob and William James "Jim" Dempster (St Mary's and Hammersmith, London) carried out the first deceased-donor transplant in the United Kingdom, which was unsuccessful. In July 1959, "Fred" Peter Raper (Leeds) performed the UK's first successful (8 months) deceased-donor transplant. A year later, in 1960, the first successful living kidney transplant in the UK occurred, when Michael Woodruff performed one between identical twins in Edinburgh.
In November 1994, the Sultan Qaboos University Hospital in Oman successfully performed the world's youngest cadaveric kidney transplant, from a donor born at 33 weeks to a 17-month-old recipient, who survived for 22 years with the pair of transplanted organs.
Until the routine use of medication to prevent and treat acute rejection, introduced in 1964, deceased donor transplantation was not performed. The kidney was the easiest organ to transplant: tissue typing was simple; the organ was relatively easy to remove and implant; live donors could be used without difficulty; and in the event of failure, kidney dialysis was available from the 1940s. As explained in Thomas Starzl's 1992 memoir, these factors explain why Starzl's team and others began with kidney transplantation as the first type of solid organ transplantation to translate to clinical practice before attempting to move on to liver transplantation, heart transplantation, and other types.
The major barrier to organ transplantation between genetically non-identical patients lay in the recipient's immune system, which would treat a transplanted kidney as 'non-self' and immediately or chronically reject it. Thus, medication to suppress the immune system was essential. However, suppressing an individual's immune system places that individual at greater risk of infection and cancer (particularly skin cancer and lymphoma), in addition to the side effects of the medications.
The basis for most immunosuppressive regimens is prednisolone, a corticosteroid. Prednisolone suppresses the immune system, but its long-term use at high doses causes a multitude of side effects, including glucose intolerance and diabetes, weight gain, osteoporosis, muscle weakness, hypercholesterolemia, and cataract formation. Prednisolone alone is usually inadequate to prevent rejection of a transplanted kidney. Thus other, non-steroid immunosuppressive agents are needed, which also allow lower doses of prednisolone; these include azathioprine, mycophenolate, ciclosporin, and tacrolimus.
Indications
The indication for kidney transplantation is end-stage renal disease (ESRD), regardless of the primary cause. This is defined as a glomerular filtration rate below 15 ml/min/1.73 m2. Common diseases leading to ESRD include renovascular disease, infection, diabetes mellitus, and autoimmune conditions such as chronic glomerulonephritis and lupus; genetic causes include polycystic kidney disease, and a number of inborn errors of metabolism. The most common 'cause' is idiopathic (i.e. unknown).
Diabetes is the most common known cause of kidney transplantation, accounting for approximately 25% of cases in the United States. The majority of renal transplant recipients are on dialysis (peritoneal dialysis or hemodialysis) at the time of transplantation. However, individuals with chronic kidney disease who have a living donor available may undergo pre-emptive transplantation before dialysis is needed. If a patient is put on the waiting list for a deceased donor transplant early enough, a pre-dialysis transplant may also occur.
Evaluation of kidney donors and recipients
Both potential kidney donors and kidney recipients are carefully screened to ensure good outcomes.
Contraindications for kidney recipients
Contraindications to receiving a kidney transplant include cardiac and pulmonary insufficiency, as well as hepatic disease and some cancers. Concurrent tobacco use and morbid obesity are also among the factors that put a patient at higher risk for surgical complications.
Kidney transplant requirements vary from program to program and country to country. Many programs place limits on age (e.g. the person must be under a certain age to enter the waiting list) and require that one must be in good health (aside from kidney disease). Significant cardiovascular disease, incurable terminal infectious diseases and cancer are often transplant exclusion criteria. In addition, candidates are typically screened to determine if they will be compliant with their medications, which is essential for survival of the transplant. People with mental illness and/or significant ongoing substance abuse issues may be excluded.
HIV was at one point considered a complete contraindication to transplantation, for fear that immunosuppressing someone with a depleted immune system would cause the disease to progress. However, some research suggests that immunosuppressive drugs and antiretrovirals may work synergistically, helping to control HIV viral loads and preserve CD4 cell counts while preventing active rejection.
Living kidney donor evaluation
As candidates for a significant elective surgery, potential kidney donors are carefully screened to ensure good long-term outcomes. The screening includes medical and psychosocial components. Sometimes donors can be successfully screened in a few months, but the process can take longer, especially if test results indicate that additional tests are required. A total approval time of under six months has been identified as an important goal for transplant centers, to avoid missed opportunities for kidney transplant (for example, the intended recipient becoming too ill for transplant while the donor is being evaluated).
The psychosocial screening attempts to determine the presence of psychosocial problems that might complicate donation, such as lack of social support to aid in post-operative recovery, coercion by family members, or lack of understanding of the medical risks.
The medical screening assesses the general health and surgical risk of the donor including for conditions that might indicate complications from living with a single kidney. It also assesses whether the donor has diseases that might be transmitted to the recipient (who usually will be immunosuppressed), assesses the anatomy of the donor's kidneys including differences in size and issues that might complicate surgery, and determines the immunological compatibility of the donor and recipient. Specific rules vary by transplant center, but key exclusion criteria often include:
diabetes;
uncontrolled hypertension;
morbid obesity;
heart or lung disease;
history of cancer;
family history of kidney disease; and
impaired kidney performance or proteinuria.
Sources of kidneys
Since medication to prevent rejection is so effective, donors do not need to be similar to their recipients. Most donated kidneys come from deceased donors; however, the use of living donors in the United States is on the rise. In 2006, 47% of donated kidneys were from living donors. This varies by country: for example, only 3% of kidneys transplanted during 2006 in Spain came from living donors. In Spain, all citizens are potential organ donors in the event of their death, unless they explicitly opt out during their lifetime.
Living donors
Approximately one in three donations in the US, UK, and Israel is now from a live donor. Potential donors are carefully evaluated on medical and psychological grounds. This ensures that the donor is fit for surgery and has no disease which brings undue risk or likelihood of a poor outcome for either the donor or recipient. The psychological assessment is to ensure the donor gives informed consent and is not coerced. In countries where paying for organs is illegal, the authorities may also seek to ensure that a donation has not resulted from a financial transaction.
The relationship the donor has to the recipient has evolved over the years. In the 1950s, the first successful living donor transplants were between identical twins. In the 1960s–1970s, live donors were genetically related to the recipient. However, during the 1980s–1990s, the donor pool was expanded further to emotionally related individuals (spouses, friends). Now the elasticity of the donor relationship has been stretched to include acquaintances and even strangers ('altruistic donors'). In 2009, US transplant recipient Chris Strouth received a kidney from a donor who connected with him on Twitter, which is believed to be the first such transplant arranged entirely through social networking.
Exchanges and chains are a novel approach to expanding the living donor pool. In February 2012, this approach resulted in the largest chain in the world, involving 60 participants organized by the National Kidney Registry; in 2014 the record was broken by a swap involving 70 participants. The acceptance of altruistic donors has enabled chains of transplants to form. Kidney chains are initiated when an altruistic donor donates a kidney to a patient who has a willing but incompatible donor. This incompatible donor then 'pays it forward' and passes on the generosity to another recipient who also had a willing but incompatible donor. Michael Rees from the University of Toledo developed the concept of open-ended chains, a variation of a concept developed at Johns Hopkins University. On 30 July 2008, an altruistic donor kidney was shipped via commercial airline from Cornell to UCLA, thus triggering a chain of transplants. The shipment of living donor kidneys, computer-matching software algorithms, and cooperation between transplant centers have enabled long, elaborate chains to be formed.
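The chain mechanism described above is, at its core, a matching problem over donor-recipient pairs. The following is a toy sketch of a greedy chain-builder in Python; the pair structure and the compatible predicate are hypothetical simplifications, and real registry software (such as that used by the National Kidney Registry) optimizes over far richer medical data:

```python
# Toy greedy chain-builder for kidney paired donation (illustrative only).

def build_chain(altruistic_donor, pairs, compatible):
    """Build an ordered list of (donor, recipient) transplants.

    pairs: list of (donor, recipient) tuples, each a willing but
           mutually incompatible donor-recipient pair.
    compatible: hypothetical predicate, compatible(donor, recipient) -> bool.
    """
    chain = []
    current_donor = altruistic_donor
    remaining = list(pairs)
    while True:
        # Find a pair whose recipient can accept the current donor's kidney.
        match = next(
            ((d, r) for (d, r) in remaining if compatible(current_donor, r)),
            None,
        )
        if match is None:
            break  # no compatible recipient remains; the chain ends here
        paired_donor, recipient = match
        chain.append((current_donor, recipient))
        remaining.remove(match)
        # The matched pair's donor "pays it forward" to the next recipient.
        current_donor = paired_donor
    return chain
```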
In 2004, the FDA approved the Cedars-Sinai High Dose IVIG therapy which reduces the need for the living donor to be the same blood type (ABO compatible) or even a tissue match. The therapy reduced the incidence of the recipient's immune system rejecting the donated kidney in highly sensitized patients.
In carefully screened kidney donors, survival and the risk of end-stage renal disease appear to be similar to those in the general population. However, some more recent studies suggest that lifelong risk of chronic kidney disease is several-fold higher in kidney donors although the absolute risk is still very small.
A 2017 article in the New England Journal of Medicine suggests that persons with only one kidney, including those who have donated a kidney for transplantation, should avoid a high-protein diet and limit their protein intake to less than one gram per kilogram of body weight per day, in order to reduce the long-term risk of chronic kidney disease. Women who have donated a kidney have a higher risk of gestational hypertension and preeclampsia than matched non-donors with similar indicators of baseline health.
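As a simple worked example of the guideline above (an arithmetic sketch, not medical advice):

```python
# The < 1 g/kg/day protein guideline described above, as plain arithmetic.
# Illustrative only; actual dietary advice should come from a clinician.

def max_daily_protein_g(body_weight_kg: float) -> float:
    """Upper bound on daily protein intake in grams (1 g per kg per day)."""
    return body_weight_kg * 1.0

# Example: a 70 kg kidney donor would aim to stay under 70 g of protein per day.
print(max_daily_protein_g(70.0))  # 70.0
```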
Surgical procedure
Traditionally, the donor procedure has been performed through a single open incision, but live donation is increasingly performed by laparoscopic surgery. This reduces pain and accelerates recovery for the donor. Operative time and complications decreased significantly after a surgeon had performed 150 cases. Live donor kidney grafts have higher long-term success rates than those from deceased donors. Since the increase in the use of laparoscopic surgery, the number of live donors has increased; any advance which decreases pain and scarring and speeds recovery has the potential to boost donor numbers. In January 2009, the first all-robotic kidney transplant was performed at Saint Barnabas Medical Center in Livingston, New Jersey, through a two-inch incision. In the following six months, the same team performed eight more robotic-assisted transplants.
In 2009, at Johns Hopkins Hospital in Baltimore, a healthy kidney was removed through the donor's vagina. Vaginal donations promise to speed recovery and reduce scarring. The first donor was chosen because she had previously had a hysterectomy. The extraction was performed using natural orifice transluminal endoscopic surgery, in which an endoscope is inserted through an orifice, then through an internal incision, so that there is no external scar. Single-port laparoscopy, which requires only one entry point at the navel, is a more recent advance with potential for more frequent use.
Organ trade
In the developing world, some people sell their organs illegally. Such people are often in grave poverty or are exploited by salespersons. The people who travel to make use of these kidneys are often known as 'transplant tourists'. This practice is opposed by a variety of human rights groups, including Organs Watch, a group established by medical anthropologists, which was instrumental in exposing illegal international organ selling rings. These patients may have increased complications owing to poor infection control and lower medical and surgical standards. One surgeon has said that organ trade could be legalised in the UK to prevent such tourism, but this is not seen by the National Kidney Research Fund as the answer to a deficit in donors.
In the illegal black market, donors may not get sufficient post-operative care, the price of a kidney may be above $160,000, middlemen take most of the money, the operation is more dangerous to both the donor and the recipient, and the buyer often contracts hepatitis or HIV. In Iran's legal market, the price of a kidney is $2,000 to $4,000.
An article by Gary Becker and Julio Elias on "Introducing Incentives in the market for Live and Cadaveric Organ Donations" argued that a free market could help solve the problem of organ scarcity in transplantation. Their economic modeling estimated a price of $15,000 for a human kidney and $32,000 for a human liver.
Jason Brennan and Peter Jaworski from Georgetown University have also argued that any moral objections to a market for organs are not inherent in the market itself, but rather in the underlying activity.
Monetary compensation for organ donors, in the form of reimbursement for out-of-pocket expenses, has been legalised in 23 countries, including the United States, the United Kingdom, Australia, and Singapore.
Deceased donors
Deceased donors can be divided into two groups:
Brain-dead (BD) donors
Donation after Cardiac Death (DCD) donors
Although brain-dead (or 'heart-beating') donors are considered medically and legally dead, the donor's heart continues to pump and maintain circulation. This makes it possible for surgeons to start operating while the organs are still being perfused (supplied with blood). During the operation, the aorta is cannulated, after which the donor's blood is replaced by an ice-cold storage solution, such as UW (Viaspan), HTK, or Perfadex. Depending on which organs are transplanted, more than one solution may be used simultaneously. Due to the temperature of the solution, and because large amounts of cold NaCl solution are poured over the organs for rapid cooling, the heart stops pumping.
'Donation after Cardiac Death' donors are patients who do not meet the brain-dead criteria but, due to the unlikely chance of recovery, have elected via a living will or through family to have support withdrawn. In this procedure, treatment is discontinued (mechanical ventilation is shut off). After a time of death has been pronounced, the patient is rushed to the operating room where the organs are recovered. Storage solution is flushed through the organs. Since the blood is no longer being circulated, coagulation must be prevented with large amounts of anti-coagulation agents such as heparin. Several ethical and procedural guidelines must be followed; most importantly, the organ recovery team should not participate in the patient's care in any manner until after death has been declared.
Increased donors
To increase the number of donors, many governments have passed laws whereby the default is an opt-out system.
Since December 2015, the Human Transplantation (Wales) Act 2013, passed by the Welsh Government, has enabled an opt-out organ donation register, making Wales the first country in the UK to adopt one. The legislation uses 'deemed consent', whereby all citizens are considered to have no objection to becoming a donor unless they have opted out on this register.
With the approval of Epclusa in 2020, the number of donors has increased; the medication allows hepatitis C-positive individuals to be cured, which has expanded the pool of available organs.
Animal transplants
In 2022, the University of Alabama at Birmingham announced the first peer-reviewed research outlining the successful transplant of genetically modified, clinical-grade pig kidneys into a brain-dead human individual, replacing the recipient's native kidneys. In the study, published in the American Journal of Transplantation, researchers tested the first human preclinical model for transplanting genetically modified pig kidneys into humans. The study's recipient had his native kidneys removed and received two genetically modified pig kidneys in their place; the organs came from a genetically modified pig raised in a pathogen-free facility. In March 2024, a team of surgeons at Massachusetts General Hospital transplanted a kidney from a genetically modified pig into a 62-year-old man; two weeks after the surgery, his doctors said he was well enough to be discharged.
Risks of kidney transplantation
Kidney transplantation is generally considered a safe and effective treatment for end-stage kidney disease. However, like any surgery and medical procedure, it does carry certain risks and potential complications. Some of these risks include:
Rejection: The body's immune system may recognize the transplanted kidney as foreign and attack it. This can happen immediately after transplantation or even years later. Immunosuppressive medications are prescribed to prevent rejection.
Infection: Because immunosuppressive drugs weaken the immune system, transplant recipients are more susceptible to infections. These can range from minor infections to more serious ones affecting the new kidney or other parts of the body.
Side effects of medications: Immunosuppressive drugs used to prevent rejection can have side effects such as increased risk of infections, diabetes, high blood pressure, osteoporosis, and others.
Surgical complications: These can include bleeding, blood clots, and damage to nearby organs during the surgery.
Recurrence of original kidney disease: In some cases, the disease that caused the original kidney failure may recur in the transplanted kidney.
Post-surgery complications: These can include issues like fluid collections, wound healing problems, or complications related to anesthesia.
Cardiovascular disease: Kidney transplant recipients have a higher risk of developing heart disease compared to the general population, partly due to the effects of long-term kidney disease and immunosuppressive medications.
Advances in surgical techniques, better immunosuppressive medications, and improved post-transplant care have significantly reduced these risks over the years. Kidney transplantation remains the best option for many people with end-stage kidney disease, offering a better quality of life and improved long-term outcomes compared to dialysis.
Compatibility
In general, the donor and recipient should be ABO blood group and crossmatch (human leukocyte antigen, HLA) compatible. If a potential living donor is incompatible with their recipient, the donor's kidney could be exchanged for a compatible one. Kidney exchange, also known as 'kidney paired donation' or 'chains', has recently gained popularity.
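The ABO rule mentioned above is simple enough to state compactly. The following is an illustrative sketch of the blood-group logic only; it deliberately ignores Rh, HLA crossmatching, and antibody titres, all of which matter in real transplant workups:

```python
# Standard ABO donor-to-recipient compatibility (blood groups only).
# Illustrative sketch; real compatibility testing also involves HLA
# crossmatching and antibody screening.

ABO_COMPATIBLE = {
    "O":  {"O", "A", "B", "AB"},  # group O donors can give to any ABO group
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},                 # group AB donors can give only to AB recipients
}

def abo_compatible(donor_group: str, recipient_group: str) -> bool:
    """Return True if the donor's ABO group is acceptable for the recipient."""
    return recipient_group in ABO_COMPATIBLE[donor_group]

print(abo_compatible("O", "A"))  # True
print(abo_compatible("A", "B"))  # False
```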
In an effort to reduce the risk of rejection during incompatible transplantation, ABO-incompatible and desensitization protocols utilizing intravenous immunoglobulin (IVIG) have been developed, with the aim of reducing the ABO and HLA antibodies that the recipient may have to the donor; the Cedars-Sinai high-dose IVIG therapy described above is one example.
In the 1980s, experimental protocols were developed for ABO-incompatible transplants using increased immunosuppression and plasmapheresis. Through the 1990s, these techniques were improved and an important study of long-term outcomes in Japan was published. Now, a number of programs around the world are routinely performing ABO-incompatible transplants.
The level of sensitization to donor HLA antigens is determined by performing a panel reactive antibody test on the potential recipient. In the United States, up to 17% of all deceased donor kidney transplants have no HLA mismatch. However, HLA matching is a relatively minor predictor of transplant outcomes. In fact, living non-related donors are now almost as common as living (genetically)-related donors.
Procedure
In most cases, the barely functioning existing kidneys are not removed, as removal has been shown to increase the rates of surgical morbidity. Therefore, the kidney is usually placed in a location different from the original kidney, often the iliac fossa, which makes it necessary to use a different blood supply:
The renal artery of the new kidney, previously branching from the abdominal aorta in the donor, is often connected to the external iliac artery in the recipient.
The renal vein of the new kidney, previously draining to the inferior vena cava in the donor, is often connected to the external iliac vein in the recipient.
The donor ureter is anastomosed with the recipient's bladder. In some cases, a ureteral stent is placed at the time of the anastomosis, on the assumption that it allows for better drainage and healing. However, Gaetano Ciancio developed a modified Lich-Gregoir technique that no longer requires ureteral stenting, avoiding many stent-related complications.
There is disagreement in surgical textbooks regarding which side of the recipient's pelvis to use in receiving the transplant. Campbell's Urology (2002) recommends placing the donor kidney in the recipient's contralateral side (i.e. a left sided kidney would be transplanted in the recipient's right side) to ensure the renal pelvis and ureter are anterior in the event that future surgeries are required. In an instance where there is doubt over whether there is enough space in the recipient's pelvis for the donor's kidney, the textbook recommends using the right side because the right side has a wider choice of arteries and veins for reconstruction.
Glen's Urological Surgery (2004) recommends putting the kidney in the contralateral side in all circumstances. No reason is explicitly put forth; however, one can assume the rationale is similar to that of Campbell, i.e. to ensure that the renal pelvis and ureter are most anterior in the event that future surgical correction becomes necessary.
Smith's Urology (2004) states that either side of the recipient's pelvis is acceptable; however the right vessels are 'more horizontal' with respect to each other and therefore easier to use in the anastomoses. It is unclear what is meant by the words 'more horizontal'.
Kidney-pancreas transplant
Occasionally, the kidney is transplanted together with the pancreas. University of Minnesota surgeons Richard Lillehei and William Kelly performed the first successful simultaneous pancreas-kidney transplant in the world in 1966. This is done in patients with diabetes mellitus type 1, in whom the diabetes is due to destruction of the beta cells of the pancreas and in whom the diabetes has caused kidney failure (diabetic nephropathy). This is almost always a deceased donor transplant; only a few living donor (partial) pancreas transplants have been done. For individuals with diabetes and kidney failure, the advantages of an earlier transplant from a living donor (if available) far outweigh the risks of continued dialysis until a combined kidney and pancreas are available from a deceased donor. A patient can either receive a living donor kidney followed by a donor pancreas at a later date (PAK, or pancreas-after-kidney) or a combined kidney-pancreas from a deceased donor (SKP, simultaneous kidney-pancreas).
Transplanting just the islet cells from the pancreas is still in the experimental stage but shows promise. This involves taking a deceased donor pancreas, breaking it down, and extracting the islet cells that make insulin. The cells are then injected through a catheter into the recipient and they generally lodge in the liver. The recipient still needs to take immunosuppressants to avoid rejection, but no surgery is required. Most people need two or three such injections, and many are not completely insulin-free.
Post operation
The transplant surgery takes about three hours. The donor kidney will be placed in the lower abdomen and its blood vessels connected to arteries and veins in the recipient's body. When this is complete, blood will be allowed to flow through the kidney again. The final step is connecting the ureter from the donor kidney to the bladder. In most cases, the kidney will soon start producing urine.
Depending on its quality, the new kidney usually begins functioning immediately. Living donor kidneys normally require 3–5 days to reach normal functioning levels, while cadaveric donations stretch that interval to 7–15 days. The hospital stay is typically 4–10 days. If complications arise, additional medications such as diuretics may be administered to help the kidney produce urine.
Immunosuppressant drugs are used to prevent the immune system from rejecting the donor kidney. These medicines must be taken for the rest of the recipient's life. The most common medication regimen today is a mixture of tacrolimus, mycophenolate, and prednisolone. Some recipients may instead take ciclosporin, sirolimus, or azathioprine. The risk of early rejection of the transplanted kidney is increased if corticosteroids are avoided or withdrawn after the transplantation. Ciclosporin, considered a breakthrough immunosuppressive when first discovered in the 1980s, ironically causes nephrotoxicity and can result in iatrogenic damage to the newly transplanted kidney. Tacrolimus, a similar drug, also causes nephrotoxicity. Blood levels of both must be monitored closely, and if the recipient has declining kidney function or proteinuria, a kidney transplant biopsy may be necessary to determine whether this is due to rejection or to ciclosporin or tacrolimus intoxication.
Imaging
Postoperatively, kidneys are periodically assessed by ultrasound for the imaging and physiologic changes that accompany transplant rejection. Imaging also allows evaluation of supportive structures, such as the anastomosed transplant artery, vein, and ureter, to ensure they are stable in appearance.
The main quantitative measure in sonographic assessment is a multipoint evaluation of the resistive index (RI), beginning at the main renal artery and vein and ending at the arcuate vessels. It is calculated as follows:
RI = (peak systolic velocity − end diastolic velocity) / peak systolic velocity
The normal value is ≈ 0.60, with 0.70 being the upper limit of normal.
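As a worked illustration of the formula above, here is a minimal sketch in Python; the velocity values are made up for the example and are not clinical data:

```python
# Minimal sketch of the resistive index (RI) calculation described above.
# Input velocities are illustrative values, not clinical measurements.

def resistive_index(peak_systolic_velocity: float, end_diastolic_velocity: float) -> float:
    """RI = (PSV - EDV) / PSV; dimensionless, so any consistent unit works."""
    return (peak_systolic_velocity - end_diastolic_velocity) / peak_systolic_velocity

# Example: PSV = 100 cm/s, EDV = 35 cm/s gives RI = 0.65, which falls
# between the typical normal value (~0.60) and the upper limit (~0.70).
print(round(resistive_index(100.0, 35.0), 2))  # 0.65
```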
Post-transplantation radioisotope renography can be used for the diagnosis of vascular and urological complications. Also, early post-transplantation renography is used for the assessment of delayed graft function.
Diet
Kidney transplant recipients are discouraged from consuming grapefruit, pomegranate, and green tea products. These foods are known to interact with the transplant medications, specifically tacrolimus, ciclosporin, and sirolimus; the blood levels of these drugs may be increased, potentially leading to an overdose.
Complications
Problems after a transplant may include:
Post operative complications, such as bleeding, infection, vascular thrombosis and urinary complications
Transplant rejection (hyperacute, acute or chronic)
Infections and sepsis due to the immunosuppressant drugs that are required to decrease the risk of rejection (e.g., tuberculosis, cytomegalovirus colitis)
Post-transplant lymphoproliferative disorder (a form of lymphoma due to the immune suppressants), which occurs in about 2% of patients, especially in the first 2 years post-transplant
Skin tumours
Imbalances in electrolytes including calcium and phosphate which can lead to bone problems
Proteinuria
Hypertension
Recurrence of original cause of kidney failure
Other side effects of medications including gastrointestinal inflammation and ulceration of the stomach and esophagus, hirsutism (excessive hair growth in a male-pattern distribution) with ciclosporin, hair loss with tacrolimus, obesity, acne, diabetes mellitus type 2, hypercholesterolemia, and osteoporosis.
Alloimmune injury and recurrent glomerulonephritis are major causes of transplant failure. Within 1 year post-transplant, the majority of transplant losses are due to technical issues with the transplant or vascular complications (41% of losses), with acute rejection and glomerulonephritis being less common causes at 17% and 3% respectively. Later causes of transplant failure, 1 year or more after transplantation, include chronic rejection (63% of losses) and glomerulonephritis (6%).
Infections due to the immunosuppressant drugs used in people with kidney transplants most commonly occur in mucocutaneous areas (41%), the urinary tract (17%) and the respiratory tract (14%). The most common infective agents are bacterial (46%), viral (41%), fungal (13%), and protozoan (1%). Of the viral illnesses, the most common agents are human cytomegalovirus (31.5%), herpes simplex (23.4%), and herpes zoster (23.4%). Cytomegalovirus (CMV) is the most common opportunistic infection that may occur after a kidney and other solid organ transplants and is a risk factor for graft failure or acute rejection. BK virus is now being increasingly recognised as a transplant risk factor which may lead to kidney disease or transplant failure if untreated. Infection is the cause of death in about one third of people with renal transplants, and pneumonias account for 50% of the patient deaths from infection.
Delayed graft function is defined as the need for hemodialysis within 1 week of kidney transplant and is the result of excessive reperfusion-related injury after transplant. Delayed graft function occurs in approximately 25% of recipients of kidneys from deceased donors. It leads to graft fibrosis and inflammation, and is a risk factor for future graft failure. Hypothermic pulsatile machine perfusion (using a machine to perfuse donor kidneys ex vivo with cold solution, rather than static cold storage) is associated with a lower incidence of delayed graft function. Deceased donor kidneys with higher kidney donor profile index (KDPI) scores (a score used to determine the suitability of donor kidneys based on factors such as donor age, cause of death, kidney function at time of death, and history of diabetes or hypertension, with higher scores indicating lower suitability) are associated with an increased risk of delayed graft function.
Acute rejection is another possible complication of kidney transplantation; it is graded according to the Banff Classification, which incorporates various serologic, molecular and histologic markers to determine the severity of the rejection. Acute rejection can be classified as T-cell mediated, antibody mediated, or both (mixed rejection). Common causes of acute rejection include inadequate immunosuppression or non-compliance with the immunosuppressive regimen. Clinical acute rejection (seen in approximately 10–15% of kidney transplants within the first year of transplantation) presents as rejection with associated kidney dysfunction. Subclinical rejection (seen in approximately 5–15% of kidney transplants within the first year) is rejection seen incidentally on biopsy in a patient with normal kidney function. Acute rejection with onset 3 months or later after transplantation is associated with a worse prognosis. Acute rejection with onset less than 1 year after transplantation is usually T-cell mediated, whereas onset more than 1 year after transplantation is associated with mixed T-cell and antibody mediated inflammation.
The mortality rate due to Covid-19 in kidney transplant recipients is 13–32%, significantly higher than that of the general population. This is thought to be due to the immunosuppressed status and medical co-morbidities of transplant recipients. Covid-19 vaccination with booster doses is recommended for all kidney transplant recipients.
Prognosis
Kidney transplantation is a life-extending procedure. The typical patient will live 10 to 15 years longer with a kidney transplant than if kept on dialysis. The increase in longevity is greater for younger patients, but even 75-year-old recipients (the oldest group for which there is data) gain an average of four more years of life. Graft and patient survival after transplantation have also improved over time, with 10-year graft survival rates for deceased donor transplants increasing from 42.3% in 1996–1999 to 53.6% in 2008–2011, and 10-year patient survival increasing from 60.5% in 1996–1999 to 66.9% in 2008–2011. There is a survival benefit among recipients of kidney transplants (from either living or deceased donors) as compared to those on long-term dialysis without a transplant, including in those with co-morbidities such as type 2 diabetes, advanced age, obesity, or HLA mismatches. People generally have more energy, a less-restricted diet, and fewer complications with a kidney transplant than if they stay on conventional dialysis.
Some studies seem to suggest that the longer a patient is on dialysis before the transplant, the less time the kidney will last. It is not clear why this occurs, but it underscores the need for rapid referral to a transplant program. A recent study also suggests that the muscle wasting and frailty that occur during prolonged dialysis have a negative impact on a patient's physical functioning post transplantation. Ideally, a kidney transplant should be pre-emptive, i.e., take place before the patient begins dialysis. The reasons why kidneys fail over time after transplantation have been elucidated in recent years: apart from recurrence of the original kidney disease, rejection (mainly antibody-mediated rejection) and progressive scarring (multifactorial) also play a decisive role. Avoiding rejection through strict medication adherence is of utmost importance to avoid failure of the kidney transplant.
At least four professional athletes have made a comeback to their sport after receiving a transplant: New Zealand rugby union player Jonah Lomu, German-Croatian soccer player Ivan Klasnić, and NBA basketballers Sean Elliott and Alonzo Mourning.
For live kidney donors, prognostic studies are potentially confounded by a selection bias, wherein kidney donors are selected from people who are healthier than the general population; however, when matched to a corresponding healthy control group, kidney donors appear to show no difference in overall long-term mortality rates.
Statistics
In addition to nationality, transplantation rates differ based on race, sex, and income. A study done with patients beginning long-term dialysis showed that the socio-demographic barriers to renal transplantation are relevant even before patients are on the transplant list. For example, different socio-demographic groups express different interest and complete pre-transplant workup at different rates. Previous efforts to create fair transplantation policies have focused on patients currently on the transplantation waiting list.
In the U.S. health system
Transplant recipients must take immunosuppressive anti-rejection drugs for as long as the transplanted kidney functions. The routine immunosuppressives are tacrolimus (Prograf), mycophenolate (CellCept), and prednisolone; these drugs cost US$1,500 per month. In 1999, the United States Congress passed a law that restricts Medicare from paying for more than three years for these drugs unless the patient is otherwise Medicare-eligible. Transplant programs may not transplant a patient unless the patient has a reasonable plan to pay for medication after Medicare coverage expires; however, patients are almost never turned down for financial reasons alone. Half of end-stage renal disease patients have only Medicare coverage. This provision was repealed in December 2020, with the repeal taking effect on January 1, 2023; people who were on Medicare, or who had applied for Medicare at the time of their procedure, now have lifetime coverage of post-transplant drugs.
The United Network for Organ Sharing, which oversees the organ transplants in the United States, allows transplant candidates to register at two or more transplant centers, a practice known as 'multiple listing'. The practice has been shown to be effective in mitigating the dramatic geographic disparity in the waiting time for organ transplants, particularly for patients residing in high-demand regions such as Boston. The practice of multiple-listing has also been endorsed by medical practitioners.
Notable recipients
| Biology and health sciences | Surgery | Health |
1585380 | https://en.wikipedia.org/wiki/Edmontosaurus | Edmontosaurus | Edmontosaurus ( ) (meaning "lizard from Edmonton"), with the second species often colloquially and historically known as Anatosaurus or Anatotitan (meaning "duck lizard" and "giant duck"), is a genus of hadrosaurid (duck-billed) dinosaur. It contains two known species: Edmontosaurus regalis and Edmontosaurus annectens. Fossils of E. regalis have been found in rocks of western North America that date from the late Campanian age of the Cretaceous period 73 million years ago, while those of E. annectens were found in the same geographic region from rocks dated to the end of the Maastrichtian age, 66 million years ago. Edmontosaurus was one of the last non-avian dinosaurs to ever exist, and lived alongside dinosaurs like Triceratops, Tyrannosaurus, Ankylosaurus, and Pachycephalosaurus shortly before the Cretaceous–Paleogene extinction event.
Edmontosaurus included two of the largest hadrosaurid species, with E. annectens measuring up to in length and weighing around in average asymptotic body mass, although some individuals would have been much larger. Several well-preserved specimens are known that include numerous bones, as well as extensive skin impressions and possible gut contents. Edmontosaurus is classified as a genus of saurolophine (or hadrosaurine) hadrosaurid, a member of the group of hadrosaurids that lacked large, hollow crests and instead had smaller, solid crests or fleshy combs.
The first fossils named Edmontosaurus were discovered in southern Alberta (named after Edmonton, the capital city), in the Horseshoe Canyon Formation (formerly called the lower Edmonton Formation). The type species, E. regalis, was named by Lawrence Lambe in 1917, although several other species that are now classified in Edmontosaurus were named earlier. The best known of these is E. annectens, named by Othniel Charles Marsh in 1892. This species was originally described as a species of Claosaurus, was known for many years as a species of Trachodon, and was later known as Anatosaurus annectens. Anatosaurus, Anatotitan, and probably Ugrunaaluk are now generally regarded as synonyms of Edmontosaurus.
Edmontosaurus was widely distributed across western North America, ranging from Colorado to the northern slopes of Alaska. The distribution of Edmontosaurus fossils suggests that it preferred coasts and coastal plains. It was a herbivore that could move on both two legs and four. Because it is known from several bone beds, Edmontosaurus is thought to have lived in groups and may have been migratory as well. The wealth of fossils has allowed researchers to study its paleobiology in detail, including its brain, how it may have fed, and its injuries and pathologies, such as evidence for tyrannosaur attacks on a few specimens.
Discovery and history
Claosaurus annectens
Edmontosaurus has had a very long and complicated history in paleontology, having spent decades with various species classified in other genera. Its taxonomic history intertwines at various points with the genera Agathaumas, Anatosaurus, Anatotitan, Claosaurus, Hadrosaurus, Thespesius, and Trachodon, with references predating the 1980s typically using Anatosaurus, Claosaurus, Thespesius, or Trachodon for edmontosaur fossils (excluding those assigned to E. regalis) depending on the author and the date. Although Edmontosaurus was only named in 1917, its oldest well-supported species (E. annectens) was named in 1892 as a species of Claosaurus.
The first well-supported species of Edmontosaurus was named in 1892 as Claosaurus annectens by Othniel Charles Marsh. This species is based on USNM 2414, a partial skull-roof and skeleton, with a second skull and skeleton, YPM 2182, designated as the paratype. Both were collected in 1891 by John Bell Hatcher from the late Maastrichtian-age Upper Cretaceous Lance Formation of Niobrara County (then part of Converse County), Wyoming. This species has some historical footnotes attached, as it is among the first dinosaurs to receive a skeletal restoration and is the first hadrosaurid so restored. YPM 2182 and USNM 2414 are, respectively, the first and second essentially complete mounted dinosaur skeletons in the United States. YPM 2182 was put on display in 1901 and USNM 2414 in 1904.
Because of the incomplete understanding of hadrosaurids at the time, following Marsh's death in 1897, Claosaurus annectens was variously classified as a species of Claosaurus, Thespesius or Trachodon. Opinions varied greatly, as textbooks and encyclopedias drew a distinction between the "Iguanodon-like" Claosaurus annectens and the "duck-billed" Hadrosaurus (based on remains now known as adult Edmontosaurus annectens), while Hatcher explicitly identified C. annectens as synonymous with the hadrosaurid represented by those same duck-billed skulls. Hatcher's revision, published in 1902, was sweeping, as he considered almost all hadrosaurid genera then known as synonyms of Trachodon. This included Cionodon, Diclonius, Hadrosaurus, Ornithotarsus, Pteropelyx, and Thespesius, as well as Claorhynchus and Polyonax, which are fragmentary genera now thought to be ceratopsians. Hatcher's work led to a brief consensus until post-1910, when new material from Canada and Montana showed a greater diversity of hadrosaurids than previously suspected. Charles W. Gilmore, in 1915, reassessed hadrosaurids and recommended that Thespesius be reintroduced for hadrosaurids from the Lance Formation and rock units of equivalent age and that Trachodon, based on inadequate material, should be restricted to a hadrosaurid from the older Judith River Formation and its equivalents. In regards to Claosaurus annectens, he recommended that it be considered the same as Thespesius occidentalis. His reinstatement of Thespesius for Lance-age hadrosaurids would have other consequences for the taxonomy of Edmontosaurus in the following decades.
During this time frame (1902–1915), two additional important specimens of C. annectens were recovered. The first, the "mummified" specimen AMNH 5060, was discovered in 1908 by Charles Hazelius Sternberg and his sons in Lance Formation rocks near Lusk, Wyoming. Sternberg was working for the British Museum of Natural History, but Henry Fairfield Osborn of the American Museum of Natural History was able to purchase the specimen for $2,000. The Sternbergs recovered a second similar specimen from the same area in 1910, which was not as well preserved. However, it was also found with skin impressions. They sold the specimen, SM 4036, to the Senckenberg Museum in Germany.
As a side note, Trachodon selwyni, described by Lawrence Lambe in 1902 for a lower jaw from what is now known as the Dinosaur Park Formation of Alberta, was erroneously described by Glut (1997) as having been assigned to Edmontosaurus regalis by Lull and Wright. It was not, instead being designated "of very doubtful validity." More recent reviews of hadrosaurids have concurred.
Canadian discoveries
The genus Edmontosaurus was coined in 1917 by Lawrence Lambe for two partial skeletons found in the Horseshoe Canyon Formation (formerly the lower Edmonton Formation) along the Red Deer River of southern Alberta. These rocks are older than the rocks in which Claosaurus annectens was found. The Edmonton Formation lends Edmontosaurus its name. The type species, E. regalis (meaning "regal", or, more loosely, "king-sized"), is based on NMC 2288, which consists of a skull, articulated vertebrae up to the sixth tail vertebra, ribs, partial hips, an upper arm bone, and most of a leg. It was discovered in 1912 by Levi Sternberg. The second specimen, paratype NMC 2289, consists of a skull and skeleton lacking the beak, most of the tail, and part of the feet. It was discovered in 1916 by George F. Sternberg. Lambe found that his new dinosaur compared best to Diclonius mirabilis (specimens now assigned to Edmontosaurus annectens) and drew attention to the size and robustness of Edmontosaurus. Initially, Lambe described only the skulls of the two skeletons, but returned to the genus in 1920 to describe the skeleton of NMC 2289. The postcrania of the type specimen remain undescribed, still in their plaster jackets to this day.
Two more species that would come to be included with Edmontosaurus were named from Canadian remains in the 1920s, but both would initially be assigned to Thespesius. Gilmore named the first, Thespesius edmontoni, in 1924. T. edmontoni also came from the Horseshoe Canyon Formation. It was based on NMC 8399, another nearly complete skeleton lacking most of the tail. NMC 8399 was discovered on the Red Deer River in 1912 by a Sternberg party. Its arms, ossified tendons, and skin impressions were briefly described in 1913 and 1914 by Lambe, who at first thought it was an example of a species he had named Trachodon marginatus, but then changed his mind. The specimen became the first dinosaur skeleton to be mounted for exhibition in a Canadian museum. Gilmore found that his new species compared closely to what he called Thespesius annectens, but left the two apart because of details of the arms and hands. He also noted that his species had more vertebrae than Marsh's in the back and neck, but proposed that Marsh was mistaken in assuming that the annectens specimens were complete in those regions.
In 1926, Charles Mortram Sternberg named Thespesius saskatchewanensis for NMC 8509, which is a skull and partial skeleton from the Wood Mountain plateau of southern Saskatchewan. He had collected this specimen in 1921 from rocks that were assigned to the Lance Formation, now the Frenchman Formation. NMC 8509 included an almost complete skull, numerous vertebrae, partial shoulder and hip girdles, and partial legs, representing the first substantial dinosaur specimen recovered from Saskatchewan. Sternberg opted to assign it to Thespesius because that was the only hadrosaurid genus known from the Lance Formation at the time. At the time, T. saskatchewanensis was unusual because of its small size, estimated at in length.
Anatosaurus to the present
In 1942, Lull and Wright attempted to resolve the complicated taxonomy of crestless hadrosaurids by naming a new genus, Anatosaurus, to take in several species that did not fit well under their previous genera. Anatosaurus, meaning "duck lizard", because of its wide, duck-like beak (Latin anas = duck + Greek sauros = lizard), had as its type species Marsh's old Claosaurus annectens. Also assigned to this genus were Thespesius edmontoni, T. saskatchewanensis, a large lower jaw that Marsh had named Trachodon longiceps in 1890, and a new species named Anatosaurus copei for two skeletons on display at the American Museum of Natural History that had long been known as Diclonius mirabilis (or variations thereof). Thus, the various species became Anatosaurus annectens, A. copei, A. edmontoni, A. longiceps, and A. saskatchewanensis. Anatosaurus would come to be called the "classic duck-billed dinosaur."
This state of affairs persisted for several decades, until Michael K. Brett-Surman reexamined the pertinent material for his graduate studies in the 1970s and 1980s. He concluded that the type species of Anatosaurus, A. annectens, was actually a species of Edmontosaurus and that A. copei was different enough to warrant its own genus. Although theses and dissertations are not regarded as official publications by the International Commission on Zoological Nomenclature, which regulates the naming of organisms, his conclusions were known to other paleontologists and were adopted by several popular works of the time. Brett-Surman and Ralph Chapman designated a new genus for A. copei (Anatotitan) in 1990. Of the remaining species, A. saskatchewanensis and A. edmontoni were assigned to Edmontosaurus as well, and A. longiceps went to Anatotitan, as either a second species or as a synonym of A. copei. Because the type species of Anatosaurus (A. annectens) was sunk into Edmontosaurus, the name Anatosaurus was abandoned as a junior synonym of Edmontosaurus.
The conception of Edmontosaurus that emerged included three valid species: the type species E. regalis, E. annectens (including Anatosaurus edmontoni, amended to edmontonensis), and E. saskatchewanensis. The debate about the proper taxonomy of the A. copei specimens continues to the present day. Returning to Hatcher's argument of 1902, Jack Horner, David B. Weishampel, and Catherine Forster regarded Anatotitan copei as representing specimens of Edmontosaurus annectens with crushed skulls. In 2007, another "mummy" was announced. Nicknamed "Dakota", it was discovered in 1999 by Tyler Lyson and came from the Hell Creek Formation of North Dakota.
In a 2011 study by Nicolás Campione and David Evans, the authors conducted the first ever morphometric analysis to compare the various specimens assigned to Edmontosaurus. They concluded that only two species are valid: E. regalis, from the late Campanian, and E. annectens, from the late Maastrichtian. Their study provided further evidence that Anatotitan copei is a synonym of E. annectens. Specifically, the long, low skull of A. copei is the result of ontogenetic change and represents mature E. annectens individuals.
Species and distribution
Edmontosaurus is currently regarded as having two valid species: the type species E. regalis and E. annectens. E. regalis is known only from the Horseshoe Canyon Formation of Alberta, dating from the late Campanian age of the late Cretaceous period. At least a dozen individuals are known, including seven skulls with associated postcrania and five to seven other skulls. The species formerly known as Thespesius edmontoni or Anatosaurus edmontoni represents immature individuals of E. regalis.
E. annectens is known from the Frenchman Formation of Saskatchewan, the Hell Creek Formation of Montana, and the Lance Formation of South Dakota and Wyoming. It is limited to late Maastrichtian rocks and is represented by at least twenty skulls, some with postcranial remains. One author, Kraig Derstler, has described E. annectens as "perhaps the most perfectly-known dinosaur to date [1994]." Anatosaurus copei and E. saskatchewanensis are now thought to be growth stages of E. annectens, with A. copei as adults and E. saskatchewanensis as juveniles. Trachodon longiceps may be a synonym of E. annectens as well. Anatosaurus edmontoni was mistakenly listed as a synonym of E. annectens in both reviews of Dinosauria, but this does not appear to be the case.
E. annectens differed from E. regalis by having a longer, lower, and less robust skull and the lack of a comb-like crest. Although Brett-Surman regarded E. regalis and E. annectens as potentially representing males and females of the same species, all E. regalis specimens come from older formations than E. annectens specimens. Edmontosaurine specimens from the Prince Creek Formation of Alaska formerly assigned to Edmontosaurus sp. were given their own genus and species name, Ugrunaaluk kuukpikensis, in 2015. However, the identification of Ugrunaaluk as a separate genus was questioned by a 2017 study from Hai Xing and colleagues, who regarded it as a nomen dubium that was indistinguishable from other Edmontosaurus. In 2020, Ryuji Takasaki and colleagues agreed that the Prince Creek remains should be classified as Edmontosaurus, though species designation is unclear because the specimens are juveniles. Another study found the Alaskan material to be referable to Edmontosaurus cf. regalis based on craniomandibular anatomy. Edmontosaurus was also reported from the Javelina Formation of Big Bend National Park, western Texas based on TMM 41442-1, but was later referred to Kritosaurus cf. navajovius by Wagner (2001), before being assigned to Kritosaurus sp. by Lehman et al. (2016).
Description
Edmontosaurus has been described in detail from numerous specimens. Traditionally, E. regalis has been regarded as the largest species, though this was challenged by the hypothesis that the larger hadrosaurid Anatotitan copei is a synonym of Edmontosaurus annectens, as put forward by Jack Horner and colleagues in 2004, and supported in studies by Campione and Evans in 2011.
Size
Edmontosaurus was among the largest hadrosaurids to ever exist. Like other hadrosaurids, it was a bulky animal with a long, laterally flattened tail and an expanded, duck-like beak. The arms were not as heavily built as the legs, but were long enough to be used for standing or for quadrupedal movement. Depending on the species, previous estimates suggested that a fully grown adult could have been long and some of the larger specimens reached the range of with a body mass on the order of .
E. annectens is often seen as smaller. Two mounted skeletons, USNM 2414 and YPM 2182, measure long and long, respectively. However, these are probably subadult individuals. There is also at least one report of a much larger potential E. annectens specimen that is almost long. Two specimens still under study in the collection of the Museum of the Rockies - a tail labelled as MOR 1142 and another labelled as MOR 1609 - indicate that Edmontosaurus annectens could have grown to much larger sizes and reached nearly in length, similar to the closely related Shantungosaurus, which weighed , but such large individuals were likely very rare.
A 2022 study on the osteohistology and growth of E. annectens suggested that previous estimates might have underestimated or overestimated the size of this dinosaur, proposing that a fully grown adult E. annectens would have measured up to in length and approximately in average asymptotic body mass, while the largest individuals measured more than and even up to , based on comparisons between various specimens of different sizes from the Ruth Mason Dinosaur Quarry and other localities. According to this analysis, E. regalis may have been heavier, but not enough samples exist to provide a valid estimate and examination of its osteohistology and growth, so the results for E. regalis are not statistically significant.
Skull
The skull of a fully grown Edmontosaurus could be over a metre long. One skull of E. annectens (formerly Anatotitan) measures long. The skull was roughly triangular in profile, with no bony cranial crest. Viewed from above, the front and rear of the skull were expanded, with the broad front forming a duck-bill or spoon-bill shape. The beak was toothless, and both the upper and lower beaks were extended by keratinous material. Substantial remains of the keratinous upper beak are known from the "mummy" kept at the Senckenberg Museum. In this specimen, the preserved nonbony part of the beak extended for at least beyond the bone, projecting down vertically. The nasal openings of Edmontosaurus were elongate and housed in deep depressions surrounded by distinct bony rims above, behind, and below.
In at least one case (the Senckenberg specimen), rarely preserved sclerotic rings were found in the eye sockets. Another rarely seen bone, the stapes (the reptilian ear bone), has also been found in a specimen of Edmontosaurus. It has been suggested that Edmontosaurus may have had binocular vision, based on a 3D scan of a nearly complete skull of E. regalis (CMN 2289).
Teeth were present only in the maxillae (upper cheeks) and dentaries (main bone of the lower jaw). The teeth were continually replaced, taking about half a year to form. They were composed of six types of tissues, rivaling the complexity of mammal teeth. They grew in columns, with an observed maximum of six in each, and the number of columns varied based on the animal's size. Known column counts for the two species are: 51 to 53 columns per maxilla and 48 to 49 per dentary (teeth of the upper jaw being slightly narrower than those in the lower jaw) for E. regalis; and 52 columns per maxilla and 44 per dentary for E. annectens (an E. saskatchewanensis specimen).
Postcranial skeleton
The number of vertebrae differs between specimens. E. regalis had thirteen neck vertebrae, eighteen back vertebrae, nine hip vertebrae, and an unknown number of tail vertebrae. A specimen once identified as belonging to Anatosaurus edmontoni (now considered to be the same as E. regalis) is reported as having an additional back vertebra and 85 tail vertebrae, with an undisclosed amount of restoration. Other hadrosaurids are only reported as having 50 to 70 tail vertebrae, so this appears to have been an overestimate. The anterior back was curved toward the ground, with the neck flexed upward and the rest of the back and tail held horizontally. Most of the back and tail were lined by ossified tendons arranged in a latticework along the neural spines of the vertebrae. This condition has been described as making the back and at least part of the tail "ramrod" straight. The ossified tendons are interpreted as having strengthened the vertebral column against gravitational stress, incurred through being a large animal with a horizontal vertebral column otherwise supported mostly by the hind legs and hips.
The shoulder blades were long flat blade-like bones, held roughly parallel to the vertebral column. The hips were composed of three elements each: an elongate ilium above the articulation with the leg, an ischium below and behind with a long thin rod, and a pubis in front that flared into a plate-like structure. The structure of the hip hindered the animal from standing with its back erect, because in such a position the thigh bone would have pushed against the joint of the ilium and pubis, instead of pushing only against the solid ilium. The nine fused hip vertebrae provided support for the hip.
The forelegs were shorter and less heavily built than the hind legs. The upper arm had a large deltopectoral crest for muscle attachment, while the ulna and radius were slim. The upper arm and forearm were similar in length. The wrist was simple, with only two small bones. Each hand had four fingers, with no thumb (first finger). The index (second), third, and fourth fingers were approximately the same length and were united in life within a fleshy covering. Although the second and third fingers had hoof-like unguals, these bones were also within the skin and not apparent from the outside. The little finger diverged from the other three and was much shorter. The thigh bone was robust and straight, with a prominent flange about halfway down the posterior side. This ridge anchored powerful muscles running to the hips and tail that pulled the thighs (and thus the hind legs) backward and helped maintain the use of the tail as a balancing organ. Each foot had three toes, with no big toe or little toe. The toes had hoof-like tips.
Soft tissue
Multiple specimens of Edmontosaurus annectens have been found with preserved skin impressions. Several have been well-publicized, such as the "Trachodon mummy" of the early 20th century, and the specimen nicknamed "Dakota", the latter apparently including remnant organic compounds from the skin. Because of these finds, the scalation of Edmontosaurus annectens is known for most areas of the body. Skin impressions are less well known for E. regalis, but some well-preserved examples have been studied, including one which preserves a soft tissue crest or wattle on the head. It is unknown whether such a crest was present on E. annectens, and whether it was an indicator of sexual dimorphism.
A preserved rhamphotheca present in the E. annectens specimen LACM 23502, housed in the Los Angeles County Museum, indicates that the beak of Edmontosaurus was more hook-shaped and extensive than many illustrations in scientific and public media have previously depicted. Whether the specimen preserves the true rhamphotheca or only a cast of the inner structure attached to the bone is not yet known.
Classification
Edmontosaurus was a hadrosaurid (a duck-billed dinosaur), a member of a family of dinosaurs which to date are known only from the Late Cretaceous. It is classified within the Saurolophinae (alternately Hadrosaurinae), a clade of hadrosaurids which lacked hollow crests. Other members of the group include Brachylophosaurus, Gryposaurus, Lophorhothon, Maiasaura, Naashoibitosaurus, Prosaurolophus, and Saurolophus. It was either closely related to or includes the species Anatosaurus annectens (alternately Edmontosaurus annectens), a large hadrosaurid from various latest Cretaceous formations of western North America. The giant Chinese hadrosaurine Shantungosaurus giganteus is also anatomically similar to Edmontosaurus; M. K. Brett-Surman found the two to differ only in details related to the greater size of Shantungosaurus, based on what had been described of the latter genus.
While the status of Edmontosaurus as a saurolophine has not been challenged, its exact placement within the clade is uncertain. Early phylogenies, such as that presented in R. S. Lull and Nelda Wright's influential 1942 monograph, had Edmontosaurus and various species of Anatosaurus (most of which would be later considered as additional species or specimens of Edmontosaurus) as one lineage among several lineages of "flat-headed" hadrosaurs. One of the first analyses using cladistic methods found it to be linked with Anatosaurus (=Anatotitan) and Shantungosaurus in an informal "edmontosaur" clade, which was paired with the spike-crested "saurolophs" and more distantly related to the "brachylophosaurs" and arch-snouted "gryposaurs". A 2007 study by Terry Gates and Scott Sampson found broadly similar results, in that Edmontosaurus remained close to Saurolophus and Prosaurolophus and distant from Gryposaurus, Brachylophosaurus, and Maiasaura. However, the most recent review of Hadrosauridae, by Jack Horner and colleagues (2004), came to a noticeably different result: Edmontosaurus was nested between Gryposaurus and the "brachylophosaurs", and distant from Saurolophus. Edmontosaurus is the namesake genus of the saurolophine tribe Edmontosaurini, which also includes taxa like Shantungosaurus, Kerberosaurus and Laiyangosaurus.
Paleobiology
Diet and feeding
As a hadrosaurid, Edmontosaurus was a large terrestrial herbivore. Its teeth were continually replaced and packed into dental batteries that contained hundreds of teeth, only a relative handful of which were in use at any time. It used its broad beak to cut loose food, perhaps by cropping, or by closing the jaws in a clamshell-like manner over twigs and branches and then stripping off the more nutritious leaves and shoots. Because the tooth rows are deeply indented from the outside of the jaws, and because of other anatomical details, it is inferred that Edmontosaurus and most other ornithischians had cheek-like structures, muscular or non-muscular. The function of the cheeks was to retain food in the mouth. The animal's feeding range would have been from ground level to around above.
Before the 1960s and 1970s, the prevailing interpretation of hadrosaurids like Edmontosaurus was that they were aquatic and fed on aquatic plants. An example of this is William Morris's 1970 interpretation of an edmontosaur skull with nonbony beak remnants. He proposed that the animal had a diet much like that of some modern ducks, filtering plants and aquatic invertebrates like mollusks and crustaceans from the water and discharging water via V-shaped furrows along the inner face of the upper beak. This interpretation of the beak has been rejected, as the furrows and ridges are more like those of herbivorous turtle beaks than the flexible structures seen in filter-feeding birds.
Because scratches dominate the microwear texture of the teeth, Williams et al. suggested Edmontosaurus was a grazer rather than a browser; a browser would be predicted to show fewer scratches because it would eat less abrasive material. Candidates for ingested abrasives include silica-rich plants like horsetails and soil accidentally ingested while feeding at ground level. The tooth structure indicates combined slicing and grinding capabilities.
Reports of gastroliths, or stomach stones, in the hadrosaurid Claosaurus are actually based on a probable double misidentification. First, the specimen is actually of Edmontosaurus annectens; Barnum Brown, who discovered the specimen in 1900, referred to it as Claosaurus because E. annectens was thought to be a species of Claosaurus at the time. Second, it is more likely that the supposed gastroliths represent gravel washed in during burial.
Gut contents
Both of the "mummy" specimens collected by the Sternbergs were reported to have had possible gut contents. Charles H. Sternberg reported the presence of carbonized gut contents in the American Museum of Natural History specimen, but this material has not been described. The plant remains in the Senckenberg Museum specimen have been described, but have proven difficult to interpret. The plants found in the carcass included needles of the conifer Cunninghamites elegans, twigs from conifer and broadleaf trees, and numerous small seeds or fruits. Upon their description in 1922, they were the subject of a debate in the German-language journal Paläontologische Zeitschrift. Kräusel, who described the material, interpreted it as the gut contents of the animal, while Abel could not rule out that the plants had been washed into the carcass after death.
At the time, hadrosaurids were thought to have been aquatic animals, and Kräusel made a point of stating that the specimen did not rule out hadrosaurids eating water plants. The discovery of possible gut contents made little impact in English-speaking circles, except for another brief mention of the aquatic-terrestrial dichotomy, until it was brought up by John Ostrom in the course of an article reassessing the old interpretation of hadrosaurids as water-bound. Instead of trying to adapt the discovery to the aquatic model, he used it as a line of evidence that hadrosaurids were terrestrial herbivores. While his interpretation of hadrosaurids as terrestrial animals has been generally accepted, the Senckenberg plant fossils remain equivocal. Kenneth Carpenter has suggested that they may actually represent the gut contents of a starving animal, instead of a typical diet. Other authors have noted that because the plant fossils were removed from their original context in the specimen and were heavily prepared, it is no longer possible to follow up on the original work, leaving open the possibility that the plants were washed-in debris.
Isotopic studies
The diet and physiology of Edmontosaurus have been probed by using stable isotopes of carbon and oxygen as recorded in tooth enamel. When feeding, drinking, and breathing, animals take in carbon and oxygen, which become incorporated into bone. The isotopes of these two elements are determined by various internal and external factors, such as the type of plants being eaten, the physiology of the animal, salinity, and climate. If isotope ratios in fossils are not altered by fossilization and later changes, they can be studied for information about the original factors; warm-blooded animals will have certain isotopic compositions compared to their surroundings, animals that eat certain types of plants or use certain digestive processes will have distinct isotopic compositions, and so on. Enamel is typically used because the structure of the mineral that forms enamel makes it the most resistant material in the skeleton to chemical change.
A 2004 study by Kathryn Thomas and Sandra Carlson used teeth from the upper jaw of three individuals interpreted as a juvenile, a subadult, and an adult, recovered from a bone bed in the Hell Creek Formation of Corson County, South Dakota. In this study, successive teeth in columns in the edmontosaurs' dental batteries were sampled from multiple locations along each tooth using a microdrilling system. This sampling method takes advantage of the organization of hadrosaurid dental batteries to find variation in tooth isotopes over a period of time. From their work, it appears that edmontosaur teeth took less than about 0.65 years to form, slightly faster in younger edmontosaurs. The teeth of all three individuals appeared to show variation in oxygen isotope ratios that could correspond to warm/dry and cool/wet periods; Thomas and Carlson considered the possibility that the animals were migrating instead, but favored local seasonal variations because migration would have more likely led to ratio homogenization, as many animals migrate to stay within specific temperature ranges or near particular food sources.
The edmontosaurs also showed enriched carbon isotope values, which for modern mammals would be interpreted as a mixed diet of C3 plants (most plants) and C4 plants (grasses); however, C4 plants were extremely rare in the Late Cretaceous if present at all. Thomas and Carlson put forward several factors that may have been operating, and found the most likely to include a diet heavy in gymnosperms, consuming salt-stressed plants from coastal areas adjacent to the Western Interior Seaway, and a physiological difference between dinosaurs and mammals that caused dinosaurs to form tissue with different carbon ratios than would be expected for mammals. A combination of factors is also possible.
Chewing
Between the mid-1980s and the 2000s, the prevailing interpretation of how hadrosaurids processed their food followed the model put forward in 1984 by David B. Weishampel. He proposed that the structure of the skull permitted motion between bones, resulting in backward and forward motion of the lower jaw and outward bowing of the tooth-bearing bones of the upper jaw when the mouth was closed. The teeth of the upper jaw would grind against the teeth of the lower jaw like rasps, processing plant material trapped between them. Such a motion would parallel the effects of mastication in mammals, while accomplishing them in a completely different way. More recent work has challenged the Weishampel model. A study published in 2008 by Casey Holliday and Lawrence Witmer found that ornithopods like Edmontosaurus lacked the types of skull joints seen in those modern animals known to have kinetic skulls (skulls that permit motion between their constituent bones), such as squamates and birds. They proposed that joints which had been interpreted as permitting movement in dinosaur skulls were actually cartilaginous growth zones. An important piece of evidence for Weishampel's model is the orientation of scratches on the teeth, which shows the direction of jaw action. However, other movements could produce similar scratches, such as movement between the bones of the two halves of the lower jaw, and not all models have yet been scrutinized with present techniques. Vincent Williams and colleagues (2009) published additional work on hadrosaurid tooth microwear. They found four classes of scratches on Edmontosaurus teeth. The most common class was interpreted as resulting from an oblique motion, not a simple up-down or front-back motion, which is consistent with the Weishampel model. This motion is thought to have been the primary motion for grinding food. Two scratch classes were interpreted as resulting from forward or backward movement of the jaws. The remaining class was variable and probably resulted from opening the jaws. The combination of movements is more complex than had previously been predicted.
Weishampel developed his model with the aid of a computer simulation. Natalia Rybczynski and colleagues have updated this work with a much more sophisticated three-dimensional animation model, scanning a skull of E. regalis with lasers. They were able to replicate the proposed motion with their model, although they found that additional secondary movements between other bones were required, with maximum separations of between some bones during the chewing cycle. Rybczynski and colleagues were not convinced that the Weishampel model is viable, but noted that they had several improvements to make to their animation. Planned improvements included incorporating soft tissue and tooth wear marks and scratches, which should better constrain movements; they also noted several other hypotheses to test. Further research published in 2012 by Robin Cuthbertson and colleagues found the motions required for Weishampel's model to be unlikely, and favored a model in which movements of the lower jaw produced the grinding action. The lower jaw's joint with the upper jaw would permit anterior–posterior motion along with the usual rotation, and the anterior joint between the two halves of the lower jaw would also permit motion; in combination, the two halves of the lower jaw could move slightly back and forth as well as rotating slightly along their long axes. These motions would account for the observed tooth wear and for a skull more solidly constructed than that modeled by Weishampel.
Growth
In a 2011 study, Campione and Evans recorded data from all known "edmontosaur" skulls from the Campanian and Maastrichtian and used it to plot a morphometric graph, comparing variable features of the skull with skull size. Their results showed that within both recognized Edmontosaurus species, many features previously used to classify additional species or genera were directly correlated with skull size. Campione and Evans interpreted these results as strongly suggesting that the shape of Edmontosaurus skulls changed dramatically as they grew. This has led to several apparent mistakes in classification in the past. The Campanian species Thespesius edmontoni, previously considered a synonym of E. annectens due to its small size and skull shape, is more likely a subadult specimen of the contemporary E. regalis. Similarly, the three previously recognized Maastrichtian edmontosaur species likely represent growth stages of a single species, with E. saskatchewanensis representing juveniles, E. annectens subadults, and Anatotitan copei fully mature adults. The skulls became longer and flatter as the animals grew.
In a 2014 study, researchers proposed that E. regalis reached maturity at 10–15 years of age. In a 2022 study, Wosik and Evans proposed that E. annectens reached maturity by 9 years of age, based on their analysis of various specimens from different localities. They found this result to be similar to that of other hadrosaurs.
Brain and nervous system
The brain of Edmontosaurus has been described in several papers and abstracts through the use of endocasts of the cavity where the brain had been. E. annectens and E. regalis, as well as specimens not identified to species, have been studied in this way. The brain was not particularly large for an animal the size of Edmontosaurus. The space holding it was only about a quarter of the length of the skull, and various endocasts have been measured as displacing to , which does not take into account that the brain may have occupied as little as 50% of the space of the endocast, the rest of the space being taken up by the dura mater surrounding the brain. For example, the brain of the specimen with the 374 millilitre endocast is estimated to have had a volume of . The brain was an elongate structure, and as with other non-mammals, there would have been no neocortex. Like Stegosaurus, the neural canal was expanded in the hips, but not to the same degree: the endosacral space of Stegosaurus had 20 times the volume of its endocranial cast, whereas the endosacral space of Edmontosaurus was only 2.59 times larger in volume.
Pathologies and health
In 2003, evidence of tumors, including hemangiomas, desmoplastic fibroma, metastatic cancer, and osteoblastoma, was described in Edmontosaurus bones. Rothschild et al. tested dinosaur vertebrae for tumors using computerized tomography and fluoroscope screening. Several other hadrosaurids, including Brachylophosaurus, Gilmoreosaurus, and Bactrosaurus, also tested positive. Although more than 10,000 fossils were examined in this manner, the tumors were limited to Edmontosaurus and closely related genera. The tumors may have been caused by environmental factors or genetic propensity.
Osteochondrosis, or surficial pits in bone at places where bones articulate, is also known in Edmontosaurus. This condition, resulting from cartilage failing to be replaced by bone during growth, was found to be present in 2.2% of 224 edmontosaur toe bones. The underlying cause of the condition is unknown. Genetic predisposition, trauma, feeding intensity, alterations in blood supply, excess thyroid hormones, and deficiencies in various growth factors have been suggested. Among dinosaurs, osteochondrosis (like tumors) is most commonly found in hadrosaurids.
Locomotion
Like other hadrosaurids, Edmontosaurus is thought to have been a facultative biped, meaning that it mostly moved on four legs, but could adopt a bipedal stance when needed. It probably went on all fours when standing still or moving slowly, and switched to using the hind legs alone when moving more rapidly. Research conducted by computer modeling in 2007 suggests that Edmontosaurus could run at high speeds, perhaps up to . Further simulations using a subadult specimen estimated as weighing when alive produced a model that could run or hop bipedally, use a trot, pace, or single foot symmetric quadrupedal gait, or move at a gallop. The researchers found to their surprise that the fastest gait was kangaroo-like hopping (maximum simulated speed of ), which they regarded as unlikely based on the size of the animal and lack of hopping footprints in the fossil record, and instead interpreted the result as indicative of an inaccuracy in their simulation. The fastest non-hopping gaits were galloping (maximum simulated speed of ) and running bipedally (maximum simulated speed of ). They found weak support for bipedal running as the most likely option for high-speed movement, but did not rule out high-speed quadrupedal movement.
While long thought to have been aquatic or semiaquatic, hadrosaurids were not as well-suited for swimming as other dinosaurs (particularly theropods, who were once thought to have been unable to pursue hadrosaurids into water). Hadrosaurids had slim hands with short fingers, making their forelimbs ineffective for propulsion, and the tail was also not useful for propulsion because of the ossified tendons that increased its rigidity, and the poorly developed attachment points for muscles that would have moved the tail from side to side.
Social behavior
Extensive bone beds are known for Edmontosaurus, and such groupings of hadrosaurids are used to suggest that they were gregarious, living in groups. Three quarries containing Edmontosaurus remains are identified in a 2007 database of fossil bone beds, from Alberta (Horseshoe Canyon Formation), South Dakota (Hell Creek Formation), and Wyoming (Lance Formation). One edmontosaur bone bed, from claystone and mudstone of the Lance Formation in eastern Wyoming, covers more than a square kilometre, although Edmontosaurus bones are most concentrated in a subsection of this site. It is estimated that disassociated remains pertaining to 10,000 to 25,000 edmontosaurs are present here.
Unlike many other hadrosaurids, Edmontosaurus lacked a bony crest. It may have had soft-tissue display structures in the skull, though: the bones around the nasal openings had deep indentations surrounding the openings, and this pair of recesses is postulated to have held inflatable air sacs, perhaps allowing for both visual and auditory signaling. Edmontosaurus may have been dimorphic, with more robust and more lightly built forms, but it has not been established whether this is related to sexual dimorphism.
Edmontosaurus has been considered a possibly migratory hadrosaurid by some authors. A 2008 review of dinosaur migration studies by Phil R. Bell and Eric Snively proposed that E. regalis was capable of an annual round-trip journey, provided it had the requisite metabolism and fat deposition rates. Such a trip would have required speeds of about , and could have brought it from Alaska to Alberta. In contrast to Bell and Snively, Anusuya Chinsamy and colleagues concluded from a study of bone microstructure that polar Edmontosaurus overwintered.
Paleoecology
Distribution
Edmontosaurus was a wide-ranging genus in both time and space. At the southern end of its distribution, the rock units from which it is known can be divided into two groups by age: the older Horseshoe Canyon and St. Mary River formations, and the younger Frenchman, Hell Creek, and Lance formations. The time span covered by the Horseshoe Canyon Formation and equivalents is also known as the Edmontonian, and the time span covered by the younger units is also known as the Lancian. The Edmontonian and Lancian time intervals had distinct dinosaur faunas. At its northern limit, Edmontosaurus is known from a single locality: the Liscomb Bonebed of the Prince Creek Formation.
The Edmontonian land vertebrate age is defined by the first appearance of Edmontosaurus regalis in the fossil record. Although sometimes reported as of exclusively early Maastrichtian age, the Horseshoe Canyon Formation was of somewhat longer duration. Deposition began approximately 73 million years ago, in the late Campanian, and ended between 68.0 and 67.6 million years ago. Edmontosaurus regalis is known from the lowest of five units within the Horseshoe Canyon Formation, but is absent from at least the second to the top. As many as three quarters of the dinosaur specimens from badlands near Drumheller, Alberta may pertain to Edmontosaurus.
Ecosystem
The Lancian time interval was the last interval before the Cretaceous–Paleogene extinction event that eliminated non-avian dinosaurs. Edmontosaurus was one of the more common dinosaurs of the interval. Robert T. Bakker reported that it made up one-seventh of the large dinosaur sample, with most of the rest (five-sixths) made up of the horned dinosaur Triceratops. The coastal plain Triceratops–Edmontosaurus association, dominated by Triceratops, extended from present-day Colorado to Saskatchewan.
The Lance Formation, as typified by exposures approximately north of Fort Laramie in eastern Wyoming, has been interpreted as a bayou setting similar to the Louisiana coastal plain. It was closer to a large delta than the Hell Creek Formation depositional setting to the north and received much more sediment. Tropical araucarian conifers and palm trees dotted the hardwood forests, differentiating the flora from that of the northern coastal plain. The climate was humid and subtropical, with conifers, palmettos, and ferns in the swamps, and conifers, ash, live oak, and shrubs in the forests. Freshwater fish, salamanders, turtles, diverse lizards, snakes, shorebirds, and small mammals lived alongside the dinosaurs. Small dinosaurs are not known in as great abundance here as in the Hell Creek rocks, but Thescelosaurus once again seems to have been relatively common. Triceratops is known from many skulls, which tend to be somewhat smaller than those of more northern individuals. The Lance Formation is the setting of two edmontosaur "mummies".
Predator-prey relationships
The time span and geographic range of Edmontosaurus overlapped with Tyrannosaurus, and an adult specimen of E. annectens on display in the Denver Museum of Nature and Science shows evidence of a theropod bite in the tail. Counting back from the hip, the thirteenth to seventeenth vertebrae have damaged spines consistent with an attack from the right rear of the animal. One spine has a portion sheared away, and the others are kinked; three have apparent tooth puncture marks. The top of the tail was at least high, and the only theropod species known from the same rock formation that was tall enough to make such an attack is T. rex. The bones are partially healed, but the edmontosaur died before the traces of damage were completely obliterated. The damage also shows signs of bone infection. Kenneth Carpenter, who studied the specimen, noted that there also seems to be a healed fracture in the left hip which predated the attack, because it was more fully healed. He suggested that the edmontosaur was a target because it may have been limping from this earlier injury. Because it survived the attack, Carpenter suggested that it may have outmaneuvered or outrun its attacker, or that the damage to its tail was incurred by the hadrosaurid using it as a weapon against the tyrannosaur. However, more recent studies dispute the idea of an attack, attributing the damage instead to factors unrelated to a tyrannosaur.
Another specimen of E. annectens, pertaining to a long individual from South Dakota, shows evidence of tooth marks from small theropods on its lower jaws. Some of the marks are partially healed. Michael Triebold, informally reporting on the specimen, suggested a scenario in which small theropods attacked the throat of the edmontosaur; the animal survived the initial attack but succumbed to its injuries shortly thereafter. Some edmontosaur bone beds were sites of scavenging. Albertosaurus and Saurornitholestes tooth marks are common at one Alberta bone bed, and Daspletosaurus fed on Edmontosaurus and the fellow hadrosaurid Saurolophus at another Alberta site. However, more recent studies suggest that all evidence for Daspletosaurus being present in the Horseshoe Canyon Formation is referable to Albertosaurus.
| Biology and health sciences | Ornitischians | Animals |
1585399 | https://en.wikipedia.org/wiki/Marginocephalia | Marginocephalia | Marginocephalia (Latin: margin-head) is a clade of ornithischian dinosaurs characterized by a bony shelf or margin at the back of the skull. These fringes were likely used for display. The clade was officially defined in the PhyloCode by Daniel Madzia and colleagues in 2021 as "the smallest clade containing Ceratops montanus, Pachycephalosaurus wyomingensis, and Triceratops horridus". Two clades are included in Marginocephalia: the thick-skulled Pachycephalosauria and the horned Ceratopsia. All members of Marginocephalia were primarily herbivores (though pachycephalosaurs are speculated to have been omnivorous). Basally, they used gastroliths to aid in the digestion of tough plant matter, until tooth batteries evolved convergently in Neoceratopsia (or "new Ceratopsia") and Pachycephalosauria. Marginocephalia first evolved in the Jurassic Period and became more common in the Cretaceous. Basal members were small facultative quadrupeds, while derived members of the group were large obligate quadrupeds. Primitive marginocephalians are found in Asia, but the group later migrated into North America.
Pachycephalosaurs, or "thick-headed reptiles", have primitive features that include basally small sized bodies, obligate bipedalism, and simple teeth with one row in operation at a time that are replaced as they are worn down. As they evolved, pachycephalosaurs evolved much thicker and advanced skull roofs including dome forms with horn-like ornamentation. Some research suggests these domes were used like helmets for protection while head-butting members in intraspecific combat. Some research suggests their necks were not strong enough to support such an impact. Flat-headed pachycephalosaur specimens have been found in Asia, and there is great controversy on the meaning of these flat heads. Recent research suggests the flat heads could be a juvenile state before developing the dome shape in the adult stage. It could also be evidence of sexual dimorphism with the female being more flat-headed.
Ceratopsians, or "horned-faces", differ from pachycephalosaurs in the presence of a rostral bone, or beak. They are also known for having a jugal horn and a thin parietal-squamosal shelf that extends back and up into a frill. This frill could have been used for anchoring jaw muscles, as well as for display. The horns were likely used for establishing dominance, or defending territories. It is also possible they were a factor in sexual display and species recognition. One of the basalmost members of this group is Psittacosaurus, which is one of the most species-rich dinosaur genera from Asia. Ceratopsians later evolved into very large quadrupeds with elaborate facial horns such as Triceratops, Styracosaurus, and Centrosaurus. There was no change in richness of species throughout the Cretaceous before the Cretaceous-Paleogene extinction.
Feeding
Marginocephalians have simple, peg-like teeth surrounded by rhamphotheca, a horny sheath of keratin. The teeth are arranged in batteries for easy replacement and have serrations which may have been useful for cutting up vegetation. Marginocephalia evolved several methods for breaking down vegetation. Pachycephalosaurs had especially large abdomens with broad girths and elongate sacral ribs suggesting the presence of a large stomach. This is presumed to have been useful for breaking down tough vegetation through bacterial fermentation. Another adaptation for advanced vegetation digestion is seen in Ceratopsians, which evolved features to improve their chewing apparatus. Derived ceratopsians have vertical grinding surfaces on their teeth to maximize break-down of tough vegetation. There is also evidence of advanced adductor musculature that extends from a large coronoid process on the mandible up to the ceratopsian frill, which would increase chewing force. Pachycephalosaurs had gastroliths to help in digestion of food, but only primitive ceratopsians, such as Psittacosaurus, have been found to have gastroliths.
Margins and social behavior
Marginocephalian remains reveal significant evidence that these animals were social creatures, much of it related to the many possible functions of the bony skull margins. Possible functions of the variably shaped and sized margins include warding off predators, display, ritualized combat, defense of territory, and establishing social order. Both pachycephalosaurs and ceratopsians show evidence of interspecific communication, and there may be evidence of intraspecific communication.
Pachycephalosaurs, with their dome-shaped heads, are commonly thought to have butted their thick skulls against one another. Their brains and vertebrae are positioned in ways that would protect them from the impact of head-butting, and the columnar structure found in bone remains is consistent with models used to recreate the practice of head-butting. However, some pachycephalosaurs such as Stygimoloch have vascularization in the skull cap that would not have supported head-butting behavior. Thus, the heads could instead have been used as ornamentation, or to butt the softer flanks of other pachycephalosaurs in intraspecific communication or as a form of agonistic behavior.
The frills of ceratopsians are remarkably diverse. They may have served a protective purpose, as the frill sometimes splays over the neck. However, some researchers argue that the frill would have provided little protection from other large dinosaurs such as Tyrannosaurus. Other possible functions include intraspecific communication for mating purposes or as a visual display of territorial protection, as seen in many modern organisms such as red-breasted robins. The frills also could have served species recognition, as they seem to develop fairly early in life. There is also evidence that ceratopsians cared for their young, as bone beds have been found of adult individuals with a nest of juveniles, although some dispute this as viable evidence of parental care. Bone beds containing hundreds of adult ceratopsians have also been found, indicating herd activity. A few specimens have been found with puncture wounds, supporting the use of horns as protective or combative weapons. Other research examining juvenile ceratopsians reveals a change in horn morphology over time, suggesting frills and horns could have been used for intraspecific communication of age. Horns also could have been used for thermoregulation, as indicated by isotope analysis, as an aid in knocking down vegetation, or for horn-locking agonistic behavior.
Sexual dimorphism
The study of sexual dimorphism in dinosaurs is notoriously difficult. The varying size and intricacy of margins in Marginocephalia have shown many possible signs of sexual dimorphism, but although the intricate frills of marginocephalians sometimes appear to present dimorphic features, many researchers doubt the validity of these claims. Stegoceras validum, a pachycephalosaur, can be segregated into two groups based on the size and shape of the skull. These two groups divide the species population in half, which is highly suggestive of sexual dimorphism; however, some report that the two groups may actually represent two separate species. Protoceratops, a ceratopsian, also shows signs of sexual dimorphism, but its frills do not seem to develop until later in life and may be correlated with sexual maturity.
Locomotion
In general, primitive marginocephalians were bipedal or facultatively quadrupedal, while derived forms were obligate quadrupeds. This is especially prominent in Ceratopsia, where only the primitive Psittacosaurus is bipedal. All the derived forms were strong quadrupeds, although their stance is controversial. Some think they were fairly columnar, with front limbs erect under the body, which would have been more efficient for speed; others think the legs were more sprawling, as suggested by the shape of the forelimb bones. Although not as fast, this posture would have been efficient for grazing vegetation on the ground. Less is known about bipedal pachycephalosaur locomotion, although they must have had a fairly broad girth in order to make room for their enlarged gut.
Classification
Cladograms of the group have been published following a 2009 analysis by Zheng and colleagues and after Butler et al., 2011.
| Biology and health sciences | Ornitischians | Animals |
1586000 | https://en.wikipedia.org/wiki/Pandalus%20borealis | Pandalus borealis | Pandalus borealis is a species of caridean shrimp found in cold parts of the northern Atlantic and northern Pacific Oceans, although the latter population now often is regarded as a separate species, P. eous. The Food and Agriculture Organization refers to them as the northern prawn. Other common names include pink shrimp, deepwater prawn, deep-sea prawn, Nordic shrimp, great northern prawn, northern shrimp, coldwater prawn and Maine shrimp.
Distribution
Pandalus borealis usually lives on soft, muddy bottoms at depths of , in waters with a temperature of , although it has been recorded from and . P. borealis thrives in waters where the salinity ranges between 32 and 35 ppt, depending on where the shrimp are in their life cycle. The distribution of the North Atlantic nominate subspecies P. b. borealis ranges from New England in the United States, Canada's eastern seaboard (off Newfoundland and Labrador and eastern Baffin Island in Nunavut), southern and eastern Greenland, Iceland, Svalbard, Norway and the North Sea as far south as the English Channel. The North Pacific P. b. eous is found from Japan and Korea, through the Sea of Okhotsk, across the Bering Strait, and as far south as the U.S. state of California. Instead of regarding it as a subspecies, the North Pacific population is often recognized as a separate species, P. eous.
Ecology
Trophic DNA metabarcoding studies show that Pandalus borealis plays a key role in Arctic food webs, by feeding on a diverse array of prey, including gelatinous zooplankton and chaetognaths. High diversity of fish DNA can also be detected in their stomachs, probably as a result of their role as generalized scavengers. This has led some authors to propose Pandalus borealis as an efficient natural sampler for assessing molecular fish diversity in Arctic marine ecosystems.
Physiology
In their up to eight-year lifespan, males can reach a length of , while females can reach long, although typical sizes are much smaller. The size of Pandalus borealis individuals can differ based on age, temperature of the environment and sex. Higher temperature water has been associated with faster growth.
The shrimp are protandrous hermaphrodites: they are born male, but after approximately two and a half years their testes turn to ovaries and they complete their lives as females. The northern shrimp's spawning season begins in the late summer, usually offshore. By early fall, the females start to extrude their eggs onto their abdomens, at which point they move inshore, where the eggs hatch in the winter.
Commercial fishing
Pandalus borealis is an important food resource, and has been widely fished since the early 1900s in Norway, and later in other countries following Johan Hjort's practical discoveries of how to locate them. In Canada, these shrimp are sold peeled, cooked and frozen in bags in supermarkets, and are consumed as appetizers.
Northern shrimp have a short lifespan, which contributes to a stock that varies from year to year. However, the species is not considered overfished, as both reported stock sizes and harvests have remained large.
In Canada, the annual harvest limit is set to 164,000 tonnes (2008). The Canadian fishery began in the 1980s and expanded in 1990s.
In New England, northern shrimp were a valuable fishery stock from the late 1950s to 1978. Pandalus borealis was in high demand because it was considered sweeter and tastier than Pacific shrimp. Fishery production peaked in 1969, with landings of 28.3 million pounds.
In 2013, the Atlantic States Marine Fisheries Commission (which covers the Atlantic seaboard of the United States) determined that their stocks of P. borealis were too low and shut down the New England fishery. This was the first cancellation in 35 years.
The fishery has yet to recover since its collapse, and studies from 2018 report that northern shrimp remain in a depleted condition. With temperatures increasing yearly and a low spawning stock biomass (SSB), spawning conditions for northern shrimp remain unfavorable. Colder temperatures and a higher spawning biomass would increase recruitment and, in the long run, population size. However, surface temperatures off the coast of Maine continue to rise yearly due to climate change, impacting the region's marine fisheries.
Uses
Beyond human consumption, shrimp alkaline phosphatase (SAP), an enzyme used in molecular biology, is obtained from Pandalus borealis, and the species' carapace is a source of chitosan, a versatile chemical used for such different applications as treating bleeding wounds, filtering wine or improving the soil in organic farming.
| Biology and health sciences | Shrimps and prawns | Animals |
8912 | https://en.wikipedia.org/wiki/Drake%20equation | Drake equation | The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy.
The equation was formulated in 1961 by Frank Drake, not for purposes of quantifying the number of civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for extraterrestrial intelligence (SETI). The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life. It is more properly thought of as an approximation than as a serious attempt to determine a precise number.
Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the estimated values for several of its factors are highly conjectural, the combined multiplicative effect being that the uncertainty associated with any derived value is so large that the equation cannot be used to draw firm conclusions.
Equation
The Drake equation is:
N = R* · fp · ne · fl · fi · fc · L
where
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
R* = the average rate of star formation in our Galaxy.
fp = the fraction of those stars that have planets.
ne = the average number of planets that can potentially support life per star that has planets.
fl = the fraction of planets that could support life that actually develop life at some point.
fi = the fraction of planets with life that go on to develop intelligent life (civilizations).
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
L = the length of time for which such civilizations release detectable signals into space.
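For readers who want to experiment with the parameters, the equation translates directly into a one-line function. The following Python sketch is purely illustrative; the function name drake_n and its argument names are hypothetical, not drawn from any SETI source.

```python
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Evaluate the Drake equation: N = R* * fp * ne * fl * fi * fc * L.

    r_star   -- average rate of star formation (stars per year)
    f_p      -- fraction of stars that have planets
    n_e      -- potentially habitable planets per star that has planets
    f_l      -- fraction of those planets that actually develop life
    f_i      -- fraction of life-bearing planets that develop intelligence
    f_c      -- fraction of intelligent species releasing detectable signals
    lifetime -- years such a civilization remains detectable (L)
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime
```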
This form of the equation first appeared in Drake's 1965 paper.
History
In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal Nature with the provocative title "Searching for Interstellar Communications". Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum.
Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10 followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million million has the right combination of chemicals, temperature, water, days and nights to support planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where life has been forged by evolution."
Seven months after Cocconi and Morrison published their article, Drake began searching for extraterrestrial intelligence in an experiment called Project Ozma. It was the first systematic search for signals from communicative extraterrestrial civilizations. Using the dish of the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, Drake monitored two nearby Sun-like stars: Epsilon Eridani and Tau Ceti, slowly scanning frequencies close to the 21 cm wavelength for six hours per day from April to July 1960. The project was well designed, inexpensive, and simple by today's standards. It detected no signals.
Soon thereafter, Drake hosted the first search for extraterrestrial intelligence conference on detecting their radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting.
The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan, and radio-astronomer Otto Struve. These participants called themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall.
Usefulness
The Drake equation results in a summary of the factors affecting the likelihood that we might detect radio-communication from intelligent extraterrestrial life. The last three parameters, fi, fc, and L, are not known and are very difficult to estimate, with values ranging over many orders of magnitude (see below). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the contemplation of all the various concepts which scientists must incorporate when considering the question of life elsewhere, and it gives the question of life elsewhere a basis for scientific analysis. The equation has helped draw attention to some particular scientific problems related to life in the universe, for example abiogenesis, the development of multi-cellular life, and the development of intelligence itself.
Within the limits of existing human technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved significantly since the early 1960s. SETI efforts since 1961 have conclusively ruled out widespread alien emissions near the 21 cm wavelength of the hydrogen frequency.
Estimates
Original estimates
There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were:
R* = 1 yr⁻¹ (1 star formed per year, on the average over the life of the galaxy; this was regarded as conservative)
fp = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets)
ne = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life)
fl = 1 (100% of these planets will develop life)
fi = 1 (100% of which will develop intelligent life)
fc = 0.1 to 0.2 (10–20% of which will be able to communicate)
L = somewhere between 1000 and 100,000,000 years
Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results). Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that N ≈ L, and there were probably between 1000 and 100,000,000 planets with civilizations in the Milky Way Galaxy.
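As a sanity check, plugging the minimum and maximum 1961 values into the hypothetical drake_n helper sketched earlier reproduces this range:

```python
# Drake's 1961 'educated guesses', taken from the list above.
n_min = drake_n(r_star=1, f_p=0.2, n_e=1, f_l=1, f_i=1, f_c=0.1, lifetime=1_000)
n_max = drake_n(r_star=1, f_p=0.5, n_e=5, f_l=1, f_i=1, f_c=0.2, lifetime=100_000_000)
print(n_min)  # 20.0        -- the stated minimum
print(n_max)  # 50000000.0  -- the stated maximum
```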
Current estimates
This section discusses and attempts to list the best current estimates for the parameters of the Drake equation.
Rate of star creation in this Galaxy, R*
Calculations in 2010, from NASA and the European Space Agency indicate that the rate of star formation in this Galaxy is about of material per year. To get the number of stars per year, we divide this by the initial mass function (IMF) for stars, where the average new star's mass is about . This gives a star formation rate of about 1.5–3 stars per year.
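The conversion from a mass-based star formation rate to a number of stars per year is a single division. Because the specific figures are missing from this copy of the text, the numbers in the sketch below are assumed placeholders, chosen only to land within the 1.5–3 stars-per-year range quoted above:

```python
# Assumed illustrative values -- not the figures from the NASA/ESA studies.
mass_formed_per_year = 1.0  # solar masses of new stellar material per year (placeholder)
mean_new_star_mass = 0.5    # IMF-averaged mass of a new star, in solar masses (placeholder)

stars_per_year = mass_formed_per_year / mean_new_star_mass
print(stars_per_year)  # 2.0 -- within the ~1.5-3 stars/year range cited above
```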
Fraction of those stars that have planets, fp
Analysis of microlensing surveys, in 2012, has found that fp may approach 1—that is, stars are orbited by planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way star.
Average number of planets that might support life per star that has planets, ne
In November 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy; 11 billion of these estimated planets may be orbiting sun-like stars. Since there are about 100 billion stars in the galaxy, this implies that ne is roughly 0.4. The nearest planet in the habitable zone is Proxima Centauri b, which is about 4.2 light-years away.
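The 0.4 figure follows from a one-line ratio of the two counts quoted above:

```python
habitable_earth_sized_planets = 40e9  # Kepler-based estimate quoted above
stars_in_galaxy = 100e9               # rough Milky Way star count quoted above

n_e_estimate = habitable_earth_sized_planets / stars_in_galaxy
print(n_e_estimate)  # 0.4
```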
The consensus at the Green Bank meeting was that ne had a minimum value between 3 and 5. Dutch science journalist Govert Schilling has opined that this is optimistic. Even if planets are in the habitable zone, the number of planets with the right proportion of elements is difficult to estimate. Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time.
The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets.
On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets. It is now estimated that even tidally locked planets close to red dwarf stars might have habitable zones, although the flaring behavior of these stars might speak against this. The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moons Titan and Enceladus) adds further uncertainty to this figure.
The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for planets, including being in galactic zones with suitably low radiation, high star metallicity, and low enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have a planetary system with large gas giants which provide bombardment protection without a hot Jupiter; and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to generate seasonal variation.
Fraction of the above that actually go on to develop life, fl
Geological evidence from the Earth suggests that fl may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet) and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, without assuming that the underlying distribution of fl is the same for all planets in the Milky Way, there are zero degrees of freedom, permitting no valid estimates to be made. If life (or evidence of past life) were to be found on Mars, Europa, Enceladus or Titan that developed independently from life on Earth, it would imply a value for fl close to 1. While this would raise the number of degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet. It is also possible that life arose more than once, but that other branches were out-competed, or died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms." As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship".
In 2020, a paper by scholars at the University of Nottingham proposed an "Astrobiological Copernican" principle, based on the Principle of Mediocrity, and speculated that "intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution". In the authors' framework, fl, fi, and fc are all set to a probability of 1 (certainty). Their resultant calculation concludes there are more than thirty current technological civilizations in the galaxy (disregarding error bars).
Fraction of the above that develops intelligent life, fi
This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent, and from this infer a tiny value for fi. Likewise, proponents of the Rare Earth hypothesis, notwithstanding their low value for ne above, think that a low value for fi dominates the analysis. Those who favor higher values note the generally increasing complexity of life over time, concluding that the appearance of intelligence is almost inevitable, implying an fi approaching 1. Skeptics point out that the large spread of values in this factor and others makes all estimates unreliable. (See Criticism.)
In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the snowball Earth or research into extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant, since a discovery that life did form on Mars but ceased to exist might raise the estimate of fl, but would indicate that in half the known cases, intelligent life did not develop.
Estimates of fi have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation.
There has been quantitative work to begin to define fi. One example is a Bayesian analysis published in 2020. In the conclusion, the author cautions that this study applies only to Earth's conditions. In Bayesian terms, the study favors the formation of intelligence on a planet with identical conditions to Earth, but does not do so with high confidence.
Planetary scientist Pascal Lee of the SETI Institute proposes that this fraction is very low (0.0002). He based this estimate on how long it took Earth to develop intelligent life (1 million years since Homo erectus evolved, compared to 4.6 billion years since Earth formed).
Fraction of the above revealing their existence via signal release into space, fc
For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts, covering only a tiny fraction of the stars, that might look for human presence (see the Arecibo message, for example). There is considerable speculation as to why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than present-day humans. By this standard, the Earth is a communicating civilization.
Another question is what percentage of civilizations in the galaxy are close enough for us to detect, assuming that they send out signals. For example, existing Earth radio telescopes could only detect Earth radio transmissions from roughly a light year away.
Lifetime of such a civilization wherein it communicates its signals into space, L
Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly civilizations. Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Shermer's results that the fall of most of these civilizations was followed by later civilizations that carried on their technologies, so it is doubtful that they are separate civilizations in the context of the Drake equation. In the expanded version, including a reappearance number, this lack of specificity in defining single civilizations does not matter for the result, since such a civilization turnover could be described as an increase in the reappearance number rather than an increase in L, stating that a civilization reappears in the form of the succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid.
David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It will then last for an indefinite period of time, making the value for L potentially billions of years. If this is the case, then he proposes that the Milky Way Galaxy may have been steadily accumulating advanced civilizations since it formed. He proposes that the last factor, L, be replaced with fIC · T, where fIC is the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out), and T represents the length of time during which this process has been going on. This has the advantage that T would be a relatively easy-to-discover number, as it would simply be some fraction of the age of the universe.
It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other.
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare. Paleobiologist Olev Vinn suggests that the lifetime of most technological civilizations is brief due to inherited behavior patterns present in all intelligent organisms. These behaviors, incompatible with civilized conditions, inevitably lead to self-destruction soon after the emergence of advanced technologies.
An intelligent civilization might not be organic, as some have suggested that artificial general intelligence may replace humanity.
Range of results
As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions, as the values used in portions of the Drake equation are not well established. In particular, the result can be N ≪ 1, meaning we are likely alone in the galaxy, or N ≫ 1, implying there are many civilizations we might contact. One of the few points of wide agreement is that the presence of humanity implies a probability of intelligence arising of greater than zero.
As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value of fp · ne · fl = 10⁻⁵, Mayr's view on intelligence arising, Drake's view of communication, and Shermer's estimate of lifetime:
R* = 1.5–3 yr⁻¹, fp · ne · fl = 10⁻⁵, fi = 10⁻⁹, fc = 0.2 [Drake, above], and L = 304 years
gives:
N = 9.1 × 10⁻¹³ to 1.8 × 10⁻¹², i.e., suggesting that we are probably alone in this galaxy, and possibly in the observable universe.
On the other hand, with larger values for each of the parameters above, values of N can be derived that are greater than 1. The following higher values have been proposed for each of the parameters:
R* = 1.5–3 yr⁻¹, fp = 1, ne = 0.2, fl = 0.13, fi = 1, fc = 0.2 [Drake, above], and L = 10⁹ years
Use of these parameters gives N = 7,800,000 to 15,600,000.
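Because the equation is a simple product of its factors, both scenarios are easy to verify directly. The following Python sketch is illustrative only and is not part of the original text; it merely multiplies out the parameter values quoted above:

```python
# Illustrative sketch: evaluating the Drake equation for the low and high
# parameter sets quoted above. Values are those cited in the text, not
# new estimates.

def drake(r_star, fp_ne_fl, f_i, f_c, lifetime):
    """N = R* · (fp·ne·fl) · fi · fc · L, with the three middle
    planetary/biological factors folded into one number."""
    return r_star * fp_ne_fl * f_i * f_c * lifetime

# Low estimate: rare Earth value for fp·ne·fl, Mayr's fi, Drake's fc,
# Shermer's L; R* spans 1.5-3 stars per year.
low = [drake(r, 1e-5, 1e-9, 0.2, 304) for r in (1.5, 3.0)]

# High estimate: fp = 1, ne = 0.2, fl = 0.13, fi = 1, fc = 0.2, L = 1e9 yr.
high = [drake(r, 1 * 0.2 * 0.13, 1.0, 0.2, 1e9) for r in (1.5, 3.0)]

print(low)   # ~[9.1e-13, 1.8e-12] -> likely alone in the galaxy
print(high)  # ~[7.8e6, 1.56e7]    -> millions of civilizations
```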
Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.
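As a rough illustration of how such simulations work (this is a toy sketch, not any published model, and the sampling ranges below are arbitrary placeholders), each uncertain factor can be sampled log-uniformly and the spread of the resulting product examined:

```python
# Toy Monte Carlo sketch: sample each uncertain factor log-uniformly over a
# plausible range and propagate the spread through the product.
# All ranges are illustrative placeholders, not published estimates.
import math
import random

def sample_log_uniform(lo, hi):
    """Draw a value whose logarithm is uniform between log(lo) and log(hi)."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

ranges = {
    "R_star": (1.5, 3.0),   # star formation rate, per year
    "f_p":    (0.9, 1.0),   # fraction of stars with planets
    "n_e":    (0.1, 1.0),   # habitable planets per such star
    "f_l":    (1e-3, 1.0),  # fraction developing life
    "f_i":    (1e-9, 1.0),  # fraction developing intelligence
    "f_c":    (0.01, 0.2),  # fraction releasing detectable signals
    "L":      (300, 1e9),   # communicating lifetime, years
}

samples = []
for _ in range(100_000):
    n = 1.0
    for lo, hi in ranges.values():
        n *= sample_log_uniform(lo, hi)
    samples.append(n)

samples.sort()
print("median N:", samples[len(samples) // 2])
print("90% interval:", samples[5_000], "to", samples[95_000])
```

Even with these made-up ranges, the resulting values of N span many orders of magnitude, which is the point the Monte Carlo studies make.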
Possible former technological civilizations
In 2016, Adam Frank and Woodruff Sullivan modified the Drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be, to give the result that Earth hosts the only technological species that has ever arisen, for two cases: (a) this Galaxy, and (b) the universe as a whole. By asking this different question, one removes the lifetime and simultaneous communication uncertainties. Since the number of habitable planets per star can today be reasonably estimated, the only remaining unknown in the Drake equation is the probability that a habitable planet ever develops a technological species over its lifetime. For Earth to host the only technological species that has ever occurred in the universe, they calculate that the probability of any given habitable planet ever developing a technological species must be less than 2.5 × 10⁻²⁴. Similarly, for Earth to have been the only case of hosting a technological species over the history of this Galaxy, the odds of a habitable zone planet ever hosting a technological species must be less than 1.7 × 10⁻¹¹ (about 1 in 60 billion). The figure for the universe implies that it is extremely unlikely that Earth hosts the only technological species that has ever occurred. On the other hand, for this Galaxy one must conclude that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this Galaxy.
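The threshold in this argument is just the reciprocal of the habitable-planet count, which can be checked in a line or two. In the sketch below, the planet counts are rough assumptions chosen to match the "1 in 60 billion" figure and the 2-trillion-galaxy census cited later in this article, not Frank and Sullivan's own census:

```python
# Back-of-envelope check of the Frank-Sullivan thresholds: for Earth to be
# unique, the per-planet probability of a technological species must fall
# below 1 / (number of habitable planets). Counts are rough assumptions.
planets_per_galaxy = 6e10   # ~60 billion habitable-zone planets (assumed)
galaxies = 2e12             # ~2 trillion galaxies in the observable universe

print(1 / planets_per_galaxy)               # ~1.7e-11, the galactic threshold
print(1 / (planets_per_galaxy * galaxies))  # ~8e-24, same order as 2.5e-24
```

The cruder universe figure differs from the published 2.5 × 10⁻²⁴ only because the published value rests on a more careful planet census.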
Modifications
As many observers have pointed out, the Drake equation is a very simple model that omits potentially relevant parameters, and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms.
Combining the estimates of the original six factors by major researchers via a Monte Carlo procedure leads to a best value for the non-longevity factors of 0.85 per year. This result differs insignificantly from the estimate of unity given by both Drake and the Cyclops report.
Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society". Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed.
Colonization It has been proposed to generalize the Drake equation to include additional effects of alien civilizations colonizing other star systems. Each original site expands with an expansion velocity v, and establishes additional sites that survive for a lifetime L. The result is a more complex set of three equations.
Reappearance factor The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the same planet. Hence, if nr is the average number of times a new civilization reappears on the same planet where a previous civilization once has appeared and ended, then the total number of civilizations on such a planet would be 1 + nr, which is the actual reappearance factor added to the equation.
The factor nr depends on what generally is the cause of civilization extinction. If it is generally by temporary uninhabitability, for example a nuclear winter, then nr may be relatively high. On the other hand, if it is generally by permanent uninhabitability, such as stellar evolution, then nr may be almost zero. In the case of total life extinction, a similar factor may be applicable for fl, that is, how many times life may appear on a planet where it has appeared once.
METI factor Alexander Zaitsev argued that being in a communicative phase and emitting dedicated messages are not the same. For example, humans, although being in a communicative phase, are not a communicative civilization; we do not practise such activities as the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (messaging to extraterrestrial intelligence) into the classical Drake equation. He defined the factor as "the fraction of communicative civilizations with clear and non-paranoid planetary consciousness", or, alternatively expressed, the fraction of communicative civilizations that actually engage in deliberate interstellar transmission.
The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate.
Biogenic gases Astronomer Sara Seager proposed a revised equation that focuses on the search for planets with biosignature gases: gases produced by living organisms that can accumulate in a planet's atmosphere to levels detectable with remote space telescopes.
The Seager equation looks like this:
N = N* · FQ · FHZ · FO · FL · FS
where:
N = the number of planets with detectable signs of life
N* = the number of stars observed
FQ = the fraction of stars that are quiet
FHZ = the fraction of stars with rocky planets in the habitable zone
FO = the fraction of those planets that can be observed
FL = the fraction that have life
FS = the fraction on which life produces a detectable signature gas
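Like the Drake equation, Seager's version is a plain product and can be evaluated directly. In the sketch below, every parameter value is an illustrative placeholder, not one of Seager's published estimates:

```python
# Sketch of Seager's biosignature-gas equation, N = N* FQ FHZ FO FL FS.
# All parameter values below are illustrative placeholders.
def seager(n_stars, f_quiet, f_hz, f_observable, f_life, f_signature):
    return n_stars * f_quiet * f_hz * f_observable * f_life * f_signature

n = seager(
    n_stars=30_000,      # stars surveyed (placeholder)
    f_quiet=0.2,         # fraction of quiet stars
    f_hz=0.15,           # rocky planets in the habitable zone
    f_observable=0.001,  # geometry and instrument limits
    f_life=1.0,          # optimistic assumption
    f_signature=0.5,     # life produces a detectable gas
)
print(n)  # ~0.45 detectable planets in this toy scenario
```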
Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?"
Carl Sagan's version of the Drake equation American astronomer Carl Sagan made some modifications to the Drake equation and presented them in the 1980 program Cosmos: A Personal Voyage. The modified equation is shown below:
N = N* · fp · ne · fl · fi · fc · fL
where
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
N* = the number of stars in the Milky Way Galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
fL = the fraction of a planetary lifetime graced by a technological civilization
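One way to see that Sagan's form is consistent with the original is to note that, under rough steady-state assumptions, N* · fL reduces to R* · L. The sketch below is only an illustration of that bookkeeping, not part of the source; every number in it is a placeholder:

```python
# Sketch (not from the source): under rough steady-state assumptions the
# Sagan form reduces to the original. If N_star stars formed at a roughly
# constant rate over a time T, then R_star ~ N_star / T; and if a planetary
# lifetime is also ~T, then f_L ~ L / T, so N_star * f_L ~ R_star * L.
# All numbers below are placeholders.
N_star = 2.5e11  # stars in the galaxy (placeholder)
T = 1.0e10       # years of star formation / planetary lifetime (placeholder)
L = 1.0e4        # communicating lifetime in years (placeholder)

R_star = N_star / T  # ~25 stars per year
f_L = L / T          # ~1e-6 of a planetary lifetime

print(N_star * f_L)  # 250000.0
print(R_star * L)    # 250000.0, the same product
```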
Criticism
Criticism of the Drake equation is varied. Firstly, many of the terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around the present day understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.
Others, such as astrophysicist Ethan Siegel, point out that the equation was formulated before our understanding of the universe had matured.
One reply to such criticisms is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference.
Fermi paradox
A civilization lasting for tens of millions of years would be able to spread throughout the galaxy, even at the slow speeds foreseeable with present-day technology. However, no confirmed signs of civilizations or intelligent life elsewhere have been found, either in this Galaxy or in the observable universe of 2 trillion galaxies. According to this line of thinking, the tendency to fill (or at least explore) all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?".
A large number of explanations have been proposed to explain this lack of contact; a book published in 2015 elaborated on 75 different explanations. In terms of the Drake Equation, the explanations can be divided into three classes:
Few intelligent civilizations ever arise. This is an argument that at least one of the first few terms, R* · fp · ne · fl · fi, has a low value. The most common suspect is fi, but explanations such as the rare Earth hypothesis argue that ne is the small term.
Intelligent civilizations exist, but we see no evidence, meaning fc is small. Typical arguments include that civilizations are too far apart, it is too expensive to spread throughout the galaxy, civilizations broadcast signals for only a brief period of time, communication is dangerous, and many others.
The lifetime of intelligent, communicative civilizations is short, meaning the value of L is small. Drake suggested that a large number of extraterrestrial civilizations would form, and he further speculated that the lack of evidence of such civilizations may be because technological civilizations tend to disappear rather quickly. Typical explanations include that it is the nature of intelligent life to destroy itself, that it is the nature of intelligent life to destroy others, and that civilizations tend to be destroyed by natural events, among others.
These lines of reasoning lead to the Great Filter hypothesis, which states that since there are no observed extraterrestrial civilizations despite the vast number of stars, at least one step in the process must be acting as a filter to reduce the final value. According to this view, either it is very difficult for intelligent life to arise, or the lifetime of technologically advanced civilizations must be relatively short, or the period of time during which they reveal their existence must be relatively short.
An analysis by Anders Sandberg, Eric Drexler and Toby Ord suggests "a substantial ex ante (predicted) probability of there being no other intelligent life in our observable universe".
In fiction and popular culture
The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown on Star Trek, the television series he created. However, Roddenberry did not have the equation with him, and he was forced to "invent" his own version for his original proposal.
Regarding Roddenberry's fictional version of the equation, Drake himself commented that a number raised to the first power is just the number itself.
A commemorative plate on NASA's Europa Clipper mission, launched in October 2024, features a poem by the U.S. Poet Laureate Ada Limón, waveforms of the word "water" in 103 languages, a schematic of the water hole, the Drake equation, and a portrait of planetary scientist Ron Greeley.
The track Abiogenesis on the Carbon Based Lifeforms album World of Sleepers features the Drake equation in a spoken voice-over.
| Physical sciences | Planetary science | Astronomy |
8937 | https://en.wikipedia.org/wiki/Dsungaripterus | Dsungaripterus | Dsungaripterus is a genus of dsungaripterid pterosaur which lived during the Early Cretaceous in what is now China and possibly South Korea. Its first fossil was found in the Tugulu Group (Lianmuqin and Shengjinkou Formations) of the Junggar Basin.
Description
Dsungaripterus weii had a wingspan of 3 to 3.5 metres. Like most dsungaripteroids it had a rather robust skeleton with thick bone walls and stout body proportions, suggesting a mostly terrestrial lifestyle. The flight style of these animals is unclear, but it was probably punctuated by abrupt landings and extensive flapping.
The skull of Dsungaripterus, measuring over 40 centimetres long, bore a low bone crest that ran from the base of the skull to halfway along the beak. Dsungaripterus's head and neck were together almost a metre long. Its most notable feature is its long, narrow, upcurved jaws ending in a pointed tip. It had no teeth in the front part of its jaws, which were probably used to remove prey from cracks in rocks and/or from the sandy, muddy inland environments it inhabited. It had knobbly, flat teeth further back in the jaw that were well suited for crushing the armor of shellfish and other hard food items. Dsungaripterids are therefore commonly interpreted as durophagous and possibly piscivorous pterosaurs. Additionally, Dsungaripterus had a palate similar to that of azhdarchoids.
History of discovery
Dsungaripterus was described and named in 1964 by Yang Zhongjian. The genus name combines a reference to the Junggar Basin with a Latinized Greek pteron, "wing". The type species is Dsungaripterus weii; the specific name honors paleontologist C.M. Wei of the Palaeontological Division, Institute of Science, Bureau of Petroleum of Xinjiang. The holotype is IVPP V-2776, a partial skull and skeleton from the Lianmuqin Formation. In 1973, more material was found in the Shengjinkou Formation, including almost complete skulls.
In 1980 Peter Galton renamed Pterodactylus brancai (Reck 1931) from the Tendaguru Formation as Dsungaripterus brancai, but the identification is now commonly rejected. In 1982 Natasha Bakhurina named a new species, Dsungaripterus parvus, based on a smaller skeleton from Mongolia. This was later renamed "Phobetor", a preoccupied name, and in 2009 it was concluded to be identical to Noripterus. A dsungaripterid wing finger phalanx reported in 2002 from the Hasandong Formation of South Korea was identified in 2015 and 2018 as Dsungaripterus? cf. D. weii.
Classification
Dsungaripterus was classified by Yang as a member of the Dsungaripteridae. A phylogenetic analysis presented by Andres and colleagues in 2014 recovered Dsungaripterus within the clade Dsungaripteromorpha (a subgroup within the Azhdarchoidea), and more specifically within the Dsungaripteridae, as the sister taxon to Domeykodactylus.
In 2019, a different topology was published by Kellner and colleagues. In this study, Dsungaripterus was recovered outside the Azhdarchoidea, within the larger group Tapejaroidea, as the sister taxon to Noripterus.
| Biology and health sciences | Pterosaurs | Animals |
8972 | https://en.wikipedia.org/wiki/Dagger | Dagger | A dagger is a fighting knife with a very sharp point and usually one or two sharp edges, typically designed or capable of being used as a cutting or thrusting weapon. Daggers have been used throughout human history for close combat confrontations, and many cultures have used adorned daggers in ritual and ceremonial contexts. The distinctive shape and historic usage of the dagger have made it iconic and symbolic. A dagger in the modern sense is a weapon designed for close-proximity combat or self-defense; due to its use in historic weapon assemblages, it has associations with assassination and murders. Double-edged knives, however, play different sorts of roles in different social contexts.
A wide variety of thrusting knives have been described as daggers, including knives that feature only a single cutting edge, such as the European rondel dagger or the Afghan pesh-kabz, or, in some instances, no cutting edge at all, such as the stiletto of the Renaissance. However, in the last hundred years or so, in most contexts, a dagger has certain definable characteristics, including a short blade with a sharply tapered point, a central spine or fuller, and usually two cutting edges sharpened the full length of the blade, or nearly so. Most daggers also feature a full crossguard to keep the hand from riding forwards onto the sharpened blade edges.
Daggers are primarily weapons, so knife legislation in many places restricts their manufacture, sale, possession, transport, or use.
History
Antiquity
The earliest daggers were made of materials such as flint, ivory or bone in Neolithic times.
Copper daggers appeared first in the early Bronze Age, in the 3rd millennium BC, and copper daggers of Early Minoan III (2400–2000 BC) were recovered at Knossos.
In ancient Egypt, daggers were usually made of copper or bronze, while royalty had gold weapons. At least since pre-dynastic Egypt, daggers were adorned as ceremonial objects with golden hilts, and later with even more ornate and varied construction. One early silver dagger was recovered with a midrib design. The 1924 opening of the tomb of Tutankhamun revealed two daggers, one with a gold blade and one of smelted iron. It is held that mummies of the Eleventh Dynasty were buried with bronze sabres; a bronze dagger of Thut-mes III (Eighteenth Dynasty) survives, as do bronze armour, swords and daggers of Mene-ptah II (Nineteenth Dynasty).
Iron production did not begin until 1200 BC, and iron ore was not found in Egypt, making the iron dagger rare; its context suggests that it was valued on a level equal to that of its ceremonial gold counterpart. These facts, and the composition of the dagger, had long suggested a meteoritic origin; however, the evidence was not entirely conclusive until June 2016, when researchers using X-ray fluorescence spectrometry confirmed similar proportions of metals (iron, 10% nickel, and 0.6% cobalt) in a meteorite discovered in the area, deposited by an ancient meteor shower.
One of the earliest objects made of smelted iron is a dagger dating to before 2000 BC, found in a context that suggests it was treated as an ornamental object of great value. Found in a Hattic royal tomb dated about 2500 BC, at Alaca Höyük in northern Anatolia, the dagger has a smelted iron blade and a gold handle.
The artisans and blacksmiths of Iberia in what is now southern Spain and southwestern France produced various iron daggers and swords of high quality from the 5th to the 3rd century BC, in ornamentation and patterns influenced by Greek, Punic (Carthaginian), and Phoenician culture. The exceptional purity of Iberian iron and the sophisticated method of forging, which included cold hammering, produced double-edged weapons of excellent quality. One can find technologically advanced designs such as folding knives rusted among the artifacts of many Second Iberian Iron Age cremation burials or in Roman Empire excavations all around Spain and the Mediterranean. Iberian infantrymen carried several types of iron daggers, most of them based on shortened versions of double-edged swords, but the true Iberian dagger had a triangular-shaped blade. Iberian daggers and swords were later adopted by Hannibal and his Carthaginian armies. The Lusitanii, a pre-Celtic people dominating the lands west of Iberia (most of modern Portugal and Extremadura) successfully held off the Roman Empire for many years with a variety of innovative tactics and light weapons, including iron-bladed short spears and daggers modeled after Iberian patterns.
During the Roman Empire, legionaries were issued a pugio (from the Latin pugnō, "fight"), a double-edged iron thrusting dagger. The design and fabrication of the pugio was taken directly from Iberian daggers and short swords; the Romans even adopted the triangular-bladed Iberian dagger, which they called the parazonium. Like the gladius, the pugio was most often used as a thrusting (stabbing) weapon. As an extreme close-quarter combat weapon, the pugio was the Roman soldier's last line of defense. When not in battle, the pugio served as a convenient utility knife.
Middle Ages
The term dagger appears only in the Late Middle Ages, reflecting the fact that while the dagger had been known in antiquity, it had disappeared during the Early Middle Ages, replaced by the hewing knife or seax.
The dagger reappeared in the 12th century as the "knightly dagger", or more properly cross-hilt or quillon dagger, and was developed into a common arm and tool for civilian use by the late medieval period.
The earliest known depiction of a cross-hilt dagger is the so-called "Guido relief" inside the Grossmünster of Zürich. A number of depictions of the fully developed cross-hilt dagger are found in the Morgan Bible. Many of these cross-hilt daggers resemble miniature swords, with cross guards and pommels very similar in form to swords of the period. Others, however, are not an exact match to known sword designs, having for example pommel caps, large hollow star-shaped pommels on so-called "Burgundian heraldic daggers", or antenna-style cross and pommel reminiscent of Hallstatt-era daggers. The cross-hilt type persisted well into the Renaissance.
The Old French term dague appears to have referred to these weapons in the 13th century, alongside other terms such as poignal and basilard. The Middle English dagger is used from the 1380s.
During this time, the dagger was often employed in the role of a secondary defense weapon in close combat. The knightly dagger evolved into the larger baselard knife in the 14th century. During the 14th century, it became fairly common for knights to fight on foot to strengthen the infantry defensive line, which necessitated greater use of daggers. At Agincourt (1415), archers used them to dispatch dismounted knights by thrusting the narrow blades through helmet vents and other apertures. The baselard was considered an intermediate between a short sword and a long dagger, and became popular also as a civilian weapon. Sloane MS. 2593 records a song satirizing the use of oversized baselard knives as fashion accessories. Weapons of this sort, called anelaces, somewhere between a large dagger and a short sword, were much in use in 14th-century England as civilians' accoutrements, worn "suspended by a ring from the girdle".
In the Late Middle Ages, knives with blade designs that emphasized thrusting attacks, such as the stiletto, became increasingly popular, and some thrusting knives commonly referred to as "daggers" ceased to have a cutting edge. This was a response to the deployment of heavy armour, such as maille and plate armour, against which cutting attacks were ineffective; the focus shifted to thrusts with narrow blades that could punch through mail or aim at armour plate intersections (or the eye slits of the helmet visor). The term dagger was coined in this period, as were the Early Modern German equivalents dolch (tolch) and degen (tegen). In the German school of fencing, Johannes Liechtenauer (Ms. 3227a) and his successors (specifically Andres Lignizer in Cod. 44 A 8) taught fighting with the dagger.
These techniques in some respects resemble modern knife fighting, but emphasized thrusting strokes almost exclusively, instead of slashes and cuts. When used offensively, a standard attack frequently employed the reverse or icepick grip, stabbing downward with the blade to increase thrust and penetrative force. This was done primarily because the blade point frequently had to penetrate or push apart an opponent's steel chain mail or plate armour in order to inflict an injury. The disadvantage of employing the medieval dagger in this manner was that it could easily be blocked by a variety of techniques, most notably by a block with the weaponless arm while simultaneously attacking with a weapon held in the right hand. Another disadvantage was the reduction in effective blade reach to the opponent when using a reverse grip. As the wearing of armour fell out of favor, dagger fighting techniques began to evolve which emphasized the use of the dagger with a conventional or forward grip, while the reverse or icepick grip was retained when attacking an unsuspecting opponent from behind, such as in an assassination.
Renaissance and early modern period
The dagger was very popular as a fencing and personal defense weapon in 17th- and 18th-century Spain, where it was referred to as the daga or puñal. During the Renaissance the dagger was used as part of everyday dress, and daggers were the only weapon commoners were allowed to carry on their person. In English, the terms poniard and dirk were loaned during the late 16th to early 17th century, the latter in the spellings dork and durk (presumably via Low German, Dutch or Scandinavian dolk, dolch, ultimately from a West Slavic tulich), the modern spelling dirk dating to 18th-century Scots.
Beginning in the 17th century, another form of dagger—the plug bayonet and later the socket bayonet—was used to convert muskets and other longarms into spears by mounting them on the barrel. They were periodically used for eating; the arm was also used for a variety of other tasks such as mending boots, house repairs and farm jobs. The final function of the dagger was as an obvious and ostentatious means of enhancing a man's personal apparel, conforming to fashion which dictated that all men carried them.
Modern period (19th–21st century)
World War I trench warfare caused daggers and fighting knives to come back into use. They also replaced the sabres worn by officers, which were too long and clumsy for trench warfare. They were worn with pride as a sign of having served front-line duty.
Daggers achieved public notoriety in the 20th century as ornamental uniform regalia during the Fascist dictatorships of Mussolini's Italy and Hitler's Germany. Dress daggers were used by several other countries as well, including Japan, but never to the same extent. As combat equipment they were carried by many infantry and commando forces during the Second World War. British Commando and other elite units were issued an especially slender dagger, the Fairbairn–Sykes fighting knife, developed by William E. Fairbairn and Eric A. Sykes from real-life close-combat experiences gained while serving on the Shanghai Municipal Police Force. The F-S dagger proved very popular with the commandos, who used it primarily for sentry elimination. Some units of the U.S. Marine Corps Raiders in the Pacific were issued a similar fighting dagger, the Marine Raider stiletto, though this modified design proved less than successful when used in the type of knife combat encountered in the Pacific theater due to this version using inferior materials and manufacturing techniques.
During the Vietnam War, the Gerber Mark II, designed by US Army Captain Bud Holzman and Al Mar, was a popular fighting knife pattern that was privately purchased by many U.S. soldiers and marines who served in that war.
Aside from military forces, most daggers are no longer carried openly, but concealed in clothing. One of the more popular forms of the concealable dagger is the boot knife. The boot knife is nothing more than a shortened dagger that is compact enough to be worn on the lower leg, usually by means of a sheath clipped or strapped to a boot or other footwear.
Cultural symbolism
The dagger is symbolically ambiguous. For some cultures and military organizations the dagger symbolizes courage and daring in combat.
However, daggers may be associated with deception or treachery due to the ease of concealment and surprise that the user could inflict upon an unsuspecting victim. Indeed, many assassinations have been carried out with the use of a dagger, including that of Julius Caesar. A cloak and dagger attack is one in which a deceitful, traitorous, or concealed enemy attacks a person. Some have noted a phallic association between daggers and the succession of royal dynasties in British literature.
In European artwork, daggers were sometimes associated with Hecate, the Ancient Greek goddess of witchcraft.
The social stigma of the dagger originates in its periodic use in the commission of disreputable and murderous attacks, from the 44 BC assassination of Julius Caesar to the use of the stiletto dagger by the Black Hand of early 20th century America. Consequently, it developed a public association with surprise assaults by criminals and murderers intent on stabbing unsuspecting victims. To this day, criminal codes of many nations and some US states specifically ban the carrying of the dagger as a prohibited weapon.
Modern use
The dagger is in military use as a close combat and ceremonial arm.
Many nations use the dagger pattern in the form of the bayonet. Daggers are commonly used as part of the insignias of elite military units or special forces, such as the US Army Special Operations Command, the US Army Special Forces, or the Commando Dagger patch for those who have completed the British All Arms Commando Course.
Art knives
Daggers are a popular form of what is known as the "art knife", due in part to the symmetry of the blade. One of the knives required of an American Bladesmith Society Mastersmith is an "art knife" or a "European style" dagger.
| Technology | Melee weapons | null |
9015 | https://en.wikipedia.org/wiki/DNA%20replication | DNA replication | In molecular biology, DNA replication is the biological process of producing two identical replicas of DNA from one original DNA molecule. DNA replication occurs in all living organisms, acting as the most essential part of biological inheritance. Replication is essential for cell division during growth and for the repair of damaged tissues, and it ensures that each of the new cells receives its own copy of the DNA. The cell's distinctive property of division is what makes replication of DNA essential.
DNA is made up of a double helix of two complementary strands. The double helix describes the appearance of a double-stranded DNA which is thus composed of two linear strands that run opposite to each other and twist together. During replication, these strands are separated. Each strand of the original DNA molecule then serves as a template for the production of its counterpart, a process referred to as semiconservative replication. As a result of semi-conservative replication, the new helix will be composed of an original DNA strand as well as a newly synthesized strand. Cellular proofreading and error-checking mechanisms ensure near perfect fidelity for DNA replication.
In a cell, DNA replication begins at specific locations, or origins of replication, in the genome which contains the genetic material of an organism. Unwinding of DNA at the origin and synthesis of new strands, accommodated by an enzyme known as helicase, results in replication forks growing bi-directionally from the origin. A number of proteins are associated with the replication fork to help in the initiation and continuation of DNA synthesis. Most prominently, DNA polymerase synthesizes the new strands by adding nucleotides that complement each (template) strand. DNA replication occurs during the S-stage of interphase.
DNA replication (DNA amplification) can also be performed in vitro (artificially, outside a cell). DNA polymerases isolated from cells and artificial DNA primers can be used to start DNA synthesis at known sequences in a template DNA molecule. Polymerase chain reaction (PCR), ligase chain reaction (LCR), and transcription-mediated amplification (TMA) are examples. In March 2021, researchers reported evidence suggesting that a preliminary form of transfer RNA, a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code, could have been a replicator molecule itself in the very early development of life, or abiogenesis.
DNA structure
DNA exists as a double-stranded structure, with both strands coiled together to form the characteristic double helix. Each single strand of DNA is a chain of four types of nucleotides. Nucleotides in DNA contain a deoxyribose sugar, a phosphate, and a nucleobase. The four types of nucleotide correspond to the four nucleobases adenine, cytosine, guanine, and thymine, commonly abbreviated as A, C, G, and T. Adenine and guanine are purine bases, while cytosine and thymine are pyrimidines. These nucleotides form phosphodiester bonds, creating the phosphate-deoxyribose backbone of the DNA double helix with the nucleobases pointing inward (i.e., toward the opposing strand). Nucleobases are matched between strands through hydrogen bonds to form base pairs. Adenine pairs with thymine (two hydrogen bonds), and guanine pairs with cytosine (three hydrogen bonds).
DNA strands have a directionality, and the different ends of a single strand are called the "3′ (three-prime) end" and the "5′ (five-prime) end". By convention, if the base sequence of a single strand of DNA is given, the left end of the sequence is the 5′ end, while the right end of the sequence is the 3′ end. The strands of the double helix are anti-parallel, with one being 5′ to 3′, and the opposite strand 3′ to 5′. These terms refer to the carbon atom in deoxyribose to which the next phosphate in the chain attaches. Directionality has consequences in DNA synthesis, because DNA polymerase can synthesize DNA in only one direction by adding nucleotides to the 3′ end of a DNA strand.
The pairing of complementary bases in DNA (through hydrogen bonding) means that the information contained within each strand is redundant. Phosphodiester (intra-strand) bonds are stronger than hydrogen (inter-strand) bonds. In a DNA polymer, the phosphodiester bonds connect the 5′ carbon atom of one nucleotide to the 3′ carbon atom of another nucleotide, while the hydrogen bonds stabilize the double helix across the helix axis but not in the direction of the axis. This makes it possible to separate the strands from one another. The nucleotides on a single strand can therefore be used to reconstruct the nucleotides on a newly synthesized partner strand.
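This redundancy is straightforward to express computationally: either strand fully determines its partner. The following minimal sketch applies only the standard base-pairing rules described above; it is an illustration, not a model of the enzymology:

```python
# Minimal sketch: because A-T and G-C pairing makes the two strands
# redundant, either strand can be reconstructed from the other.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand_5_to_3: str) -> str:
    """Return the partner strand, written 5' to 3'.
    The partner is antiparallel, hence the reversal."""
    return "".join(COMPLEMENT[base] for base in reversed(strand_5_to_3))

template = "ATGCGTACGTTA"
print(complementary_strand(template))  # TAACGTACGCAT
```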
DNA polymerase
DNA polymerases are a family of enzymes that carry out all forms of DNA replication. DNA polymerases in general cannot initiate synthesis of new strands but can only extend an existing DNA or RNA strand paired with a template strand. To begin synthesis, a short fragment of RNA, called a primer, must be created and paired with the template DNA strand.
DNA polymerase adds a new strand of DNA by extending the 3′ end of an existing nucleotide chain, adding new nucleotides matched to the template strand, one at a time, via the creation of phosphodiester bonds. The energy for this process of DNA polymerization comes from hydrolysis of the high-energy phosphate (phosphoanhydride) bonds between the three phosphates attached to each unincorporated base. Free bases with their attached phosphate groups are called nucleotides; in particular, bases with three attached phosphate groups are called nucleoside triphosphates. When a nucleotide is being added to a growing DNA strand, the formation of a phosphodiester bond between the proximal phosphate of the nucleotide and the growing chain is accompanied by hydrolysis of a high-energy phosphate bond, with release of the two distal phosphate groups as a pyrophosphate. Enzymatic hydrolysis of the resulting pyrophosphate into inorganic phosphate consumes a second high-energy phosphate bond and renders the reaction effectively irreversible.
In general, DNA polymerases are highly accurate, with an intrinsic error rate of less than one mistake for every 10⁷ nucleotides added. Some DNA polymerases can also delete nucleotides from the end of a developing strand in order to fix mismatched bases; this is known as proofreading. Finally, post-replication mismatch repair mechanisms monitor the DNA for errors, being capable of distinguishing mismatches in the newly synthesized DNA strand from the original strand sequence. Together, these three discrimination steps enable replication fidelity of less than one mistake for every 10⁹ nucleotides added.
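Because the error rate is per nucleotide, the expected number of errors per genome replication is just the product of the rate and the genome size. A rough illustration follows; the E. coli genome size is an assumed round figure, not taken from this article:

```python
# Rough fidelity arithmetic from the rates quoted above: an intrinsic error
# rate of ~1e-7 per nucleotide, improved to ~1e-9 after proofreading and
# mismatch repair. Genome size is an assumed round figure for E. coli.
genome_size = 4.6e6       # base pairs, approximate E. coli genome
final_error_rate = 1e-9   # per nucleotide added, after all checks

errors_per_replication = genome_size * final_error_rate
print(errors_per_replication)  # ~0.005, roughly 1 error per ~200 divisions
```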
The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 per 10⁸.
Replication process
DNA replication, like all biological polymerization processes, proceeds in three enzymatically catalyzed and coordinated steps: initiation, elongation and termination.
Initiation
For a cell to divide, it must first replicate its DNA. DNA replication is an all-or-none process; once replication begins, it proceeds to completion, and once replication is complete, it does not occur again in the same cell cycle. This is made possible by separating initiation into two steps: assembly of the pre-replication complex at origins, and its later activation.
Pre-replication complex
In late mitosis and early G1 phase, a large complex of initiator proteins assembles into the pre-replication complex at particular points in the DNA, known as "origins". In E. coli the primary initiator protein is DnaA; in yeast, it is the origin recognition complex (ORC). Sequences used by initiator proteins tend to be "AT-rich" (rich in adenine and thymine bases), because A-T base pairs have two hydrogen bonds (rather than the three formed in a C-G pair) and thus are easier to strand-separate. In eukaryotes, the origin recognition complex catalyzes the assembly of initiator proteins into the pre-replication complex. In addition, a recent report suggests that budding yeast ORC dimerizes in a cell cycle-dependent manner to control licensing. In turn, the process of ORC dimerization is mediated by a cell cycle-dependent Noc3p dimerization cycle in vivo, and this role of Noc3p is separable from its role in ribosome biogenesis: an essential Noc3p dimerization cycle mediates ORC double-hexamer formation in replication licensing. ORC and Noc3p are continuously bound to the chromatin throughout the cell cycle. Cdc6 and Cdt1 then associate with the bound origin recognition complex at the origin in order to form a larger complex necessary to load the Mcm complex onto the DNA. In eukaryotes, the Mcm complex is the helicase that will split the DNA helix at the replication forks and origins. The Mcm complex is recruited at late G1 phase and loaded by the ORC-Cdc6-Cdt1 complex onto the DNA via ATP-dependent protein remodeling. The loading of the Mcm complex onto the origin DNA marks the completion of pre-replication complex formation.
If environmental conditions are right in late G1 phase, the G1 and G1/S cyclin-Cdk complexes are activated, which stimulate expression of genes that encode components of the DNA synthetic machinery. G1/S-Cdk activation also promotes the expression and activation of S-Cdk complexes, which may play a role in activating replication origins depending on species and cell type. Control of these Cdks varies depending on cell type and stage of development. This regulation is best understood in budding yeast, where the S cyclins Clb5 and Clb6 are primarily responsible for DNA replication. Clb5,6-Cdk1 complexes directly trigger the activation of replication origins and are therefore required throughout S phase to directly activate each origin.
In a similar manner, Cdc7 is also required through S phase to activate replication origins. Cdc7 is not active throughout the cell cycle, and its activation is strictly timed to avoid premature initiation of DNA replication. In late G1, Cdc7 activity rises abruptly as a result of association with the regulatory subunit DBF4, which binds Cdc7 directly and promotes its protein kinase activity. Cdc7 has been found to be a rate-limiting regulator of origin activity. Together, the G1/S-Cdks and/or S-Cdks and Cdc7 collaborate to directly activate the replication origins, leading to initiation of DNA synthesis.
Preinitiation complex
In early S phase, S-Cdk and Cdc7 activation lead to the assembly of the preinitiation complex, a massive protein complex formed at the origin. Formation of the preinitiation complex displaces Cdc6 and Cdt1 from the origin replication complex, inactivating and disassembling the pre-replication complex. Loading the preinitiation complex onto the origin activates the Mcm helicase, causing unwinding of the DNA helix. The preinitiation complex also loads α-primase and other DNA polymerases onto the DNA.
After α-primase synthesizes the first primers, the primer-template junctions interact with the clamp loader, which loads the sliding clamp onto the DNA to begin DNA synthesis. The components of the preinitiation complex remain associated with replication forks as they move out from the origin.
Elongation
DNA polymerase synthesizes DNA only in the 5′ to 3′ direction.
All known DNA replication systems require a free 3′ hydroxyl group before synthesis can be initiated (note: the DNA template is read in 3′ to 5′ direction whereas a new strand is synthesized in the 5′ to 3′ direction—this is often confused). Four distinct mechanisms for DNA synthesis are recognized:
All cellular life forms and many DNA viruses, phages and plasmids use a primase to synthesize a short RNA primer with a free 3′ OH group which is subsequently elongated by a DNA polymerase.
The retroelements (including retroviruses) employ a transfer RNA that primes DNA replication by providing a free 3′ OH that is used for elongation by the reverse transcriptase.
In the adenoviruses and the φ29 family of bacteriophages, the 3′ OH group is provided by the side chain of an amino acid of the genome attached protein (the terminal protein) to which nucleotides are added by the DNA polymerase to form a new strand.
In the single stranded DNA viruses—a group that includes the circoviruses, the geminiviruses, the parvoviruses and others—and also the many phages and plasmids that use the rolling circle replication (RCR) mechanism, the RCR endonuclease creates a nick in the genome strand (single stranded viruses) or one of the DNA strands (plasmids). The 5′ end of the nicked strand is transferred to a tyrosine residue on the nuclease and the free 3′ OH group is then used by the DNA polymerase to synthesize the new strand.
Cellular organisms use the first of these pathways, which is the best known. In this mechanism, once the two strands are separated, primase adds RNA primers to the template strands. The leading strand receives one RNA primer while the lagging strand receives several. The leading strand is continuously extended from its primer by a DNA polymerase with high processivity, while the lagging strand is extended discontinuously from each primer, forming Okazaki fragments. RNase removes the primer RNA fragments, and a low-processivity DNA polymerase distinct from the replicative polymerase enters to fill the gaps. When this is complete, a single nick on the leading strand and several nicks on the lagging strand can be found. Ligase works to seal these nicks, completing the newly replicated DNA molecule.
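The bookkeeping of this pathway, one continuous leading strand versus primed fragments later sealed by ligase, can be mimicked with a toy string model. The sketch below is purely illustrative: the fragment length is an arbitrary toy value and none of the real enzymology is modeled:

```python
# Toy model of the first pathway: the leading strand is made continuously,
# the lagging strand as primed Okazaki fragments joined by "ligase".
# Purely illustrative; fragment size is an arbitrary toy value.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize_leading(template: str) -> str:
    # One primer, then continuous extension along the template.
    return "".join(COMPLEMENT[b] for b in template)

def synthesize_lagging(template: str, fragment_len: int = 4) -> str:
    # Repeated priming produces short Okazaki fragments...
    fragments = [
        "".join(COMPLEMENT[b] for b in template[i:i + fragment_len])
        for i in range(0, len(template), fragment_len)
    ]
    # ...which ligase then seals into one continuous strand.
    return "".join(fragments)

template = "ATGCGTACGTTA"
assert synthesize_leading(template) == synthesize_lagging(template)
print(synthesize_leading(template))  # same sequence, different bookkeeping
```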
The primase used in this process differs significantly between bacteria and archaea/eukaryotes. Bacteria use a primase belonging to the DnaG protein superfamily which contains a catalytic domain of the TOPRIM fold type. The TOPRIM fold contains an α/β core with four conserved strands in a Rossmann-like topology. This structure is also found in the catalytic domains of topoisomerase Ia, topoisomerase II, the OLD-family nucleases and DNA repair proteins related to the RecR protein.
The primase used by archaea and eukaryotes, in contrast, contains a highly derived version of the RNA recognition motif (RRM). This primase is structurally similar to many viral RNA-dependent RNA polymerases, reverse transcriptases, cyclic nucleotide generating cyclases and DNA polymerases of the A/B/Y families that are involved in DNA replication and repair. In eukaryotic replication, the primase forms a complex with Pol α.
Multiple DNA polymerases take on different roles in the DNA replication process. In E. coli, DNA Pol III is the polymerase enzyme primarily responsible for DNA replication. It assembles into a replication complex at the replication fork that exhibits extremely high processivity, remaining intact for the entire replication cycle. In contrast, DNA Pol I is the enzyme responsible for replacing RNA primers with DNA. DNA Pol I has a 5′ to 3′ exonuclease activity in addition to its polymerase activity, and uses its exonuclease activity to degrade the RNA primers ahead of it as it extends the DNA strand behind it, in a process called nick translation. Pol I is much less processive than Pol III because its primary function in DNA replication is to create many short DNA regions rather than a few very long regions.
In eukaryotes, the low-processivity enzyme Pol α helps to initiate replication because it forms a complex with primase. In eukaryotes, leading strand synthesis is thought to be conducted by Pol ε; however, this view has recently been challenged, suggesting a role for Pol δ. Primer removal is completed by Pol δ, while repair of DNA during replication is completed by Pol ε.
As DNA synthesis continues, the original DNA strands continue to unwind on each side of the bubble, forming a replication fork with two prongs. In bacteria, which have a single origin of replication on their circular chromosome, this process creates a "theta structure" (resembling the Greek letter theta: θ). In contrast, eukaryotes have longer linear chromosomes and initiate replication at multiple origins within these.
Replication fork
The replication fork is a structure that forms within the long helical DNA during DNA replication. It is produced by enzymes called helicases that break the hydrogen bonds that hold the DNA strands together in a helix. The resulting structure has two branching "prongs", each one made up of a single strand of DNA. These two strands serve as the template for the leading and lagging strands, which will be created as DNA polymerase matches complementary nucleotides to the templates; the templates may be properly referred to as the leading strand template and the lagging strand template.
DNA is read by DNA polymerase in the 3′ to 5′ direction, meaning the new strand is synthesized in the 5' to 3' direction. Since the leading and lagging strand templates are oriented in opposite directions at the replication fork, a major issue is how to achieve synthesis of new lagging strand DNA, whose direction of synthesis is opposite to the direction of the growing replication fork.
Leading strand
The leading strand is the strand of new DNA which is synthesized in the same direction as the growing replication fork. This sort of DNA replication is continuous.
Lagging strand
The lagging strand is the strand of new DNA whose direction of synthesis is opposite to the direction of the growing replication fork. Because of its orientation, replication of the lagging strand is more complicated as compared to that of the leading strand. As a consequence, the DNA polymerase on this strand is seen to "lag behind" the other strand.
The lagging strand is synthesized in short, separated segments. On the lagging strand template, a primase "reads" the template DNA and initiates synthesis of a short complementary RNA primer. A DNA polymerase extends the primed segments, forming Okazaki fragments. The RNA primers are then removed and replaced with DNA, and the fragments of DNA are joined by DNA ligase.
Dynamics at the replication fork
In all cases the helicase is composed of six polypeptides that wrap around only one strand of the DNA being replicated. The two polymerases are bound to the helicase hexamer. In eukaryotes the helicase wraps around the leading strand, and in prokaryotes it wraps around the lagging strand.
As helicase unwinds DNA at the replication fork, the DNA ahead is forced to rotate. This process results in a build-up of twists in the DNA ahead. This build-up creates a torsional load that would eventually stop the replication fork. Topoisomerases are enzymes that temporarily break the strands of DNA, relieving the tension caused by unwinding the two strands of the DNA helix; topoisomerases (including DNA gyrase) achieve this by adding negative supercoils to the DNA helix.
Bare single-stranded DNA tends to fold back on itself forming secondary structures; these structures can interfere with the movement of DNA polymerase. To prevent this, single-strand binding proteins bind to the DNA until a second strand is synthesized, preventing secondary structure formation.
Double-stranded DNA is coiled around histones that play an important role in regulating gene expression so the replicated DNA must be coiled around histones at the same places as the original DNA. To ensure this, histone chaperones disassemble the chromatin before it is replicated and replace the histones in the correct place. Some steps in this reassembly are somewhat speculative.
Clamp proteins act as a sliding clamp on DNA, allowing the DNA polymerase to bind to its template and aid in processivity. The inner face of the clamp enables DNA to be threaded through it. Once the polymerase reaches the end of the template or detects double-stranded DNA, the sliding clamp undergoes a conformational change that releases the DNA polymerase. Clamp-loading proteins are used to initially load the clamp, recognizing the junction between template and RNA primers.
DNA replication proteins
At the replication fork, many replication enzymes assemble on the DNA into a complex molecular machine called the replisome. These include the DNA helicase, primase, DNA polymerases, the sliding clamp and clamp loader, single-strand binding proteins, topoisomerases, and DNA ligase, described in the surrounding sections.
In vitro single-molecule experiments (using optical tweezers and magnetic tweezers) have found synergetic interactions between the replisome enzymes (helicase, polymerase, and single-strand DNA-binding protein) and with the DNA replication fork, enhancing DNA unwinding and DNA replication. These results have led to the development of kinetic models accounting for the synergetic interactions and their stability.
Replication machinery
Replication machineries consist of factors involved in DNA replication that appear on template ssDNAs. They include replication enzymes such as DNA polymerase, DNA helicases, DNA clamps, and DNA topoisomerases, as well as replication proteins such as single-stranded DNA binding proteins (SSB); in the replication machineries these components coordinate with one another. In most bacteria, all of the factors involved in DNA replication are located at replication forks, and the complexes stay on the forks during DNA replication. Replication machineries are also referred to as replisomes, or DNA replication systems; these are generic terms for proteins located on replication forks. In eukaryotic and some bacterial cells, discrete replisomes are not formed.
In an alternative model, DNA factories are likened to projectors, with DNA passing constantly through them like cinematic film. In the replication factory model, after the DNA helicases for the leading and lagging strands are loaded on the template DNA, the helicases run along the DNA toward each other and remain associated for the remainder of the replication process. Peter Meister et al. directly observed replication sites in budding yeast by monitoring green fluorescent protein (GFP)-tagged DNA polymerase α. They detected DNA replication at pairs of tagged loci spaced symmetrically about a replication origin and found that the distance between the pairs decreased markedly over time. This finding suggests that DNA replication proceeds in DNA factories: pairs of replication factories are loaded onto replication origins and remain associated with each other, while the template DNA moves into the factories, extruding the template ssDNA and the newly synthesized DNA. Meister's finding is the first direct evidence for the replication factory model. Subsequent research has shown that DNA helicases form dimers in many eukaryotic cells, and that bacterial replication machineries stay at a single location during DNA synthesis.
Replication factories also disentangle sister chromatids. This disentanglement is essential for distributing the chromatids into daughter cells after DNA replication. Because sister chromatids are held together by cohesin rings after DNA replication, replication itself offers the only chance for their disentanglement. Fixing the replication machineries as replication factories can improve the success rate of DNA replication: if replication forks moved freely through the chromosomes, catenation of the nuclei would be aggravated and would impede mitotic segregation.
Termination
Eukaryotes initiate DNA replication at multiple points in the chromosome, so replication forks meet and terminate at many points in the chromosome. Because eukaryotes have linear chromosomes, DNA replication is unable to reach the very end of the chromosomes; as a result, DNA is lost from the end of the chromosome in each replication cycle. Telomeres are regions of repetitive DNA close to the ends that help prevent the loss of genes due to this shortening. Shortening of the telomeres of the daughter chromosome is a normal process in somatic cells; as a result, cells can only divide a certain number of times before the DNA loss prevents further division (this is known as the Hayflick limit). Within the germ cell line, which passes DNA to the next generation, telomerase extends the repetitive sequences of the telomere region to prevent degradation. Telomerase can become mistakenly active in somatic cells, sometimes leading to cancer formation; increased telomerase activity is one of the hallmarks of cancer.
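The arithmetic of this shortening can be made concrete with a minimal sketch; the figures below (a 10 kb starting telomere, 50–100 bp lost per division, and a 2 kb threshold below which division stops) are purely illustrative assumptions, not values from this article:

```python
# Minimal sketch of replicative telomere shortening (all numbers hypothetical).
import random

def divisions_until_senescence(telomere_bp=10_000, threshold_bp=2_000):
    """Count divisions until the telomere drops below a critical length."""
    divisions = 0
    while telomere_bp > threshold_bp:
        telomere_bp -= random.randint(50, 100)  # DNA lost per replication cycle
        divisions += 1
    return divisions

print(divisions_until_senescence())  # a rough stand-in for the Hayflick limit
```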
Termination requires that the progress of the DNA replication fork must stop or be blocked. Termination at a specific locus, when it occurs, involves the interaction between two components: (1) a termination site sequence in the DNA, and (2) a protein which binds to this sequence to physically stop DNA replication. In various bacterial species, this is named the DNA replication terminus site-binding protein, or Ter protein.
Because bacteria have circular chromosomes, termination of replication occurs when the two replication forks meet each other on the opposite end of the parental chromosome. E. coli regulates this process through the use of termination sequences that, when bound by the Tus protein, enable only one direction of replication fork to pass through. As a result, the replication forks are constrained to always meet within the termination region of the chromosome.
Regulation
Eukaryotes
Within eukaryotes, DNA replication is controlled within the context of the cell cycle. As the cell grows and divides, it progresses through stages in the cell cycle; DNA replication takes place during the S phase (synthesis phase). The progress of the eukaryotic cell through the cycle is controlled by cell cycle checkpoints. Progression through checkpoints is controlled through complex interactions between various proteins, including cyclins and cyclin-dependent kinases. Unlike bacteria, eukaryotic DNA replicates in the confines of the nucleus.
The G1/S checkpoint (restriction checkpoint) regulates whether eukaryotic cells enter the process of DNA replication and subsequent division. Cells that do not proceed through this checkpoint remain in the G0 stage and do not replicate their DNA.
Once the DNA has passed the G1/S checkpoint, it can be copied only once in each cell cycle. When the Mcm complex moves away from the origin, the pre-replication complex is dismantled. Because a new Mcm complex cannot be loaded at an origin until the pre-replication subunits are reactivated, one origin of replication cannot be used twice in the same cell cycle.
Activation of S-Cdks in early S phase promotes the destruction or inhibition of individual pre-replication complex components, preventing immediate reassembly. S and M-Cdks continue to block pre-replication complex assembly even after S phase is complete, ensuring that assembly cannot occur again until all Cdk activity is reduced in late mitosis.
In budding yeast, inhibition of assembly is caused by Cdk-dependent phosphorylation of pre-replication complex components. At the onset of S phase, phosphorylation of Cdc6 by Cdk1 causes the binding of Cdc6 to the SCF ubiquitin protein ligase, which causes proteolytic destruction of Cdc6. Cdk-dependent phosphorylation of Mcm proteins promotes their export out of the nucleus along with Cdt1 during S phase, preventing the loading of new Mcm complexes at origins during a single cell cycle. Cdk phosphorylation of the origin replication complex also inhibits pre-replication complex assembly. The individual presence of any of these three mechanisms is sufficient to inhibit pre-replication complex assembly. However, mutating all three proteins in the same cell does trigger reinitiation at many origins of replication within one cell cycle.
In animal cells, the protein geminin is a key inhibitor of pre-replication complex assembly. Geminin binds Cdt1, preventing its binding to the origin recognition complex. In G1, levels of geminin are kept low by the APC, which ubiquitinates geminin to target it for degradation. When geminin is destroyed, Cdt1 is released, allowing it to function in pre-replication complex assembly. At the end of G1, the APC is inactivated, allowing geminin to accumulate and bind Cdt1.
Replication of chloroplast and mitochondrial genomes occurs independently of the cell cycle, through the process of D-loop replication.
Replication focus
In vertebrate cells, replication sites concentrate into positions called replication foci. Replication sites can be detected by immunostaining daughter strands and replication enzymes and by monitoring GFP-tagged replication factors. These methods show that replication foci of varying size and position appear during the S phase of cell division, and that their number per nucleus is far smaller than the number of genomic replication forks.
P. Heun et al. (2001) tracked GFP-tagged replication foci in budding yeast cells and found that replication origins move constantly during G1 and S phase, with this mobility decreasing significantly in S phase. Traditionally, replication sites were thought to be fixed to the spatial structure of the chromosomes by the nuclear matrix or by lamins. Heun's results contradicted this traditional view (budding yeast has no lamins) and support the idea that replication origins self-assemble to form replication foci.
The formation of replication foci is regulated by the spatially and temporally controlled firing of replication origins. D. A. Jackson et al. (1998) revealed that neighboring origins fire simultaneously in mammalian cells. Spatial juxtaposition of replication sites brings about clustering of replication forks. This clustering rescues stalled replication forks and favors the normal progress of replication forks. Progress of replication forks can be inhibited by many factors: collision with proteins or with complexes that bind strongly to the DNA, a deficiency of dNTPs, nicks in the template DNA, and so on. If a replication fork stalls and the remaining sequences beyond it are not copied, the daughter strands are left with nicked, unreplicated sites. The unreplicated sites on one parent strand hold the other strand, but not the daughter strands, together; the resulting sister chromatids therefore cannot separate from each other and cannot be divided into two daughter cells. When neighboring origins fire and a fork from one origin stalls, a fork from another origin approaches from the opposite direction and duplicates the unreplicated sites. Another rescue mechanism is the use of dormant replication origins: excess origins that do not fire during normal DNA replication.
Bacteria
Most bacteria do not go through a well-defined cell cycle but instead continuously copy their DNA; during rapid growth, this can result in the concurrent occurrence of multiple rounds of replication. In E. coli, the best-characterized bacterium, DNA replication is regulated through several mechanisms, including: the hemimethylation and sequestering of the origin sequence, the ratio of adenosine triphosphate (ATP) to adenosine diphosphate (ADP), and the levels of the protein DnaA. All of these control the binding of initiator proteins to the origin sequences.
Because E. coli methylates GATC DNA sequences, DNA synthesis results in hemimethylated sequences. This hemimethylated DNA is recognized by the protein SeqA, which binds and sequesters the origin sequence; in addition, DnaA (required for initiation of replication) binds less well to hemimethylated DNA. As a result, newly replicated origins are prevented from immediately initiating another round of DNA replication.
ATP builds up when the cell is in a rich medium, triggering DNA replication once the cell has reached a specific size. ATP competes with ADP to bind to DnaA, and the DnaA-ATP complex is able to initiate replication. A certain number of DnaA proteins are also required for DNA replication — each time the origin is copied, the number of binding sites for DnaA doubles, requiring the synthesis of more DnaA to enable another initiation of replication.
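The titration logic of that last sentence can be sketched as a toy model; the parameters below (ten DnaA binding sites per origin, one unit of DnaA-ATP synthesized per minute) are invented for illustration and are not measured values:

```python
# Toy model of DnaA titration (all numbers hypothetical): initiation fires
# only when accumulated DnaA-ATP can fill every DnaA binding site, and each
# initiation doubles the number of sites that must be filled next time.
def can_initiate(dnaA_atp: int, origins: int, sites_per_origin: int = 10) -> bool:
    return dnaA_atp >= origins * sites_per_origin

origins, dnaA_atp = 1, 0
for minute in range(60):
    dnaA_atp += 1                 # steady DnaA synthesis in a rich medium
    if can_initiate(dnaA_atp, origins):
        origins *= 2              # firing doubles the origin (and site) count
        dnaA_atp = 0              # bound DnaA is titrated away
        print(f"minute {minute}: initiation, now {origins} origins")
```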
In fast-growing bacteria, such as E. coli, chromosome replication takes more time than dividing the cell. The bacteria solve this by initiating a new round of replication before the previous one has been terminated. The new round of replication will form the chromosome of the cell that is born two generations after the dividing cell. This mechanism creates overlapping replication cycles.
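A back-of-the-envelope check shows why rounds must overlap; the C and tau values below are textbook-style illustrative numbers, not figures from this article:

```python
# Overlapping replication cycles, sketched with illustrative values: copying
# the chromosome takes C minutes, but in rich medium a new round starts every
# tau minutes, so more than one round runs on the same chromosome at once.
C = 40    # minutes to replicate the chromosome (illustrative)
tau = 20  # doubling time in rich medium (illustrative)

concurrent_rounds = -(-C // tau)  # ceiling division
print(f"about {concurrent_rounds} rounds of replication in progress at once")
```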
Problems with DNA replication
There are many events that contribute to replication stress, including:
Misincorporation of ribonucleotides
Unusual DNA structures
Conflicts between replication and transcription
Insufficiency of essential replication factors
Common fragile sites
Overexpression or constitutive activation of oncogenes
Chromatin inaccessibility
Polymerase chain reaction
Researchers commonly replicate DNA in vitro using the polymerase chain reaction (PCR). PCR uses a pair of primers to span a target region in template DNA, and then polymerizes partner strands in each direction from these primers using a thermostable DNA polymerase. Repeating this process through multiple cycles amplifies the targeted DNA region. At the start of each cycle, the mixture of template and primers is heated, separating the newly synthesized molecule and template. Then, as the mixture cools, both of these become templates for annealing of new primers, and the polymerase extends from these. As a result, the number of copies of the target region doubles each round, increasing exponentially.
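The doubling arithmetic behind this exponential growth can be sketched as follows, assuming idealized amplification; real reactions fall short of 100% per-cycle efficiency and eventually plateau as reagents are consumed:

```python
# Minimal sketch of PCR amplification under an assumed per-cycle efficiency.
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies of the target after repeated denature/anneal/extend cycles."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))        # ideal doubling: 2**30, about 1.07e9 copies
print(pcr_copies(1, 30, 0.9))   # 90% efficiency per cycle: about 2.3e8 copies
```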
| Biology and health sciences | Cell processes | null |
9061 | https://en.wikipedia.org/wiki/Dolphin | Dolphin | A dolphin is an aquatic mammal in the clade Odontoceti (toothed whale). Dolphins belong to the families Delphinidae (the oceanic dolphins), Platanistidae (the Indian river dolphins), Iniidae (the New World river dolphins), Pontoporiidae (the brackish dolphins), and the possibly extinct Lipotidae (baiji or Chinese river dolphin). There are 40 extant species named as dolphins.
Dolphins range in size from the small Maui's dolphin to the orca, the largest. Various species of dolphins exhibit sexual dimorphism, where the males are larger than the females. They have streamlined bodies and two limbs that are modified into flippers. Though not quite as flexible as seals, they are faster; some dolphins can briefly travel at high speed or leap clear of the water. Dolphins use their conical teeth to capture fast-moving prey. They have well-developed hearing which is adapted for both air and water; it is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water.
Dolphins are widespread. Most species prefer the warm waters of the tropic zones, but some, such as the right whale dolphin, prefer colder climates. Dolphins feed largely on fish and squid, but a few, such as the orca, feed on large mammals such as seals. Male dolphins typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time.
Dolphins produce a variety of vocalizations, usually in the form of clicks and whistles.
Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they also face threats from bycatch, habitat loss, and marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins are sometimes kept in captivity and trained to perform tricks. The most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 orcas in captivity.
Etymology
The name is originally from the Greek delphís, "dolphin", which was related to the Greek delphus, "womb". The animal's name can therefore be interpreted as meaning "a 'fish' with a womb". The name was transmitted via the Latin delphinus (the romanization of the later Greek δελφῖνος – delphinos), passing through Medieval Latin into the Old French daulphin, which reintroduced the ph into the word dolphin. The term mereswine ("sea pig") has also been used.
The term dolphin can be used to refer to most species in the family Delphinidae (oceanic dolphins) and the river dolphin families of Iniidae (South American river dolphins), Pontoporiidae (La Plata dolphin), Lipotidae (Yangtze river dolphin) and Platanistidae (Ganges river dolphin and Indus river dolphin). Meanwhile, the mahi-mahi fish is called the dolphinfish. In common usage, the term whale is used only for the larger cetacean species, while the smaller ones with a beaked or longer nose are considered dolphins. The name dolphin is used casually as a synonym for bottlenose dolphin, the most common and familiar species of dolphin. There are six species of dolphins commonly thought of as whales, collectively known as blackfish: the orca, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whales, all of which are classified under the family Delphinidae and qualify as dolphins. Although the terms dolphin and porpoise are sometimes used interchangeably, porpoise usually refers to the Phocoenidae family, which have a shorter beak and spade-shaped teeth and differ in their behavior.
A group of dolphins is called a school or a pod. Male dolphins are called bulls, females are called cows and young dolphins are called calves.
Hybridization
In 1933, three hybrid dolphins beached off the Irish coast; they were hybrids between Risso's and bottlenose dolphins. This mating was later repeated in captivity, producing a hybrid calf. In captivity, a bottlenose and a rough-toothed dolphin produced hybrid offspring. A common-bottlenose hybrid lives at SeaWorld California. Other dolphin hybrids live in captivity around the world or have been reported in the wild, such as a bottlenose-Atlantic spotted hybrid. The best known hybrid is the wholphin, a false killer whale-bottlenose dolphin hybrid. The wholphin is a fertile hybrid. Two wholphins currently live at the Sea Life Park in Hawaii; the first was born in 1985 from a male false killer whale and a female bottlenose. Wholphins have also been observed in the wild.
Evolution
Dolphins are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to the Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago.
The primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic by 5–10 million years later.
Archaeoceti is a parvorder comprising ancient whales. These ancient whales are the predecessors of modern whales, stretching back to the first ancestors that spent their lives near (and rarely in) the water. The archaeocetes range from nearly fully terrestrial to semi-aquatic to fully aquatic, but what defines an archaeocete is the presence of visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. Major anatomical changes include the hearing set-up that channeled vibrations from the jaw to the earbone, which occurred with Ambulocetus 49 million years ago; a streamlining of the body and the growth of flukes on the tail, which occurred around 43 million years ago with Protocetus; the migration of the nasal openings toward the top of the cranium and the modification of the forelimbs into flippers, which occurred with Basilosaurus 35 million years ago; and the shrinking and eventual disappearance of the hind limbs, which took place with the first odontocetes and mysticetes 34 million years ago. The modern dolphin skeleton has two small, rod-shaped pelvic bones thought to be vestigial hind limbs. In October 2006, an unusual bottlenose dolphin was captured in Japan; it had small fins on each side of its genital slit, which scientists believe to be an unusually pronounced development of these vestigial hind limbs.
Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 million years ago. Around 40 million years ago, a common ancestor between the two branched off into cetacea and anthracotheres; anthracotheres became extinct at the end of the Pleistocene two-and-a-half million years ago, eventually leaving only one surviving lineage: the two species of hippo.
Anatomy
Dolphins have torpedo-shaped bodies with generally non-flexible necks, limbs modified into flippers, a tail fin, and bulbous heads. Dolphin skulls have small eye orbits, long snouts, and eyes placed on the sides of the head; they lack external ear flaps. Dolphins range in size from the Maui's dolphin, the smallest, to the orca, the largest. Overall, they tend to be dwarfed by other cetartiodactyls. Several species have female-biased sexual dimorphism, with the females being larger than the males.
Dolphins have conical teeth, as opposed to porpoises' spade-shaped teeth. These conical teeth are used to catch swift prey such as fish, squid or large mammals, such as seals.
Breathing involves expelling stale air from their blowhole, in an upward blast, which may be visible in cold air, followed by inhaling fresh air into the lungs. Dolphins have rather small, unidentifiable spouts.
All dolphins have a thick layer of blubber, its thickness varying with climate. This blubber can aid buoyancy, offer some protection (predators would have a hard time getting through a thick layer of fat), and provide energy for leaner times, but its primary purpose is insulation from the harsh climate. Calves are generally born with a thin layer of blubber, which develops at different paces depending on the habitat.
Dolphins have a two-chambered stomach that is similar in structure to terrestrial carnivores. They have fundic and pyloric chambers.
Dolphins' reproductive organs are located inside the body, with genital slits on the ventral (belly) side. Males have two slits, one concealing the penis and one further behind for the anus. Females have one genital slit, housing the vagina and the anus, with a mammary slit on either side.
Integumentary system
The integumentary system is an organ system mostly consisting of skin, hair, nails and endocrine glands. The skin of dolphins is specialized to satisfy specific requirements, including protection, fat storage, heat regulation, and sensory perception. The skin of a dolphin is made up of two parts: the epidermis and the blubber, which consists of two layers including the dermis and subcutis.
The dolphin's skin is known to have a smooth, rubbery texture and is without hair and glands, except mammary glands. At birth, a newborn dolphin has hairs lined up in a single band on both sides of the rostrum (its snout), usually 16–17 cm in total length. The epidermis is characterized by the lack of keratin and by a prominent intertwining of epidermal rete pegs and long dermal papillae. The epidermal rete pegs are the epithelial extensions that project into the underlying connective tissue in both skin and mucous membranes. The dermal papillae are finger-like projections that help adhesion between the epidermal and dermal layers, as well as providing a larger surface area to nourish the epidermal layer. The thickness of a dolphin's epidermis varies, depending on species and age.
Blubber
Blubber is found within the dermis and subcutis layers. The dermis blends gradually with the adipose (fat) layer, because the fat may extend up to the epidermis border, and collagen fiber bundles extend throughout the whole subcutaneous blubber, the fat found under the skin. The thickness of the subcutaneous blubber or fat depends on the dolphin's health, development, location, reproductive state, and how well it feeds. This fat is thickest on the dolphin's back and belly. Most of the dolphin's body fat is accumulated in a thick layer of blubber. Blubber differs from fat in that, in addition to fat cells, it contains a fibrous network of connective tissue.
The blubber functions to streamline the body and to form specialized locomotor structures such as the dorsal fin, propulsive fluke blades and caudal keels. Many nerve endings that resemble small, onion-like configurations are present in the superficial portion of the dermis. Mechanoreceptors are found within the interlocks of the epidermis with the dermal ridges, and there are nerve fibers in the dermis that extend to the epidermis. These nerve endings are known to be highly proprioceptive, which explains sensory perception. Proprioception, also known as kinesthesia, is the body's ability to sense its location, movements and actions. Dolphins are sensitive to vibrations and small pressure changes. Blood vessels and nerve endings can be found within the dermis, and a plexus of parallel-running arteries and veins lies in the dorsal fin, fluke, and flippers. The blubber manipulates the blood vessels to help the dolphin stay warm. When the temperature drops, the blubber constricts the blood vessels to reduce blood flow in the dolphin. This allows the dolphin to spend less energy heating its own body, ultimately keeping the animal warmer without burning energy as quickly. To release heat, the heat must pass the blubber layer; thermal windows that lack blubber, are not fully insulated, and are somewhat thin and highly vascularized, including the dorsal fin, flukes, and flippers, are a good way for dolphins to get rid of excess heat when overheating. Additionally, to conserve heat, dolphins use countercurrent heat exchange: blood flows in different directions so that heat transfers across membranes. Heat from warm blood leaving the heart warms up the cold blood headed back to the heart from the extremities, meaning that the heart always has warm blood, and decreasing the heat lost to the water through those thermal windows.
Locomotion
Dolphins have two pectoral flippers, each containing four digits, a boneless dorsal fin for stability, and a fluke for propulsion. Although dolphins do not possess external hind limbs, some possess discrete rudimentary appendages, which may contain feet and digits. Orcas are fast swimmers in comparison to seals, which typically cruise at much lower speeds. A study of a Pacific white-sided dolphin in an aquarium found fast burst acceleration, with the individual able, in 5 strokes (2.5 fluke beats), to go from 5.0 m/s to 8.7 m/s in 0.7 seconds.
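Those burst figures imply a mean acceleration of about 5.3 m/s², a little over half of gravitational acceleration; a quick check of the arithmetic:

```python
# Mean acceleration implied by the burst-swimming figures cited above.
v0, v1, dt = 5.0, 8.7, 0.7   # initial speed (m/s), final speed (m/s), time (s)
print(f"mean acceleration = {(v1 - v0) / dt:.1f} m/s^2")  # prints 5.3
```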
The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, which means most dolphins are unable to turn their heads. River dolphins have non-fused neck vertebrae and can turn their heads up to 90°. Dolphins swim by moving their fluke and rear body vertically, while their flippers are mainly used for steering. Some species porpoise out of the water, which allows them to travel faster. Their skeletal anatomy allows them to be fast swimmers. All species have a dorsal fin to prevent themselves from involuntarily spinning in the water.
Some dolphins are adapted for diving to great depths. In addition to their streamlined bodies, some can selectively slow their heart rate to conserve oxygen. Some can also re-route blood from tissue tolerant of water pressure to the heart, brain and other organs. Their hemoglobin and myoglobin store oxygen in body tissues, and they have twice as much myoglobin as hemoglobin.
Senses
A dolphin ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In dolphins, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, dolphins receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater.
Dolphins generate sounds independently of respiration using recycled air that passes through air sacs and phonic (alternatively monkey) lips. Integral to the lips are oil-filled organs called dorsal bursae that have been suggested to be homologous to the sperm whale's spermaceti organ. High-frequency clicks pass through the sound-modifying organs of the extramandibular fat body, intramandibular fat body and the melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. This allows dolphins to use echolocation for orientation. Though most dolphins do not have hair, they do have hair follicles that may perform some sensory function. Beyond locating an object, echolocation also provides the animal with an idea on an object's shape and size, though how exactly this works is not yet understood. The small hairs on the rostrum of the boto (river dolphins of South America) are believed to function as a tactile sense, possibly to compensate for the boto's poor eyesight.
A dolphin's eye is relatively small for its body size, yet it retains a good degree of eyesight. In addition, the eyes of a dolphin are placed on the sides of its head, so their vision consists of two fields, rather than a binocular view like humans have. When dolphins surface, their lens and cornea correct the nearsightedness that results from the water's refraction of light. Their eyes contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than cone cells. They lack short-wavelength-sensitive visual pigments in their cone cells, indicating a more limited capacity for color vision than most mammals. Most dolphins have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum (eye tissue behind the retina); these adaptations allow large amounts of light to pass through the eye and, therefore, give a very clear image of the surrounding area. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea.
The olfactory lobes and nerve are absent in dolphins, suggesting that they have no sense of smell.
Dolphins are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. Some have preferences for different kinds of fish, indicating some ability to taste.
Intelligence
Dolphins are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgment, and theory of mind. Cetacean spindle neurons are found in areas of the brain that are analogous to where they are found in humans, suggesting that they perform a similar function.
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalization quotient that can be used as another indication of animal intelligence. Orcas have the second largest brain mass of any animal on earth, next to the sperm whale. The brain to body mass ratio in some is second only to humans.
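The encephalization quotient described here is the ratio of observed to expected brain mass, with the expectation taken from an allometric fit E = k · (body mass)^r. The sketch below assumes k = 0.12 and r = 2/3 (one commonly quoted mammalian fit) and hypothetical round-number masses for a bottlenose dolphin, so the output is illustrative only:

```python
# Encephalization quotient: observed brain mass over allometrically
# expected brain mass (constants and masses are illustrative assumptions).
def encephalization_quotient(brain_g: float, body_g: float,
                             k: float = 0.12, r: float = 2 / 3) -> float:
    expected_brain_g = k * body_g ** r   # allometric expectation
    return brain_g / expected_brain_g

# Hypothetical round numbers: ~1.6 kg brain, ~200 kg body.
print(f"EQ = {encephalization_quotient(1600, 200_000):.1f}")  # roughly 4
```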
Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well-defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness.
The most widely used test for self-awareness in animals is the mirror test in which a mirror is introduced to an animal, and the animal is then marked with a temporary dye. If the animal then goes to the mirror in order to view the mark, it has exhibited strong evidence of self-awareness.
Some disagree with these findings, arguing that the results of these tests are open to human interpretation and susceptible to the Clever Hans effect. This test is much less definitive than when used for primates, because primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors that are said to identify self-awareness resemble existing social behaviors, and so researchers could be misinterpreting self-awareness for social responses to another individual. The researchers counter-argue that the behaviors shown are evidence of self-awareness, as they are very different from normal responses to another individual. Whereas apes can merely touch the mark on themselves with their fingers, cetaceans show less definitive behavior of self-awareness; they can only twist and turn themselves to observe the mark.
In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time video of themselves, video of another dolphin and recorded footage. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. Some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.
Behavior
Socialization
Dolphins are highly social animals, often living in pods of up to a dozen individuals, though pod sizes and structures vary greatly between species and locations. In places with a high abundance of food, pods can merge temporarily, forming a superpod; such groupings may exceed 1,000 dolphins. Membership in pods is not rigid; interchange is common. They establish strong social bonds, and will stay with injured or ill members, helping them to breathe by bringing them to the surface if needed. This altruism does not appear to be limited to their own species. The dolphin Moko in New Zealand has been observed guiding a female pygmy sperm whale together with her calf out of shallow water where they had stranded several times. They have also been seen protecting swimmers from sharks by swimming circles around the swimmers or charging the sharks to make them go away.
Dolphins communicate using a variety of clicks, whistle-like sounds and other vocalizations. Dolphins also use nonverbal communication by means of touch and posturing.
Dolphins also display culture, something long believed to be unique to humans (and possibly other primate species). In May 2005, a discovery in Australia found Indo-Pacific bottlenose dolphins (Tursiops aduncus) teaching their young to use tools. They cover their snouts with sponges to protect them while foraging. This knowledge is mostly transferred by mothers to daughters, unlike simian primates, where knowledge is generally passed on to both sexes. Using sponges as mouth protection is a learned behavior. Another learned behavior was discovered among river dolphins in Brazil, where some male dolphins use weeds and sticks as part of a sexual display.
Forms of care-giving between fellows and even for members of different species (see Moko (dolphin)) are recorded in various species – such as trying to save weakened fellows or female pilot whales holding up dead calves for long periods.
Dolphins engage in acts of aggression towards each other. The older a male dolphin is, the more likely his body is to be covered with bite scars. Male dolphins can get into disputes over companions and females. Acts of aggression can become so intense that targeted dolphins sometimes go into exile after losing a fight.
Male bottlenose dolphins have been known to engage in infanticide. Dolphins have also been known to kill porpoises (porpicide) for reasons which are not fully understood, as porpoises generally do not share the same diet as dolphins and are therefore not competitors for food supplies. The Cornwall Wildlife Trust records about one such death a year. Possible explanations include misdirected infanticide, misdirected sexual aggression or play behaviour.
Reproduction and sexuality
Dolphin copulation happens belly to belly; though many species engage in lengthy foreplay, the actual act is usually brief, but may be repeated several times within a short timespan. The gestation period varies with species; for the small tucuxi dolphin, this period is around 11 to 12 months, while for the orca, the gestation period is around 17 months. Typically dolphins give birth to a single calf, which is, unlike most other mammals, born tail first in most cases. They usually become sexually active at a young age, even before reaching sexual maturity. The age of sexual maturity varies by species and sex.
Dolphins are known to display non-reproductive sexual behavior, engaging in masturbation, stimulation of the genital area of other individuals using the rostrum or flippers, and homosexual contact.
Various species of dolphin have been known to engage in sexual behavior including copulation with dolphins of other species, and occasionally exhibit sexual behavior towards other animals, including humans. Sexual encounters may be violent, with male bottlenose dolphins sometimes showing aggressive behavior towards both females and other males. Male dolphins may also work together and attempt to herd females in estrus, keeping the females by their side by means of both physical aggression and intimidation, to increase their chances of reproductive success.
Sleeping
Generally, dolphins sleep with only one brain hemisphere in slow-wave sleep at a time, thus maintaining enough consciousness to breathe and to watch for possible predators and other threats. Sleep stages earlier in sleep can occur simultaneously in both hemispheres.
In captivity, dolphins seemingly enter a fully asleep state where both eyes are closed and there is no response to mild external stimuli. In this case, respiration is automatic; a tail kick reflex keeps the blowhole above the water if necessary. Anesthetized dolphins initially show a tail kick reflex. Though a similar state has been observed with wild sperm whales, it is not known if dolphins in the wild reach this state. The Indus river dolphin has a sleep method that is different from that of other dolphin species. Living in water with strong currents and potentially dangerous floating debris, it must swim continuously to avoid injury. As a result, this species sleeps in very short bursts which last between 4 and 60 seconds.
Feeding
There are various feeding methods among and within species, some apparently exclusive to a single population. Fish and squid are the main food, but the false killer whale and the orca also feed on other marine mammals. Orcas on occasion also hunt whale species larger than themselves. Different species of dolphin vary widely in the number of teeth they possess. The orca usually carries 40–56 teeth, while the popular bottlenose dolphin has anywhere from 72 to 116 conical teeth and its smaller cousin the common dolphin has 188–268 teeth: the number of teeth an individual carries varies widely even within a single species. Hybrids between common and bottlenose dolphins bred in captivity had a number of teeth intermediate between those of their parents.
One common feeding method is herding, where a pod squeezes a school of fish into a small volume, known as a bait ball. Individual members then take turns plowing through the ball, feeding on the stunned fish. Corralling is a method where dolphins chase fish into shallow water to catch them more easily. Orcas and bottlenose dolphins have also been known to drive their prey onto a beach to feed on it, a behaviour known as beach or strand feeding. Some species also whack fish with their flukes, stunning them and sometimes knocking them out of the water.
Reports of cooperative human-dolphin fishing date back to the ancient Roman author and natural philosopher Pliny the Elder. A modern human-dolphin partnership currently operates in Laguna, Santa Catarina, Brazil. Here, dolphins drive fish towards fishermen waiting along the shore and signal the men to cast their nets. The dolphins' reward is the fish that escape the nets.
In Shark Bay, Australia, dolphins catch fish by trapping them in huge conch shells. In "shelling", a dolphin brings the shell to the surface and shakes it, so that fish sheltering within fall into the dolphin's mouth. From 2007 to 2018, in 5,278 encounters with dolphins, researchers observed 19 dolphins shelling 42 times. The behavior spreads mainly within generations, rather than being passed from mother to offspring.
Vocalization
Dolphins are capable of making a broad range of sounds using nasal airsacs located just below the blowhole. Roughly three categories of sounds can be identified: frequency modulated whistles, burst-pulsed sounds, and clicks. Dolphins communicate with whistle-like sounds produced by vibrating connective tissue, similar to the way human vocal cords function, and through burst-pulsed sounds, though the nature and extent of that ability is not known. The clicks are directional and are for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest. Dolphin echolocation clicks are amongst the loudest sounds made by marine animals.
Bottlenose dolphins have been found to have signature whistles, a whistle that is unique to a specific individual. These whistles are used in order for dolphins to communicate with one another by identifying an individual. It can be seen as the dolphin equivalent of a name for humans. These signature whistles are developed during a dolphin's first year; it continues to maintain the same sound throughout its lifetime. In order to obtain each individual whistle sound, dolphins undergo vocal production learning. This consists of an experience with other dolphins that modifies the signal structure of an existing whistle sound. An auditory experience influences the whistle development of each dolphin. Dolphins are able to communicate with one another by addressing another dolphin through mimicking their whistle. The signature whistle of a male bottlenose dolphin tends to be similar to that of his mother, while the signature whistle of a female bottlenose dolphin tends to be more distinguishing. Bottlenose dolphins have a strong memory when it comes to these signature whistles, as they are able to recognize the signature whistle of an individual they have not encountered for over twenty years. Research on signature whistle usage by other dolphin species is relatively limited, and the research done so far has yielded varied outcomes and inconclusive results.
Because dolphins are generally associated in groups, communication is necessary. Signal masking occurs when other, similar sounds (conspecific sounds) interfere with the original acoustic signal; in larger groups, individual whistle sounds are less prominent. Dolphins tend to travel in pods, groups that range from a few dolphins to many. Although they travel in these pods, the dolphins do not necessarily swim right next to each other; rather, they swim within the same general vicinity. To avoid losing members of a spread-out pod, dolphins whistle at higher rates, enabling the group to continue traveling together.
Jumping and playing
Dolphins frequently leap above the water surface for various reasons. When travelling, jumping can save the dolphin energy as there is less friction while in the air. This type of travel is known as porpoising. Other reasons include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites.
Dolphins show various types of playful behavior, often including objects, self-made bubble rings, other dolphins or other animals. When playing with objects or small animals, common behavior includes carrying the object or animal along using various parts of the body, passing it along to other members of the group or taking it from another member, or throwing it out of the water. Dolphins have also been observed harassing animals in other ways, for example by dragging birds underwater without showing any intent to eat them. Playful behaviour that involves another animal species with active participation of the other animal has also been observed. Playful dolphin interactions with humans are the most obvious examples, followed by those with humpback whales and dogs.
Juvenile dolphins off the coast of Western Australia have been observed chasing, capturing, and chewing on blowfish. While some reports state that the dolphins are becoming intoxicated on the tetrodotoxin in the fishes' skin, other reports have characterized this behavior as the normal curiosity and exploration of their environment in which dolphins engage.
Tail-walking
Although this behaviour is highly unusual in wild dolphins, several Indo-Pacific bottlenose dolphins (Tursiops aduncus) of the Port River, north of Adelaide, South Australia, have been observed to exhibit "tail-walking". This activity mimics a standing posture, using the tail to run backwards along the water. To perform this movement, the dolphin "forces the majority of its body vertically out of the water and maintains the position by vigorously pumping its tail".
This started in 1988, when a female named Billie, who had been observed swimming and frolicking with racehorses exercising in the Port River in the 1980s, became trapped in a polluted marina further down the coast. She was rescued and placed with several captive dolphins at a marine park, where she spent two weeks recuperating and observed the captive dolphins performing tail-walking. After being returned to the Port River, she continued to perform this trick, and another dolphin, Wave, copied her. Wave, a very active tail-walker, passed on the skill to her daughters, Ripple and Tallula.
After Billie's premature death, Wave started tail-walking much more frequently, and other dolphins in the group were observed also performing the behaviour. In 2011, up to 12 dolphins were observed tail-walking, but only females appeared to learn the skill. In October 2021, a dolphin was observed tail-walking over a number of hours.
Scientists have found the spread of this behaviour, through up to two generations, surprising, as it brings no apparent advantage and is very energy-consuming; this puzzle was taken up in a 2018 study by Mike Rossley et al.
Threats
Dolphins have few marine enemies. Some species or specific populations have none, making them apex predators. For most of the smaller species of dolphins, only a few of the larger sharks, such as the bull shark, dusky shark, tiger shark and great white shark, are a potential risk, especially for calves. Some of the larger dolphin species, especially orcas, may also prey on smaller dolphins, but this seems rare. Dolphins also suffer from a wide variety of diseases and parasites. The Cetacean morbillivirus in particular has been known to cause regional epizootics often leaving hundreds of animals of various species dead. Symptoms of infection are often a severe combination of pneumonia, encephalitis and damage to the immune system, which greatly impair the cetacean's ability to swim and stay afloat unassisted. A study at the U.S. National Marine Mammal Foundation revealed that dolphins, like humans, develop a natural form of type 2 diabetes which may lead to a better understanding of the disease and new treatments for both humans and dolphins.
Dolphins can tolerate and recover from extreme injuries such as shark bites, although the exact methods used to achieve this are not known. The healing process is rapid and even very deep wounds do not cause dolphins to hemorrhage to death. Furthermore, even gaping wounds heal in such a way that the animal's body shape is restored, and infection of such large wounds seems rare.
A study published in the journal Marine Mammal Science suggests that at least some dolphins survive shark attacks using everything from sophisticated combat moves to teaming up against the shark.
Humans
Some dolphin species are at risk of extinction, especially some river dolphin species such as the Amazon river dolphin, and the Ganges and Yangtze river dolphin, which are critically or seriously endangered. A 2006 survey found no individuals of the Yangtze river dolphin. The species now appears to be functionally extinct.
Pesticides, heavy metals, plastics, and other industrial and agricultural pollutants that do not disintegrate rapidly in the environment concentrate in predators such as dolphins. Injuries or deaths due to collisions with boats, especially their propellers, are also common.
Various fishing methods, most notably purse seine fishing for tuna and the use of drift and gill nets, unintentionally kill many dolphins. Accidental by-catch in gill nets and incidental captures in antipredator nets that protect marine fish farms are common and pose a risk for mainly local dolphin populations. In some parts of the world, such as Taiji in Japan and the Faroe Islands, dolphins are traditionally considered food and are killed in harpoon or drive hunts. Dolphin meat is high in mercury and may thus pose a health danger to humans when consumed.
Queensland's shark culling program, which has killed roughly 50,000 sharks since 1962, has also killed thousands of dolphins as bycatch. "Shark control" programs in both Queensland and New South Wales use shark nets and drum lines, which entangle and kill dolphins. Queensland's "shark control" program has killed more than 1,000 dolphins in recent years, and at least 32 dolphins have been killed in Queensland since 2014. A shark culling program in KwaZulu-Natal has killed at least 2,310 dolphins.
Dolphin safe labels attempt to reassure consumers that fish and other marine products have been caught in a dolphin-friendly way. The earliest campaigns with "dolphin safe" labels were initiated in the 1980s as a result of cooperation between marine activists and the major tuna companies, and involved decreasing incidental dolphin kills by up to 50% by changing the type of nets used to catch tuna. The dolphins are netted only while fishermen are in pursuit of smaller tuna. Albacore are not netted this way, making albacore the only truly dolphin-safe tuna.
Loud underwater noises, such as those resulting from naval sonar use, live firing exercises, and certain offshore construction projects such as wind farms, may be harmful to dolphins, increasing stress, damaging hearing, and causing decompression sickness by forcing them to surface too quickly to escape the noise.
Dolphins and other smaller cetaceans are also hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats and usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru, and Japan, the most well-known practitioner of this method. By numbers, dolphins are mostly hunted for their meat, though some end up in dolphinariums. Despite the controversial nature of the hunt resulting in international criticism, and the possible health risk that the often polluted meat causes, thousands of dolphins are caught in drive hunts each year.
Impacts of climate change
Dolphins are marine mammals with broad geographic extent, making them susceptible to climate change in various ways. The most common effect of climate change on dolphins is the increasing water temperatures across the globe. This has caused a large variety of dolphin species to experience range shifts, in which the species move from their typical geographic region to cooler waters. Another side effect of increasing water temperatures is the increase in harmful algae blooms, which has caused a mass die-off of bottlenose dolphins.
In California, the 1982–83 El Niño warming event caused the near-bottom spawning market squid to leave southern California, which caused their predator, the pilot whale, to also leave. As the market squid returned six years later, Risso's dolphins came to feed on the squid. Bottlenose dolphins expanded their range from southern to central California, and stayed even after the warming event subsided. The Pacific white-sided dolphin has had a decline in population in the southwest Gulf of California, the southern boundary of their distribution. In the 1980s they were abundant with group sizes up to 200 across the entire cool season. Then, in the 2000s, only two groups were recorded with sizes of 20 and 30, and only across the central cool season. This decline was not related to a decline of other marine mammals or prey, so it was concluded to have been caused by climate change as it occurred during a period of warming. Additionally, the Pacific white-sided dolphin had an increase in occurrence on the west coast of Canada from 1984 to 1998.
In the Mediterranean, sea surface temperatures have increased, as well as salinity, upwelling intensity, and sea levels. Because of this, prey resources have been reduced causing a steep decline in the short-beaked common dolphin Mediterranean subpopulation, which was deemed endangered in 2003. This species now only exists in the Alboran Sea, due to its high productivity, distinct ecosystem, and differing conditions from the rest of the Mediterranean.
In northwest Europe, many dolphin species have experienced range shifts from the region's typically colder waters. Warm water dolphins, like the short-beaked common dolphin and striped dolphin, have expanded north of western Britain and into the northern North Sea, even in the winter, which may displace the white-beaked and Atlantic white-sided dolphin that are in that region. The white-beaked dolphin has shown an increase in the southern North Sea since the 1960s because of this. The rough-toothed dolphin and Atlantic spotted dolphin may move to northwest Europe. In northwest Scotland, white-beaked dolphins (local to the colder waters of the North Atlantic) have decreased while common dolphins (local to warmer waters) have increased from 1992 to 2003. Additionally, Fraser's dolphin, found in tropical waters, was recorded in the UK for the first time in 1996.
River dolphins are highly affected by climate change as high evaporation rates, increased water temperatures, decreased precipitation, and increased acidification occur. River dolphins typically occur at higher densities where rivers have a low index of freshwater degradation and better water quality. Looking specifically at the Ganges river dolphin, the high evaporation rates and increased flooding on the plains may lead to more human regulation of the river, decreasing the dolphin population.
Warmer waters reduce dolphin prey, and this in turn drives further declines in dolphin populations. In the case of bottlenose dolphins, mullet populations decrease due to increasing water temperatures, which leads to a decrease in the dolphins' health and thus their population. At the Shark Bay World Heritage Area in Western Australia, the local Indo-Pacific bottlenose dolphin population declined significantly after a marine heatwave in 2011. This heatwave caused a decrease in prey, which led to a decline in dolphin reproductive rates, as female dolphins could not get enough nutrients to sustain a calf. The resulting decrease in fish populations due to warming waters has also led humans to see dolphins as fishing competitors, or even as bait: dusky dolphins are used as bait or killed off because they consume the same fish that humans catch and sell for profit. In the central Brazilian Amazon alone, approximately 600 pink river dolphins are killed each year to be used as bait.
Relationships with humans
In history and religion
Dolphins have long played a role in human culture.
In Greek myths, dolphins were seen invariably as helpers of humankind. Dolphins also seem to have been important to the Minoans, judging by artistic evidence from the ruined palace at Knossos. During the 2009 excavations of a major Mycenaean city at Iklaina, a striking fragment of a wall painting came to light, depicting a ship with three human figures and dolphins. Dolphins are common in Greek mythology, and many coins from ancient Greece have been found which feature a man, a boy or a deity riding on the back of a dolphin. The Ancient Greeks welcomed dolphins; spotting dolphins riding in a ship's wake was considered a good omen. In both ancient and later art, Cupid is often shown riding a dolphin. A dolphin rescued the poet Arion from drowning and carried him safe to land, at Cape Matapan, a promontory forming the southernmost point of the Peloponnesus. There was a temple to Poseidon and a statue of Arion riding the dolphin.
The Greeks reimagined the Phoenician god Melqart as Melikertês (Melicertes) and made him the son of Athamas and Ino. He drowned but was transfigured as the marine deity Palaemon, while his mother became Leucothea. (cf Ino.) At Corinth, he was so closely connected with the cult of Poseidon that the Isthmian Games, originally instituted in Poseidon's honor, came to be looked upon as the funeral games of Melicertes. Phalanthus was another legendary character brought safely to shore (in Italy) on the back of a dolphin, according to Pausanias.
Dionysus was once captured by Etruscan pirates who mistook him for a wealthy prince they could ransom. After the ship set sail Dionysus invoked his divine powers, causing vines to overgrow the ship where the mast and sails had been. He turned the oars into serpents, so terrifying the sailors that they jumped overboard, but Dionysus took pity on them and transformed them into dolphins so that they would spend their lives providing help for those in need. Dolphins were also the messengers of Poseidon and sometimes did errands for him as well. Dolphins were sacred to both Aphrodite and Apollo.
"Dolfin" was the name of an aristocratic family in the maritime Republic of Venice, whose most prominent member was the 13th-century Doge Giovanni Dolfin.
In Hindu mythology the Ganges river dolphin is associated with Ganga, the deity of the Ganges river. The dolphin is said to be among the creatures which heralded the goddess' descent from the heavens and her mount, the Makara, is sometimes depicted as a dolphin.
The Boto, a species of river dolphin that resides in the Amazon River, are believed to be shapeshifters, or encantados, who are capable of having children with human women.
There are comparatively few surviving myths of dolphins in Polynesian cultures, in spite of their maritime traditions and reverence of other marine animals such as sharks and seabirds; unlike these, they are more often perceived as food than as totemic symbols. Dolphins are most clearly represented in Rapa Nui Rongorongo, and in the traditions of the Caroline Islands they are depicted similarly to the Boto, being sexually active shapeshifters.
Heraldry
Dolphins are also used as symbols, for instance in heraldry. When heraldry developed in the Middle Ages, little was known about the biology of the dolphin and it was often depicted as a sort of fish. The stylised heraldic dolphin still conventionally follows this tradition, sometimes showing the dolphin skin covered with fish scales.
A well-known historical example was the coat of arms of the former province of the Dauphiné in southern France, from which were derived the arms and the title of the Dauphin of France, the heir to the former throne of France (the title literally meaning "The Dolphin of France").
Dolphins are present in the coat of arms of Anguilla and the coat of arms of Romania, and the coat of arms of Barbados has a dolphin supporter.
The coat of arms of the town of Poole, Dorset, England, first recorded in 1563, includes a dolphin, which was historically depicted in stylised heraldic form, but which since 1976 has been depicted naturalistically.
In captivity
Species
The renewed popularity of dolphins in the 1960s resulted in the appearance of many dolphinaria around the world, making dolphins accessible to the public. Criticism and animal welfare laws forced many to close, although hundreds still exist around the world. In the United States, the best known are the SeaWorld marine mammal parks.
In the Middle East the best known are Dolphin Bay at Atlantis, The Palm and the Dubai Dolphinarium.
Various species of dolphins are kept in captivity. These small cetaceans are most often kept in theme parks, such as SeaWorld, in facilities commonly known as dolphinariums. Bottlenose dolphins are the most common species kept in dolphinariums, as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Hundreds if not thousands of bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Other species kept in captivity include spotted dolphins, false killer whales, common dolphins, Commerson's dolphins, and rough-toothed dolphins, all in much lower numbers than the bottlenose dolphin. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity. An unusual and very rare hybrid dolphin, known as a wolphin, is kept at the Sea Life Park in Hawaii; it is a cross between a bottlenose dolphin and a false killer whale.
The number of orcas kept in captivity is very small, especially when compared to the number of bottlenose dolphins, with 60 captive orcas held in aquaria. The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born.
Organizations such as the Mote Marine Laboratory rescue and rehabilitate sick, wounded, stranded or orphaned dolphins while others, such as the Whale and Dolphin Conservation and Hong Kong Dolphin Conservation Society, work on dolphin conservation and welfare. India has declared the dolphin as its national aquatic animal in an attempt to protect the endangered Ganges river dolphin. The Vikramshila Gangetic Dolphin Sanctuary has been created in the Ganges river for the protection of the animals.
Controversy
There is debate over the welfare of cetaceans in captivity, and welfare can vary greatly depending on the level of care provided at a particular facility. In the United States, facilities are regularly inspected by federal agencies to ensure that a high standard of welfare is maintained. Additionally, facilities can apply to become accredited by the Association of Zoos and Aquariums (AZA), which (for accreditation) requires "the highest standards of animal care and welfare in the world" to be achieved. Facilities such as SeaWorld and the Georgia Aquarium are accredited by the AZA. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of male orcas. Captives have vastly reduced life expectancies, on average only living into their 20s, although there are examples of orcas living longer, including several over 30 years old, and two captive orcas, Corky II and Lolita, are in their mid-40s. In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behavior. Wild orcas may travel up to 160 km (100 mi) in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress.
Although dolphins generally interact well with humans, some attacks have occurred, most of them resulting in small injuries. Orcas, the largest species of dolphin, have been involved in fatal attacks on humans in captivity. The record-holder of documented fatal orca attacks is a male named Tilikum, who lived at SeaWorld from 1992 until his death in 2017. Tilikum played a role in the deaths of three people in three different incidents (1991, 1999 and 2010). Tilikum's behaviour sparked the production of the documentary Blackfish, which focuses on the consequences of keeping orcas in captivity. There are documented incidents in the wild, too, but none of them fatal.
Fatal attacks from other species are less common, but there is a registered occurrence off the coast of Brazil in 1994, when a man died after being attacked by a bottlenose dolphin named Tião. Tião had suffered harassment by human visitors, including attempts to stick ice cream sticks down his blowhole. Non-fatal incidents occur more frequently, both in the wild and in captivity.
While dolphin attacks occur far less frequently than attacks by other sea animals, such as sharks, some scientists are worried about the careless programs of human-dolphin interaction. Dr. Andrew J. Read, a biologist at the Duke University Marine Laboratory who studies dolphin attacks, points out that dolphins are large and wild predators, so people should be more careful when they interact with them.
Several scientists who have researched dolphin behaviour have proposed that dolphins' unusually high intelligence in comparison to other animals means that dolphins should be seen as non-human persons who should have their own specific rights and that it is morally unacceptable to keep them captive for entertainment purposes or to kill them either intentionally for consumption or unintentionally as by-catch. Four countries – Chile, Costa Rica, Hungary, and India – have declared dolphins to be "non-human persons" and have banned the capture and import of live dolphins for entertainment.
Military
A number of militaries have employed dolphins for various purposes from finding mines to rescuing lost or trapped humans. The military use of dolphins drew scrutiny during the Vietnam War, when rumors circulated that the United States Navy was training dolphins to kill Vietnamese divers. The United States Navy denies that at any point dolphins were trained for combat. Dolphins are still being trained by the United States Navy for other tasks as part of the U.S. Navy Marine Mammal Program. The Russian military is believed to have closed its marine mammal program in the early 1990s. In 2000 the press reported that dolphins trained to kill by the Soviet Navy had been sold to Iran.
The military is also interested in disguising underwater communications as artificial dolphin clicks.
Therapy
Dolphins are an increasingly popular choice of animal-assisted therapy for psychological problems and developmental disabilities. For example, a 2005 study found dolphins an effective treatment for mild to moderate depression. This study was criticized on several grounds, including a lack of knowledge on whether dolphins are more effective than common pets. Reviews of this and other published dolphin-assisted therapy (DAT) studies have found important methodological flaws and have concluded that there is no compelling scientific evidence that DAT is a legitimate therapy or that it affords more than fleeting mood improvement.
Consumption
Cuisine
In some parts of the world, such as Taiji, Japan and the Faroe Islands, dolphins are traditionally considered as food, and are killed in harpoon or drive hunts.
Dolphin meat is consumed in a small number of countries worldwide, which include Japan and Peru (where it is referred to as chancho marino, or "sea pork"). While Japan may be the best-known and most controversial example, only a very small minority of the population has ever sampled it.
Dolphin meat is dense and such a dark shade of red as to appear black. Fat is located in a layer of blubber between the meat and the skin. When dolphin meat is eaten in Japan, it is often cut into thin strips and eaten raw as sashimi, garnished with onion and either horseradish or grated garlic, much as with sashimi of whale or horse meat (basashi). When cooked, dolphin meat is cut into bite-size cubes and then batter-fried or simmered in a miso sauce with vegetables. Cooked dolphin meat has a flavor very similar to beef liver.
Health concerns
There have been human health concerns associated with the consumption of dolphin meat in Japan after tests showed that dolphin meat contained high levels of mercury. There are no known cases of mercury poisoning as a result of consuming dolphin meat, though the government continues to monitor people in areas where dolphin meat consumption is high. The Japanese government recommends that children and pregnant women avoid eating dolphin meat on a regular basis.
Similar concerns exist with the consumption of dolphin meat in the Faroe Islands, where prenatal exposure to methylmercury and PCBs primarily from the consumption of pilot whale meat has resulted in neuropsychological deficits amongst children.
| Biology and health sciences | Cetaceans | null |
9087 | https://en.wikipedia.org/wiki/Dynamical%20system | Dynamical system | In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized.
The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
Overview
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
History
Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.
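As a rough numerical illustration (the parameter r = 3.83 lies inside the well-known period-3 window of the logistic map; the value is an illustrative choice), iterating into the attracting 3-cycle exhibits a period-3 point, and Sharkovsky's theorem then guarantees periodic points of every other period for this map:

```python
# Sketch: find an attracting period-3 orbit of the logistic map
# f(x) = r*x*(1 - x) at r = 3.83, inside the period-3 window.
# A period-3 point forces, by Sharkovsky's theorem, periodic points
# of every other period.

def logistic(x, r=3.83):
    return r * x * (1.0 - x)

x = 0.5
for _ in range(10_000):          # discard the transient
    x = logistic(x)

orbit = []
for _ in range(3):               # read off one full cycle
    orbit.append(x)
    x = logistic(x)

print("period-3 orbit:", [round(p, 6) for p in orbit])
print("returns to start:", abs(x - orbit[0]) < 1e-6)
```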
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.
Formal definition
In the most general sense,
a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function

$\Phi : U \subseteq (T \times X) \to X$

with

$\mathrm{proj}_2(U) = X$ (where $\mathrm{proj}_2$ is the 2nd projection map)

and for any x in X:

$\Phi(0, x) = x$
$\Phi(t_2, \Phi(t_1, x)) = \Phi(t_1 + t_2, x)$

for $t_1,\, t_1 + t_2 \in I(x)$ and $t_2 \in I(\Phi(t_1, x))$, where we have defined the set $I(x) := \{ t \in T : (t, x) \in U \}$ for any x in X.

In particular, in the case that $U = T \times X$ we have for every x in X that $I(x) = T$ and thus that Φ defines a monoid action of T on X.
The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.
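As a minimal concrete sketch of these axioms, take T to be the non-negative integers under addition and let Φ(t, x) be the t-fold iterate of a map f (the particular f below is an arbitrary illustrative choice); the monoid-action identities can then be checked directly:

```python
# Minimal sketch of the formal definition with T = (N, +), X = [0, 1],
# and evolution function Phi(t, x) = f^t(x) given by iterating a map f.

def f(x):
    return 3.5 * x * (1.0 - x)    # one time step (illustrative map)

def phi(t, x):
    """Evolution function: apply f exactly t times."""
    for _ in range(t):
        x = f(x)
    return x

x0 = 0.2
assert phi(0, x0) == x0                      # identity: Phi(0, x) = x
assert phi(5, phi(7, x0)) == phi(12, x0)     # Phi(t2, Phi(t1, x)) = Phi(t1 + t2, x)
print("monoid-action axioms hold for this discrete system")
```

The second assertion holds exactly (not just approximately) because both sides apply the same twelve floating-point operations in the same order.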
We often write

$\Phi_x(t) \equiv \Phi(t, x)$

if we take one of the variables as constant. The function

$\Phi_x : I(x) \to X$

is called the flow through x and its graph is called the trajectory through x. The set

$\gamma_x \equiv \{ \Phi(t, x) : t \in I(x) \}$

is called the orbit through x.
The orbit through x is the image of the flow through x.
A subset S of the state space X is called Φ-invariant if for all x in S and all t in T

$\Phi(t, x) \in S.$

Thus, in particular, if S is Φ-invariant, $I(x) = T$ for all x in S. That is, the flow through x must be defined for all time for every element of S.
More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.
Geometrical definition
In the geometrical definition, a dynamical system is the tuple $\langle \mathcal{T}, \mathcal{M}, f \rangle$. $\mathcal{T}$ is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. $\mathcal{M}$ is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule $t \to f^t$ (with $t \in \mathcal{T}$) such that $f^t$ is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain $\mathcal{T}$ into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain $\mathcal{T}$.
Real dynamical system
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
Discrete dynamical system
A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.
Cellular automaton
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such, cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
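A minimal sketch of a one-dimensional cellular automaton as a dynamical system (the rule number, lattice width and boundary condition are arbitrary illustrative choices):

```python
# A cellular automaton as a dynamical system: the state is a row of
# cells over {0, 1}, and the evolution rule updates every cell from its
# local neighborhood. Rule 30 is used purely as an illustration.

RULE = 30
WIDTH = 31

def step(cells):
    """Apply the locally defined evolution rule once (periodic boundary)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * WIDTH
state[WIDTH // 2] = 1            # single live cell in the middle
for _ in range(15):              # successive rows form the "time" lattice
    print("".join("#" if c else "." for c in state))
    state = step(state)
```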
Multidimensional generalization
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
Compactification of a dynamical system
Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).
In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected.
Measure theoretical definition
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has $\Phi^{-1}\sigma \in \Sigma$. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has $\mu(\Phi^{-1}\sigma) = \mu(\sigma)$. Combining the above, a map Φ is said to be a measure-preserving transformation of X, if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates $\Phi^n = \Phi \circ \Phi \circ \dots \circ \Phi$ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
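As a rough numerical illustration of measure preservation, the doubling map $T(x) = 2x \bmod 1$ on the unit interval preserves Lebesgue measure; a minimal Monte Carlo check (the test interval is arbitrary):

```python
# Numerical sketch of measure preservation for the doubling map
# T(x) = 2x mod 1 on [0, 1): the preimage of an interval (a, b)
# consists of two intervals of total length b - a.

import random

a, b = 0.2, 0.5
samples = [random.random() for _ in range(200_000)]

# mu(T^{-1}((a, b))) estimated as the fraction of points x with T(x) in (a, b)
hits = sum(1 for x in samples if a < (2 * x) % 1.0 < b)
print("mu((a, b))       =", b - a)
print("mu(T^-1((a, b))) ~", hits / len(samples))   # should be close to b - a
```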
Relation to geometric definition
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
Construction of dynamical systems
The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:

$\dot{x} = v(t, x), \qquad x|_{t=0} = x_0,$

where

$\dot{x}$ represents the velocity of the material point x
M is a finite dimensional manifold
v: T × M → TM is a vector field in $\mathbb{R}^n$ or $\mathbb{C}^n$ and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called
autonomous, when v(t, x) = v(x)
homogeneous when v(t, 0) = 0 for all t
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above

$x(t) = \Phi(t, x_0).$
The dynamical system is then (T, M, Φ).
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy

$\dot{x} - v(t, x) = 0 \quad \Leftrightarrow \quad \mathcal{G}(t, \Phi(t, x_0)) = 0,$

where $\mathcal{G}$ is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.
Examples
Arnold's cat map
Baker's map is an example of a chaotic piecewise linear map
Billiards and outer billiards
Bouncing ball dynamics
Circle map
Complex quadratic polynomial
Double pendulum
Dyadic transformation
Hénon map
Irrational rotation
Kaplan–Yorke map
List of chaotic maps
Lorenz system
Quadratic map simulation system
Rössler map
Swinging Atwood's machine
Tent map
Linear dynamical systems
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
Flows
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,

$\dot{x} = v(x) = Ax + b,$

with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b:

$\Phi^t(x_0) = x_0 + bt.$
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,

$\Phi^t(x_0) = e^{tA} x_0.$
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
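A short numerical sketch of such a linear flow (the matrix A is an arbitrary stable example; scipy.linalg.expm computes the matrix exponential):

```python
# Sketch of a linear flow: for x' = A x the evolution is
# Phi^t(x0) = exp(tA) x0. The matrix below is a decaying rotation.

import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, -2.0],
              [ 2.0, -0.5]])     # eigenvalues -0.5 +/- 2i
x0 = np.array([1.0, 0.0])

for t in [0.0, 0.5, 1.0, 2.0]:
    x_t = expm(t * A) @ x0       # Phi^t(x0)
    print(f"t={t:3.1f}  x(t)={x_t}")

# The eigenvalues of A decide convergence or divergence at the origin:
print("eigenvalues:", np.linalg.eigvals(A))
```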
Maps
A discrete-time, affine dynamical system has the form of a matrix difference equation:

$x_{n+1} = A x_n + b,$

with A a matrix and b a vector. As in the continuous case, the change of coordinates $x \to x + (1 - A)^{-1} b$ removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system $A^n x_0$.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if $u_1$ is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along $\alpha u_1$, with $\alpha \in \mathbb{R}$, is an invariant curve of the map. Points on this straight line run into the fixed point.
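A minimal numerical sketch of an affine map, its fixed point, and convergence along the eigendirections (the matrix and vector are arbitrary choices with all eigenvalues inside the unit circle):

```python
# Sketch of an affine map x_{n+1} = A x_n + b: the change of coordinates
# x -> x + (I - A)^{-1} b moves the fixed point to the origin.

import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])       # eigenvalues 0.5 and 0.8
b = np.array([1.0, -0.5])

x_star = np.linalg.solve(np.eye(2) - A, b)   # fixed point: x* = (I - A)^{-1} b
print("fixed point:", x_star)

x = np.array([10.0, 10.0])
for n in range(50):
    x = A @ x + b                            # iterate the map
print("after 50 iterations:", x)             # close to x*, since all |eigenvalues| < 1
```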
There are also many other discrete dynamical systems.
Local dynamics
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
Rectification
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
Near periodic orbits
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x2), so a change of coordinates h can only be expected to simplify F to its linear part
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If $\lambda_1, \dots, \lambda_\nu$ are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form $\lambda_i$ – Σ (multiples of other eigenvalues) occur in the denominators of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
Conjugation results
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
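A small sketch of the eigenvalue test behind this classification (the Jacobian J is an arbitrary illustrative example):

```python
# Classify a fixed point of a map by the eigenvalues of the Jacobian J:
# hyperbolic if no eigenvalue lies on the unit circle.

import numpy as np

J = np.array([[1.3, 0.2],
              [0.1, 0.6]])       # illustrative Jacobian at a fixed point

moduli = np.abs(np.linalg.eigvals(J))
print("eigenvalue moduli:", moduli)

if np.any(np.isclose(moduli, 1.0)):
    print("non-hyperbolic: the linearization is inconclusive")
elif moduli.max() < 1:
    print("hyperbolic sink: nearby orbits converge to the fixed point")
elif moduli.min() > 1:
    print("hyperbolic source: nearby orbits diverge")
else:
    print("hyperbolic saddle: stable and unstable directions coexist")
```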
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
Bifurcation theory
When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
Ergodic systems
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points $\Phi^t(A)$ and invariance of the phase space means that

$\mathrm{vol}(A) = \mathrm{vol}(\Phi^t(A)).$
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φ t. This introduces an operator $U^t$, the transfer operator,

$(U^t a)(x) = a(\Phi^{-t}(x)).$
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U.
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
Nonlinear dynamical systems and chaos
Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
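A minimal demonstration of how chaos appears in an almost trivial quadratic system: two nearby initial conditions of the logistic map at r = 4 (a standard chaotic parameter; the initial separation is an arbitrary small number) separate rapidly:

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4:
# an initial separation of 1e-10 grows to order one within a few dozen steps.

def f(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10          # two nearly identical starting points
for n in range(61):
    if n % 10 == 0:
        print(f"n={n:2d}  |x - y| = {abs(x - y):.3e}")
    x, y = f(x), f(y)
```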
Solutions of finite duration
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, according to the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation:

$y' = -\operatorname{sgn}(y)\sqrt{|y|}, \qquad y(0) = 1$

admits the finite duration solution:

$y(t) = \frac{1}{4}\left(1 - \frac{t}{2} + \left|1 - \frac{t}{2}\right|\right)^2,$

which is zero for $t \geq 2$ and is not Lipschitz continuous at its ending time $t = 2$.
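A quick numerical sanity check of this example (a finite-difference approximation of y′ rather than a proof):

```python
# Check that y(t) = (1/4)*(1 - t/2 + |1 - t/2|)**2 satisfies
# y' = -sgn(y)*sqrt(|y|) with y(0) = 1, and vanishes for t >= 2.

import math

def y(t):
    return 0.25 * (1 - t / 2 + abs(1 - t / 2)) ** 2

def rhs(t):
    v = y(t)
    return -math.copysign(1.0, v) * math.sqrt(abs(v)) if v != 0 else 0.0

h = 1e-6
for t in [0.0, 0.5, 1.0, 1.5, 1.9]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)     # central difference
    print(f"t={t:3.1f}  y'={dydt:+.6f}  rhs={rhs(t):+.6f}")
print("y(2) =", y(2.0), "  y(3) =", y(3.0))    # zero at and after the ending time
```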
| Mathematics | Other | null |
9101 | https://en.wikipedia.org/wiki/Device%20driver | Device driver | In the context of an operating system, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.
A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device (drives it). Once the device sends data back to the driver, the driver may invoke routines in the original calling program.
Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.
Purpose
The main purpose of device drivers is to provide abstraction by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write higher-level application code independently of whatever specific hardware the end-user is using.
For example, a high-level application for interacting with a serial port may simply have two functions for "send data" and "receive data". At a lower level, a device driver implementing these functions would communicate to the particular serial port controller installed on a user's computer. The commands needed to control a 16550 UART are much different from the commands needed to control an FTDI serial port converter, but each hardware-specific device driver abstracts these details into the same (or similar) software interface.
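As a rough sketch of this abstraction in Python (all class and method names are invented for illustration; real serial drivers are typically written in C against kernel interfaces):

```python
# Illustrative sketch of driver abstraction: applications see one
# send/receive interface while each backend hides hardware-specific
# command details.

from abc import ABC, abstractmethod

class SerialDriver(ABC):
    """The uniform software interface exposed to applications."""

    @abstractmethod
    def send(self, data: bytes) -> None: ...

    @abstractmethod
    def receive(self, max_bytes: int) -> bytes: ...

class Uart16550Driver(SerialDriver):
    def send(self, data: bytes) -> None:
        # Would program 16550-specific registers (FIFO, divisor latch, ...)
        print(f"16550: wrote {len(data)} bytes via FIFO registers")

    def receive(self, max_bytes: int) -> bytes:
        print("16550: read from receive buffer register")
        return b""

class FtdiDriver(SerialDriver):
    def send(self, data: bytes) -> None:
        # Would issue USB bulk transfers to the FTDI converter
        print(f"FTDI: sent {len(data)} bytes over USB bulk endpoint")

    def receive(self, max_bytes: int) -> bytes:
        print("FTDI: USB bulk read")
        return b""

def application(port: SerialDriver):
    # The application is written once, against the abstract interface.
    port.send(b"hello")
    port.receive(64)

application(Uart16550Driver())
application(FtdiDriver())
```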
Development
Writing a device driver requires an in-depth understanding of how the hardware and software of a given platform function. Because drivers require low-level access to hardware functions in order to operate, drivers typically operate in a highly privileged environment and can cause system operational issues if something goes wrong. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system. Even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it more difficult and dangerous to diagnose problems.
The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies. This is because they have better information than most outsiders about the design of their hardware. Moreover, it was traditionally considered in the hardware manufacturer's interest to guarantee that their clients can use their hardware in an optimal way. Typically, the Logical Device Driver (LDD) is written by the operating system vendor, while the Physical Device Driver (PDD) is implemented by the device vendor. However, in recent years, non-vendors have written numerous device drivers for proprietary devices, mainly for use with free and open source operating systems. In such cases, it is important that the hardware manufacturer provide information on how the device communicates. Although this information can instead be learned by reverse engineering, this is much more difficult with hardware than it is with software.
Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, called Windows Driver Frameworks (WDF). This includes User-Mode Driver Framework (UMDF) that encourages development of certain types of drivers—primarily those that implement a message-based protocol for communicating with their devices—as user-mode drivers. If such drivers malfunction, they do not cause system instability. The Kernel-Mode Driver Framework (KMDF) model continues to allow development of kernel-mode device drivers but attempts to provide standard implementations of functions that are known to cause problems, including cancellation of I/O operations, power management, and plug-and-play device support.
Apple has an open-source framework for developing drivers on macOS, called I/O Kit.
In Linux environments, programmers can build device drivers as parts of the kernel, separately as loadable modules, or as user-mode drivers (for certain types of devices where kernel interfaces exist, such as for USB devices). Makedev includes a list of the devices in Linux, including ttyS (terminal), lp (parallel port), hd (disk), loop, and sound (these include mixer, sequencer, dsp, and audio).
Microsoft Windows .sys files and Linux .ko files can contain loadable device drivers. The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory.
Privilege levels
Depending on the operating system, device drivers may be permitted to run at various different privilege levels. The choice of which level of privilege the drivers are in is largely decided by the type of kernel an operating system uses. An operating system that uses a monolithic kernel, such as the Linux kernel, will typically run device drivers with the same privilege as all other kernel objects. By contrast, a system designed around microkernel, such as Minix, will place drivers as processes independent from the kernel but that use it for essential input-output functionalities and to pass messages between user programs and each other.
On Windows NT, a system with a hybrid kernel, it is common for device drivers to run in either kernel-mode or user-mode.
The most common mechanism for segregating memory into various privilege levels is via protection rings. On many systems, such as those with x86 and ARM processors, switching between rings imposes a performance penalty, a factor that operating system developers and embedded software engineers consider when creating drivers for devices which are preferred to be run with low latency, such as network interface cards. The primary benefit of running a driver in user mode is improved stability since a poorly written user-mode device driver cannot crash the system by overwriting kernel memory.
Applications
Because of the diversity of hardware and operating systems, drivers operate in many different environments. Drivers may interface with:
Printers
Video adapters
Network cards
Sound cards
PC chipsets
Power and battery management
Local buses of various sorts—in particular, for bus mastering on modern systems
Low-bandwidth I/O buses of various sorts (for pointing devices such as mice, keyboards, etc.)
Computer storage devices such as hard disk, CD-ROM, and floppy disk buses (ATA, SATA, SCSI, SAS)
Implementing support for different file systems
Image scanners
Digital cameras
Digital terrestrial television tuners
Radio frequency communication transceiver adapters for wireless personal area networks, as used for short-distance, low-rate wireless communication in home automation (such as Bluetooth Low Energy (BLE), Thread, Zigbee, and Z-Wave).
IrDA adapters
Common levels of abstraction for device drivers include:
For hardware:
Interfacing directly
Writing to or reading from a device control register
Using some higher-level interface (e.g. Video BIOS)
Using another lower-level device driver (e.g. file system drivers using disk drivers)
Simulating work with hardware, while doing something entirely different
For software:
Allowing the operating system direct access to hardware resources
Implementing only primitives
Implementing an interface for non-driver software (e.g. TWAIN)
Implementing a language, sometimes quite high-level (e.g. PostScript)
So choosing and installing the correct device drivers for given hardware is often a key component of computer system configuration.
Virtual device drivers
Virtual device drivers represent a particular variant of device drivers. They are used to emulate a hardware device, particularly in virtualization environments, for example when a DOS program is run on a Microsoft Windows computer or when a guest operating system is run on, for example, a Xen host. Instead of enabling the guest operating system to dialogue with hardware, virtual device drivers take the opposite role and emulate a piece of hardware, so that the guest operating system and its drivers running inside a virtual machine can have the illusion of accessing real hardware. Attempts by the guest operating system to access the hardware are routed to the virtual device driver in the host operating system as, e.g., function calls. The virtual device driver can also send simulated processor-level events like interrupts into the virtual machine.
Virtual devices may also operate in a non-virtualized environment. For example, a virtual network adapter is used with a virtual private network, while a virtual disk device is used with iSCSI. A good example for virtual device drivers can be Daemon Tools.
There are several variants of virtual device drivers, such as VxDs, VLMs, and VDDs.
Open source drivers
Graphics device driver
Printers: CUPS
RAIDs: CCISS (Compaq Command Interface for SCSI-3 Support)
Scanners: SANE
Video: Vidix, Direct Rendering Infrastructure
Solaris descriptions of commonly used device drivers:
fas: Fast/wide SCSI controller
hme: Fast (10/100 Mbit/s) Ethernet
isp: Differential SCSI controllers and the SunSwift card
glm: (Gigabaud Link Module) UltraSCSI controllers
scsi: Small Computer System Interface (SCSI) devices
sf: soc+ or socal Fibre Channel Arbitrated Loop (FCAL)
soc: SPARC Storage Array (SSA) controllers and the control device
socal: Serial optical controllers for FCAL (soc+)
APIs
Windows Display Driver Model (WDDM) – the graphic display driver architecture for Windows Vista and later.
Unified Audio Model (UAM)
Windows Driver Foundation (WDF)
Declarative Componentized Hardware (DCH) – Universal Windows Platform driver
Windows Driver Model (WDM)
Network Driver Interface Specification (NDIS) – a standard network card driver API
Advanced Linux Sound Architecture (ALSA) – the standard Linux sound-driver interface
Scanner Access Now Easy (SANE) – a public-domain interface to raster-image scanner-hardware
Installable File System (IFS) – a filesystem API for IBM OS/2 and Microsoft Windows NT
Open Data-Link Interface (ODI) – network card API similar to NDIS
Uniform Driver Interface (UDI) – a cross-platform driver interface project
Dynax Driver Framework (dxd) – C++ open source cross-platform driver framework for KMDF and IOKit
Identifiers
A device on the PCI bus or USB is identified by two IDs which consist of two bytes each. The vendor ID identifies the vendor of the device. The device ID identifies a specific device from that manufacturer/vendor.
A PCI device often has an ID pair for the main chip of the device, and also a subsystem ID pair that identifies the vendor, which may be different from the chip manufacturer.
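On Linux, these identifiers are exposed through sysfs and can be read without special tooling; a minimal sketch (the paths follow the standard sysfs layout, so the script produces output only on a Linux system):

```python
# List the vendor/device ID pairs of PCI devices via sysfs; each ID is a
# 16-bit (two-byte) value, printed by the kernel in hexadecimal.

from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()            # e.g. 0x8086
    device = (dev / "device").read_text().strip()
    sub_v = (dev / "subsystem_vendor").read_text().strip()   # subsystem ID pair
    sub_d = (dev / "subsystem_device").read_text().strip()
    print(f"{dev.name}: {vendor}:{device} (subsystem {sub_v}:{sub_d})")
```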
Security
Computers often have many diverse and customized device drivers running in their operating system (OS) kernel, which often contain various bugs and vulnerabilities, making them a target for exploits. Bring Your Own Vulnerable Driver (BYOVD) attacks use signed, outdated drivers that contain flaws allowing attackers to insert malicious code into the kernel.
Drivers that may be vulnerable include those for WiFi and Bluetooth, gaming/graphics drivers, and drivers for printers.
There is a lack of effective kernel vulnerability detection tools, especially for closed-source OSes such as Microsoft Windows where the source code of the device drivers is mostly not public (open source) and drivers often have many privileges.
A group of security researchers considers the lack of isolation as one of the main factors undermining kernel security, and published an isolation framework to protect operating system kernels, primarily the monolithic Linux kernel whose drivers they say get ~80,000 commits per year.
| Technology | Computer hardware | null |
9109 | https://en.wikipedia.org/wiki/Diophantine%20equation | Diophantine equation | In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents.
Diophantine problems have fewer equations than unknowns and involve finding integers that solve simultaneously all equations. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century.
Examples
In the following Diophantine equations, $w$, $x$, $y$, and $z$ are the unknowns and the other letters are given constants:
$ax + by = c$: this is a linear Diophantine equation.
$w^3 + x^3 = y^3 + z^3$: the smallest nontrivial solution in positive integers is $12^3 + 1^3 = 9^3 + 10^3 = 1729$.
$x^n + y^n = z^n$: for $n = 2$ there are infinitely many solutions (the Pythagorean triples); for larger integer values of $n$, Fermat's Last Theorem states there are no positive integer solutions.
$x^2 - ny^2 = \pm 1$: this is Pell's equation.
$\frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$: the Erdős–Straus conjecture states that, for every positive integer $n \ge 2$, there exists a solution in positive integers $x$, $y$, and $z$.
Linear Diophantine equations
One equation
The simplest linear Diophantine equation takes the form
$ax + by = c,$
where $a$, $b$ and $c$ are given integers. The solutions are described by the following theorem:
This Diophantine equation has a solution (where $x$ and $y$ are integers) if and only if $c$ is a multiple of the greatest common divisor of $a$ and $b$. Moreover, if $(x, y)$ is a solution, then the other solutions have the form $(x + kv, y - ku)$, where $k$ is an arbitrary integer, and $u$ and $v$ are the quotients of $a$ and $b$ (respectively) by the greatest common divisor of $a$ and $b$.
Proof: If $d$ is this greatest common divisor, Bézout's identity asserts the existence of integers $e$ and $f$ such that $ae + bf = d$. If $c$ is a multiple of $d$, then $c = dh$ for some integer $h$, and $(eh, fh)$ is a solution. On the other hand, for every pair of integers $x$ and $y$, the greatest common divisor $d$ of $a$ and $b$ divides $ax + by$. Thus, if the equation has a solution, then $c$ must be a multiple of $d$. If $a = ud$ and $b = vd$, then for every solution $(x, y)$, we have
$a(x + kv) + b(y - ku) = ax + by + k(av - bu) = ax + by + k(udv - vdu) = ax + by,$
showing that $(x + kv, y - ku)$ is another solution. Finally, given two solutions such that
$ax_1 + by_1 = ax_2 + by_2 = c,$
one deduces that
$u(x_2 - x_1) + v(y_2 - y_1) = 0.$
As $u$ and $v$ are coprime, Euclid's lemma shows that $v$ divides $x_2 - x_1$, and thus that there exists an integer $k$ such that both
$x_2 - x_1 = kv, \quad y_2 - y_1 = -ku.$
Therefore,
$x_2 = x_1 + kv, \quad y_2 = y_1 - ku,$
which completes the proof.
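The constructive part of this proof translates directly into an algorithm. The following Python sketch (an illustration, not from the article) solves $ax + by = c$ via the extended Euclidean algorithm, returning one particular solution whenever $c$ is a multiple of $\gcd(a, b)$:

```python
# Solve a*x + b*y = c over the integers, following the proof above:
# Bezout's identity gives a*e + b*f = d, and scaling by h = c // d
# yields the particular solution (e*h, f*h).

def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (d, e, f) with d = gcd(a, b) and a*e + b*f = d."""
    if b == 0:
        return a, 1, 0
    d, e, f = extended_gcd(b, a % b)
    return d, f, e - (a // b) * f

def solve_linear_diophantine(a: int, b: int, c: int):
    d, e, f = extended_gcd(a, b)
    if c % d != 0:
        return None                      # no integer solutions
    h = c // d
    return e * h, f * h                  # one particular solution

# Example: 6x + 10y = 14 is solvable because gcd(6, 10) = 2 divides 14.
x, y = solve_linear_diophantine(6, 10, 14)
assert 6 * x + 10 * y == 14
```

All other solutions then follow from the theorem above by adding $(kv, -ku)$ for an arbitrary integer $k$.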
Chinese remainder theorem
The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let $n_1, \dots, n_k$ be $k$ pairwise coprime integers greater than one, $a_1, \dots, a_k$ be $k$ arbitrary integers, and $N$ be the product $n_1 \cdots n_k$. The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution $(x, x_1, \dots, x_k)$ such that $0 \le x < N$, and that the other solutions are obtained by adding to $x$ a multiple of $N$:
$x = a_1 + n_1 x_1 = a_2 + n_2 x_2 = \cdots = a_k + n_k x_k.$
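As a concrete illustration (a sketch, not from the article), the following Python snippet computes the unique solution $0 \le x < N$ by the classical constructive proof, using Python's built-in modular inverse:

```python
# Minimal sketch of the Chinese remainder theorem for pairwise coprime
# moduli: find the unique 0 <= x < N with x = a_i (mod n_i) for all i.

from math import prod

def crt(residues: list[int], moduli: list[int]) -> int:
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        m_i = N // n_i                   # product of the other moduli
        # pow(m_i, -1, n_i) is the inverse of m_i modulo n_i (Python 3.8+)
        x += a_i * m_i * pow(m_i, -1, n_i)
    return x % N

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  the classic answer 23
assert crt([2, 3, 2], [3, 5, 7]) == 23
```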
System of linear Diophantine equations
More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written
$AX = C,$
where $A$ is an $m \times n$ matrix of integers, $X$ is an $n \times 1$ column matrix of unknowns and $C$ is an $m \times 1$ column matrix of integers.
The computation of the Smith normal form of $A$ provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) $U$ and $V$ of respective dimensions $m \times m$ and $n \times n$, such that the matrix
$B = [b_{i,j}] = UAV$
is such that $b_{i,i}$ is not zero for $i$ not greater than some integer $k$, and all the other entries are zero. The system to be solved may thus be rewritten as
$B(V^{-1}X) = UC.$
Calling $y_i$ the entries of $V^{-1}X$ and $d_i$ those of $D = UC$, this leads to the system
$b_{i,i} y_i = d_i$ for $1 \le i \le k$, and $0 \cdot y_i = d_i$ for $k < i \le n$.
This system is equivalent to the given one in the following sense: A column matrix of integers $X$ is a solution of the given system if and only if $X = VY$ for some column matrix of integers $Y$ such that $BY = D$.
It follows that the system has a solution if and only if $b_{i,i}$ divides $d_i$ for $i \le k$ and $d_i = 0$ for $i > k$. If this condition is fulfilled, the solutions of the given system are
$V \begin{pmatrix} d_1/b_{1,1} \\ \vdots \\ d_k/b_{k,k} \\ h_{k+1} \\ \vdots \\ h_n \end{pmatrix},$
where $h_{k+1}, \dots, h_n$ are arbitrary integers.
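As a hand-checked toy instance of this recipe (the matrices below are assumptions chosen for illustration, not taken from the article), take $A = \begin{pmatrix} 2 & 4 \\ 0 & 6 \end{pmatrix}$, whose Smith normal form is $\operatorname{diag}(2, 6)$ with $U = I$ and $V = \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}$:

```python
# Tiny worked instance of the Smith-normal-form solving recipe above
# (values chosen by hand for illustration).

import numpy as np

A = np.array([[2, 4], [0, 6]])
U = np.array([[1, 0], [0, 1]])       # unimodular
V = np.array([[1, -2], [0, 1]])      # unimodular
B = U @ A @ V                        # diag(2, 6), the Smith normal form

C = np.array([2, 12])
D = U @ C                            # D = (2, 12)

# Solvable because b_11 = 2 divides d_1 = 2 and b_22 = 6 divides d_2 = 12.
Y = D // np.diag(B)                  # Y = (1, 2)
X = V @ Y                            # X = (-3, 2)
assert (A @ X == C).all()
```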
Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."
Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that also include inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.
Homogeneous equations
A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem
$x^d + y^d - z^d = 0.$
As a homogeneous polynomial in $n$ indeterminates defines a hypersurface in the projective space of dimension $n - 1$, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface.
Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent with testing if a rational number is the $d$-th power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for $d > 2$, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved.
For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem).
For the degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.
Degree two
Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.
For proving that there is no solution, one may reduce the equation modulo $p$. For example, the Diophantine equation
$x^2 + y^2 = 3z^2$
does not have any other solution than the trivial solution $(0, 0, 0)$. In fact, by dividing $x$, $y$, and $z$ by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if $x$, $y$, and $z$ are all even, and are thus not coprime. Thus the only solution is the trivial solution $(0, 0, 0)$. This shows that there is no rational point on a circle of radius $\sqrt{3}$ centered at the origin.
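A quick empirical check of this argument (an illustration, not part of the article) is to search a small box of integers for non-trivial solutions:

```python
# Empirical check of the modulo-4 argument above: search a small box
# for non-trivial integer solutions of x^2 + y^2 = 3*z^2.

from itertools import product

solutions = [
    (x, y, z)
    for x, y, z in product(range(-50, 51), repeat=3)
    if x * x + y * y == 3 * z * z and (x, y, z) != (0, 0, 0)
]
print(solutions)   # [] -- only the trivial solution exists
```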
More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists.
If a non-trivial integer solution is known, one may produce all other solutions in the following way.
Geometric interpretation
Let
$Q(x_1, \dots, x_n) = 0$
be a homogeneous Diophantine equation, where $Q(x_1, \dots, x_n)$ is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all $x_i$ are zero. If $(a_1, \dots, a_n)$ is a non-trivial integer solution of this equation, then $(a_1, \dots, a_n)$ are the homogeneous coordinates of a rational point of the hypersurface defined by $Q$. Conversely, if $(p_1/q, \dots, p_n/q)$ are homogeneous coordinates of a rational point of this hypersurface, where $q, p_1, \dots, p_n$ are integers, then $(p_1, \dots, p_n)$ is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form
$\left(k\,\frac{p_1}{d}, \dots, k\,\frac{p_n}{d}\right),$
where $k$ is any integer, and $d$ is the greatest common divisor of the $p_i$.
It follows that solving the Diophantine equation $Q(x_1, \dots, x_n) = 0$ is completely reduced to finding the rational points of the corresponding projective hypersurface.
Parameterization
Let now $A = (a_1, \dots, a_n)$ be an integer solution of the equation $Q(x_1, \dots, x_n) = 0$. As $Q$ is a polynomial of degree two, a line passing through $A$ crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through $A$, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.
More precisely, one may proceed as follows.
By permuting the indices, one may suppose, without loss of generality, that $a_n \ne 0$. Then one may pass to the affine case by considering the affine hypersurface defined by
$q(x_1, \dots, x_{n-1}) = Q(x_1, \dots, x_{n-1}, 1),$
which has the rational point
$R = (r_1, \dots, r_{n-1}),$ with $r_i = a_i/a_n$.
If this rational point is a singular point, that is if all partial derivatives are zero at $R$, all lines passing through $R$ are contained in the hypersurface, and one has a cone. The change of variables
$y_i = x_i - r_i$
does not change the rational points, and transforms $q$ into a homogeneous polynomial in $n - 1$ variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.
If the polynomial $q$ is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case.
In the general case, consider the parametric equation of a line passing through $R$:
$x_i = r_i + t_i(x_{n-1} - r_{n-1}) \quad \text{for } i = 1, \dots, n-2.$
Substituting this in $q$, one gets a polynomial of degree two in $x_{n-1}$, that is zero for $x_{n-1} = r_{n-1}$. It is thus divisible by $x_{n-1} - r_{n-1}$. The quotient is linear in $x_{n-1}$, and may be solved for expressing $x_{n-1}$ as a quotient of two polynomials of degree at most two in $t_1, \dots, t_{n-2}$ with integer coefficients:
$x_{n-1} = \frac{f_{n-1}(t_1, \dots, t_{n-2})}{f_n(t_1, \dots, t_{n-2})}.$
Substituting this in the expressions for $x_1, \dots, x_{n-2}$, one gets, for $i = 1, \dots, n-1$,
$x_i = \frac{f_i(t_1, \dots, t_{n-2})}{f_n(t_1, \dots, t_{n-2})},$
where $f_1, \dots, f_n$ are polynomials of degree at most two with integer coefficients.
Then, one can return to the homogeneous case. Let, for $i = 1, \dots, n$,
$F_i(t_1, \dots, t_{n-1})$
be the homogenization of $f_i$. These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by $Q$:
$x_1 = F_1(t_1, \dots, t_{n-1}), \quad \dots, \quad x_n = F_n(t_1, \dots, t_{n-1}).$
A point of the projective hypersurface defined by $Q$ is rational if and only if it may be obtained from rational values of $t_1, \dots, t_{n-1}$. As $F_1, \dots, F_n$ are homogeneous polynomials, the point is not changed if all $t_i$ are multiplied by the same rational number. Thus, one may suppose that $t_1, \dots, t_{n-1}$ are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences $(x_1, \dots, x_n)$ where, for $i = 1, \dots, n$,
$x_i = k\,\frac{F_i(t_1, \dots, t_{n-1})}{d},$
where $k$ is an integer, $t_1, \dots, t_{n-1}$ are coprime integers, and $d$ is the greatest common divisor of the $n$ integers $F_i(t_1, \dots, t_{n-1})$.
One could hope that the coprimality of the $t_i$ could imply that $d = 1$. Unfortunately this is not the case, as shown in the next section.
Example of Pythagorean triples
The equation
$x^2 + y^2 = z^2$
is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.
For retrieving exactly Euclid's formula, we start from the solution $(-1, 0, 1)$, corresponding to the point $(-1, 0)$ of the unit circle. A line passing through this point may be parameterized by its slope:
$y = t(x + 1).$
Putting this in the circle equation
$x^2 + y^2 - 1 = 0,$
one gets
$x^2 - 1 + t^2(x + 1)^2 = 0.$
Dividing by $x + 1$ results in
$x - 1 + t^2(x + 1) = 0,$
which is easy to solve in $x$:
$x = \frac{1 - t^2}{1 + t^2}.$
It follows
$y = t(x + 1) = \frac{2t}{1 + t^2}.$
Homogenizing as described above one gets all solutions as
$x = k\,\frac{s^2 - t^2}{d}, \quad y = k\,\frac{2st}{d}, \quad z = k\,\frac{s^2 + t^2}{d},$
where $k$ is any integer, $s$ and $t$ are coprime integers, and $d$ is the greatest common divisor of the three numerators. In fact, $d = 2$ if $s$ and $t$ are both odd, and $d = 1$ if one is odd and the other is even.
The primitive triples are the solutions where $k = 1$ and $s > t > 0$.
This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that $x$, $y$, and $z$ are all positive, and does not distinguish between two triples that differ by the exchange of $x$ and $y$.
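As a small illustrative sketch (not from the article), the parameterization above can be turned directly into a generator of primitive Pythagorean triples by taking $k = 1$, $s > t > 0$ coprime, and dividing by $d$:

```python
# Enumerate primitive Pythagorean triples from the parameterization
# above: k = 1, s > t > 0, gcd(s, t) = 1, and d = 2 when s and t are
# both odd, d = 1 otherwise.

from math import gcd

def primitive_triples(limit: int):
    for s in range(2, limit):
        for t in range(1, s):
            if gcd(s, t) != 1:
                continue
            d = 2 if (s % 2 == 1 and t % 2 == 1) else 1
            yield (s*s - t*t) // d, 2*s*t // d, (s*s + t*t) // d

for x, y, z in primitive_triples(5):
    assert x*x + y*y == z*z
    print(x, y, z)   # (3, 4, 5), (4, 3, 5), (5, 12, 13), ...
```

Note that both $(3, 4, 5)$ and $(4, 3, 5)$ appear, matching the remark that this description distinguishes triples that differ by exchanging $x$ and $y$.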
Diophantine analysis
Typical questions
The questions asked in Diophantine analysis include:
Are there any solutions?
Are there any solutions beyond some that are easily found by inspection?
Are there finitely or infinitely many solutions?
Can all solutions be found in theory?
Can one in practice compute a full list of solutions?
These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.
Typical problem
The given information is that a father's age is 1 less than twice that of his son, and that the digits $AB$ making up the father's age are reversed in the son's age (i.e. $BA$). This leads to the equation $10A + B = 2(10B + A) - 1$, thus $19B - 8A = 1$. Inspection gives the result $A = 7$, $B = 3$, and thus $AB$ equals 73 years and $BA$ equals 37 years. One may easily show that there is not any other solution with $A$ and $B$ positive integers less than 10.
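The "inspection" step can be carried out mechanically; a minimal Python sketch (illustrative only) searches the digit pairs:

```python
# Direct search over digit pairs for the age puzzle above:
# find digits A, B in 1..9 with 19*B - 8*A == 1.

solutions = [
    (a, b)
    for a in range(1, 10)
    for b in range(1, 10)
    if 19 * b - 8 * a == 1
]
print(solutions)          # [(7, 3)] -> father 73, son 37
```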
Many well known puzzles in the field of recreational mathematics lead to diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.
17th and 18th centuries
In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation $a^n + b^n = c^n$ has no solutions for any $n$ higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.
In 1657, Fermat attempted to solve the Diophantine equation $61x^2 + 1 = y^2$ (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is $x = 226153980$, $y = 1766319049$ (see Chakravala method).
Hilbert's tenth problem
In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.
Diophantine geometry
Diophantine geometry is the application of techniques from algebraic geometry to Diophantine problems, considering equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field $K$, when $K$ is not algebraically closed.
Modern research
The oldest general method for solving a Diophantine equation, or for proving that there is no solution, is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations.
The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.
During the 20th century, a new approach has been deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates.
This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.
Infinite Diophantine equations
An example of an infinite Diophantine equation is:
$n = a^2 + 2b^2 + 3c^2 + 4d^2 + 5e^2 + \cdots,$
which can be expressed as "How many ways can a given integer $n$ be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each $n$ forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive $n$. Compare this to:
$n = a^2 + 4b^2 + 9c^2 + 16d^2 + 25e^2 + \cdots,$
which does not always have a solution for positive $n$.
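As an illustrative sketch (not from the article), the number of such representations for small $n$ can be counted by a finite truncation of the first form: for a given $n$, any unknown whose coefficient exceeds $n$ must be zero, so the search is exact. The sketch below counts representations with non-negative unknowns:

```python
# Count representations n = a^2 + 2*b^2 + 3*c^2 + ... with non-negative
# unknowns. Coefficients larger than the remaining value cannot
# contribute a non-zero term, so the recursion terminates.

from functools import lru_cache

def representations(n: int) -> int:
    @lru_cache(maxsize=None)
    def count(remaining: int, k: int) -> int:
        if remaining == 0:
            return 1            # all later unknowns are forced to zero
        if k > remaining:       # k * x^2 > remaining for any x >= 1
            return 0
        total, x = 0, 0
        while k * x * x <= remaining:
            total += count(remaining - k * x * x, k + 1)
            x += 1
        return total

    return count(n, 1)

print([representations(n) for n in range(1, 8)])
```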
Exponential Diophantine equations
If a Diophantine equation has one or more additional variables occurring as exponents, it is an exponential Diophantine equation. Examples include:
the Ramanujan–Nagell equation, $2^n - 7 = x^2$ (a small brute-force check follows this list)
the equation of the Fermat–Catalan conjecture and Beal's conjecture, $a^m + b^n = c^k$, with inequality restrictions on the exponents
the Erdős–Moser equation, $1^k + 2^k + \cdots + (m - 1)^k = m^k$
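As promised above (an illustration, not from the article), a brute-force search over small exponents recovers the solutions of the Ramanujan–Nagell equation; the five hits below ($n$ = 3, 4, 5, 7, 15) are in fact the only solutions:

```python
# Brute-force search for solutions of the Ramanujan-Nagell equation
# 2^n - 7 = x^2 with small n.

from math import isqrt

for n in range(3, 60):
    value = 2**n - 7
    x = isqrt(value)
    if x * x == value:
        print(n, x)   # (3, 1) (4, 3) (5, 5) (7, 11) (15, 181)
```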
A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.
| Mathematics | Other | null |
9146 | https://en.wikipedia.org/wiki/Dolly%20%28sheep%29 | Dolly (sheep) | Dolly (5 July 1996 – 14 February 2003) was a female Finn-Dorset sheep and the first mammal that was cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned.
The employment of adult somatic cells in lieu of embryonic stem cells for cloning emerged from the foundational work of John Gurdon, who cloned African clawed frogs in 1958 with this approach. The successful cloning of Dolly led to widespread advancements within stem cell research, including the discovery of induced pluripotent stem cells.
Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanized at the age of six years due to a progressive lung disease. No cause which linked the disease to her cloning was found.
Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003.
Genesis
Dolly was cloned by Keith Campbell, Ian Wilmut and colleagues at the Roslin Institute, part of the University of Edinburgh, Scotland, and the biotechnology company PPL Therapeutics, based near Edinburgh. The funding for Dolly's cloning was provided by PPL Therapeutics and the Ministry of Agriculture. She was born on 5 July 1996. She has been called "the world's most famous sheep" by sources including BBC News and Scientific American.
The cell used as the donor for the cloning of Dolly was taken from a mammary gland, and the production of a healthy clone, therefore, proved that a cell taken from a specific part of the body could recreate a whole individual. On Dolly's name, Wilmut stated "Dolly is derived from a mammary gland cell and we couldn't think of a more impressive pair of glands than Dolly Parton's."
Birth
Dolly was born on 5 July 1996 and had three mothers: one provided the egg, another the DNA, and a third carried the cloned embryo to term. She was created using the technique of somatic cell nuclear transfer, where the cell nucleus from an adult cell is transferred into an unfertilized oocyte (developing egg cell) that has had its cell nucleus removed. The hybrid cell is then stimulated to divide by an electric shock, and when it develops into a blastocyst it is implanted in a surrogate mother. Dolly was the first clone produced from a cell taken from an adult mammal. The production of Dolly showed that genes in the nucleus of such a mature differentiated somatic cell are still capable of reverting to an embryonic totipotent state, creating a cell that can then go on to develop into any part of an animal.
Dolly's existence was announced to the public on 22 February 1997. It gained much attention in the media. A commercial with Scottish scientists playing with sheep was aired on TV, and a special report in Time magazine featured Dolly. Science featured Dolly as the breakthrough of the year. Even though Dolly was not the first animal cloned, she received media attention because she was the first cloned from an adult cell.
Life
Dolly lived her entire life at the Roslin Institute in Midlothian. There she was bred with a Welsh Mountain ram and produced six lambs in total. Her first lamb, named Bonnie, was born in April 1998. The next year, Dolly produced twin lambs Sally and Rosie; further, she gave birth to triplets Lucy, Darcy and Cotton in 2000. In late 2001, at the age of four, Dolly developed arthritis and started to have difficulty walking. This was treated with anti-inflammatory drugs.
Death
On 14 February 2003, Dolly was euthanised because she had a progressive lung disease and severe arthritis. A Finn Dorset such as Dolly has a life expectancy of around 11 to 12 years, but Dolly lived 6.5 years. A post-mortem examination showed she had a form of lung cancer called ovine pulmonary adenocarcinoma, also known as Jaagsiekte, which is a fairly common disease of sheep and is caused by the retrovirus JSRV. Roslin scientists stated that they did not think there was a connection with Dolly being a clone, and that other sheep in the same flock had died of the same disease. Such lung diseases are a particular danger for sheep kept indoors, and Dolly had to sleep inside for security reasons.
Some in the press speculated that a contributing factor to Dolly's death was that she could have been born with a genetic age of six years, the same age as the sheep from which she was cloned. One basis for this idea was the finding that Dolly's telomeres were short, which is typically a result of the aging process. The Roslin Institute stated that intensive health screening did not reveal any abnormalities in Dolly that could have come from advanced aging.
In 2016, scientists reported no defects in thirteen cloned sheep, including four from the same cell line as Dolly. The first study to review the long-term health outcomes of cloning, the authors found no evidence of late-onset, non-communicable diseases other than some minor examples of osteoarthritis and concluded "We could find no evidence, therefore, of a detrimental long-term effect of cloning by SCNT on the health of aged offspring among our cohort."
After her death Dolly's body was preserved via taxidermy and is currently on display at the National Museum of Scotland in Edinburgh.
Legacy
After cloning was successfully demonstrated through the production of Dolly, many other large mammals were cloned, including pigs, deer, horses and bulls. The attempt to clone argali (mountain sheep) did not produce viable embryos. The attempt to clone a banteng bull was more successful, as were the attempts to clone mouflon (a form of wild sheep), both resulting in viable offspring. The reprogramming process that cells need to go through during cloning is not perfect and embryos produced by nuclear transfer often show abnormal development. Making cloned mammals was highly inefficient: in 1996, Dolly was the only lamb that survived to adulthood from 277 attempts. By 2014, Chinese scientists were reported to have 70–80% success rates cloning pigs, and in 2016, a Korean company, Sooam Biotech, was producing 500 cloned embryos a day. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.
Cloning may have uses in preserving endangered species, and may become a viable tool for reviving extinct species. In January 2009, scientists from the Centre of Food Technology and Research of Aragon in northern Spain announced the cloning of the Pyrenean ibex, a form of wild mountain goat, which was officially declared extinct in 2000. Although the newborn ibex died shortly after birth due to physical defects in its lungs, it is the first time an extinct animal has been cloned, and may open doors for saving endangered and newly extinct species by resurrecting them from frozen tissue.
In July 2016, four identical clones of Dolly (Daisy, Debbie, Dianna, and Denise) were alive and healthy at nine years old.
Scientific American concluded in 2016 that the main legacy of Dolly has not been cloning of animals but in advances into stem cell research. Gene targeting was added in 2000, when researchers cloned female lamb Diana from sheep DNA altered to contain the human gene for alpha 1-antitrypsin. The human gene was specifically activated in the ewe’s mammary gland, so Diana produced milk containing human alpha 1-antitrypsin. After Dolly, researchers realised that ordinary cells could be reprogrammed to induced pluripotent stem cells, which can be grown into any tissue.
The first successful cloning of a primate species was reported in January 2018, using the same method which produced Dolly. Two identical clones of a macaque monkey, Zhong Zhong and Hua Hua, were created by researchers in China and were born in late 2017.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, again using this method, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.
Dolly in popular culture
In 2003, the Belgian artist Dominique Goblet published a short comic strip about Dolly the cloned sheep with the title: “2004 Apparition de Dolly dans la campagne anglaise”
"Dolly The Sheep" was initially released on November 13, 2012, as a flash game developed by the small game development company Pozirk Games, in which Dolly the cloned sheep is being chased by evil scientists. For some time the game was available to play online as well as on mobile devices. As of June 14, 2023, it is only available online for desktop/laptop computers.
| Biology and health sciences | Individual animals | Animals |
9228 | https://en.wikipedia.org/wiki/Earth | Earth | Earth is the third planet from the Sun and the only astronomical object known to harbor life. This is enabled by Earth being an ocean world, the only one in the Solar System sustaining liquid surface water. Almost all of Earth's water is contained in its global ocean, covering 70.8% of Earth's crust. The remaining 29.2% of Earth's crust is land, most of which is located in the form of continental landmasses within Earth's land hemisphere. Most of Earth's land is at least somewhat humid and covered by vegetation, while large sheets of ice at Earth's polar deserts retain more water than Earth's groundwater, lakes, rivers, and atmospheric water combined. Earth's crust consists of slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth has a liquid outer core that generates a magnetosphere capable of deflecting most of the destructive solar winds and cosmic radiation.
Earth has a dynamic atmosphere, which sustains Earth's surface conditions and protects it from most meteoroids and UV-light at entry. It has a composition of primarily nitrogen and oxygen. Water vapor is widely present in the atmosphere, forming clouds that cover most of the planet. The water vapor acts as a greenhouse gas and, together with other greenhouse gases in the atmosphere, particularly carbon dioxide (CO2), creates the conditions for both liquid surface water and water vapor to persist via the capturing of energy from the Sun's light. This process maintains the current average surface temperature of , at which water is liquid under normal atmospheric pressure. Differences in the amount of captured energy between geographic regions (as with the equatorial region receiving more sunlight than the polar regions) drive atmospheric and ocean currents, producing a global climate system with different climate regions, and a range of weather phenomena such as precipitation, allowing components such as nitrogen to cycle.
Earth is rounded into an ellipsoid with a circumference of about 40,000 km. It is the densest planet in the Solar System. Of the four rocky planets, it is the largest and most massive. Earth is about eight light-minutes away from the Sun and orbits it, taking a year (about 365.25 days) to complete one revolution. Earth rotates around its own axis in slightly less than a day (in about 23 hours and 56 minutes). Earth's axis of rotation is tilted with respect to the perpendicular to its orbital plane around the Sun, producing seasons. Earth is orbited by one permanent natural satellite, the Moon, which orbits Earth at 384,400 km (1.28 light seconds) and is roughly a quarter as wide as Earth. The Moon's gravity helps stabilize Earth's axis, causes tides and gradually slows Earth's rotation. Tidal locking has made the Moon always face Earth with the same side.
Earth, like most other bodies in the Solar System, formed 4.5 billion years ago from gas and dust in the early Solar System. During the first billion years of Earth's history, the ocean formed and then life developed within it. Life spread globally and has been altering Earth's atmosphere and surface, leading to the Great Oxidation Event two billion years ago. Humans emerged 300,000 years ago in Africa and have spread across every continent on Earth. Humans depend on Earth's biosphere and natural resources for their survival, but have increasingly impacted the planet's environment. Humanity's current impact on Earth's climate and biosphere is unsustainable, threatening the livelihood of humans and many other forms of life, and causing widespread extinctions.
Etymology
The Modern English word Earth developed, via Middle English, from an Old English noun most often spelled . It has cognates in every Germanic language, and their ancestral root has been reconstructed as *erþō. In its earliest attestation, the word eorðe was used to translate the many senses of Latin and Greek γῆ gē: the ground, its soil, dry land, the human world, the surface of the world (including the sea), and the globe itself. As with Roman Terra/Tellūs and Greek Gaia, Earth may have been a personified goddess in Germanic paganism: late Norse mythology included Jörð ("Earth"), a giantess often given as the mother of Thor.
Historically, "Earth" has been written in lowercase. Beginning with the use of Early Middle English, its definite sense as "the globe" was expressed as "the earth". By the era of Early Modern English, capitalization of nouns began to prevail, and the earth was also written the Earth, particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as Earth, by analogy with the names of the other planets, though "earth" and forms with "the earth" remain common. House styles now vary: Oxford spelling recognizes the lowercase form as the more common, with the capitalized form an acceptable variant. Another convention capitalizes "Earth" when appearing as a name, such as a description of the "Earth's atmosphere", but employs the lowercase when it is preceded by "the", such as "the atmosphere of the earth". It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?"
The name Terra occasionally is used in scientific writing and especially in science fiction to distinguish humanity's inhabited planet from others, while in poetry Tellus has been used to denote personification of the Earth. Terra is also the name of the planet in some Romance languages, languages that evolved from Latin, like Italian and Portuguese, while in other Romance languages the word gave rise to names with slightly altered spellings, like the Spanish Tierra and the French Terre. The Latinate form Gæa or Gaea () of the Greek poetic name Gaia (; or ) is rare, though the alternative spelling Gaia has become common due to the Gaia hypothesis, in which case its pronunciation is rather than the more classical English .
There are a number of adjectives for the planet Earth. The word "earthly" is derived from "Earth". From the Latin Terra comes terran , terrestrial , and (via French) terrene , and from the Latin Tellus comes tellurian and telluric.
Natural history
Formation
The oldest material found in the Solar System is dated to about 4.567 Ga (billion years) ago. By about 4.54 Ga the primordial Earth had formed. The bodies in the Solar System formed and evolved with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides). According to nebular theory, planetesimals formed by accretion, with the primordial Earth being estimated as likely taking anywhere from 70 to 100 million years to form.
Estimates of the age of the Moon range from 4.5 Ga to significantly younger. A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object with about 10% of Earth's mass, named Theia, collided with Earth. It hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.0 and 3.8 Ga, numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth.
After formation
Earth's atmosphere and oceans were formed by volcanic activity and outgassing. Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets. Sufficient water to fill the oceans may have been on Earth since it formed. In this model, atmospheric greenhouse gases kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. By , Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind.
As the molten outer layer of Earth cooled it formed the first solid crust, which is thought to have been mafic in composition. The first continental crust, which was more felsic in composition, formed by the partial melting of this mafic crust. The presence of grains of the mineral zircon of Hadean age in Eoarchean sedimentary rocks suggests that at least some felsic crust existed as early as , only after Earth's formation. There are two main models of how this initial small volume of continental crust evolved to reach its current abundance: (1) a relatively steady growth up to the present day, which is supported by the radiometric dating of continental crust globally and (2) an initial rapid growth in the volume of continental crust during the Archean, forming the bulk of the continental crust that now exists, which is supported by isotopic evidence from hafnium in zircons and neodymium in sedimentary rocks. The two models and the data that support them can be reconciled by large-scale recycling of the continental crust, particularly during the early stages of Earth's history.
New continental crust forms as a result of plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, tectonic forces have caused areas of continental crust to group together to form supercontinents that have subsequently broken apart. At approximately , one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia at , then finally Pangaea, which also began to break apart at .
The most recent pattern of ice ages began about , and then intensified during the Pleistocene about . High- and middle-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every 21,000, 41,000 and 100,000 years. The Last Glacial Period, colloquially called the "last ice age", covered large parts of the continents, to the middle latitudes, in ice and ended about 11,700 years ago.
Origin of life and evolution
Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose. The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen (O2) accumulated in the atmosphere and, due to interaction with ultraviolet solar radiation, formed a protective ozone layer (O3) in the upper atmosphere. The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes. True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface. Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia, biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland, and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia. The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms.
During the Neoproterozoic, much of Earth might have been covered in ice. This hypothesis has been termed "Snowball Earth", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity. Following the Cambrian explosion, there have been at least five major mass extinctions and many minor ones. Apart from the proposed current Holocene extinction event, the most recent was 66 million years ago, when an asteroid impact triggered the extinction of non-avian dinosaurs and other large reptiles, but largely spared small animals such as insects, mammals, lizards and birds. Mammalian life has diversified since then, and several million years ago, an African ape species gained the ability to stand upright. This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and the nature and quantity of other life forms that continues to this day.
Future
Earth's expected long-term future is tied to that of the Sun. Over the next , solar luminosity will increase by 10%, and over the next by 40%. Earth's increasing surface temperature will accelerate the inorganic carbon cycle, possibly reducing CO2 concentration to levels lethally low for current plants ( for C4 photosynthesis) in approximately . A lack of vegetation would result in the loss of oxygen in the atmosphere, making current animal life impossible. Due to the increased luminosity, Earth's mean temperature may reach in 1.5 billion years, and all ocean water will evaporate and be lost to space, which may trigger a runaway greenhouse effect, within an estimated 1.6 to 3 billion years. Even if the Sun were stable, a fraction of the water in the modern oceans will descend to the mantle, due to reduced steam venting from mid-ocean ridges.
The Sun will evolve to become a red giant in about . Models predict that the Sun will expand to roughly , about 250 times its present radius. Earth's fate is less clear. As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to an orbit from the Sun when the star reaches its maximum radius, otherwise, with tidal effects, it may enter the Sun's atmosphere and be vaporized.
Physical characteristics
Size and shape
Earth has a rounded shape, through hydrostatic equilibrium, with an average diameter of about 12,742 kilometres, making it the fifth largest planetary-sized object and the largest terrestrial object of the Solar System.
Due to Earth's rotation it has the shape of an ellipsoid, bulging at its equator; its diameter is longer there than at its poles. Earth's shape also has local topographic variations; the largest local variations, such as the Mariana Trench ( below local sea level), shorten Earth's average radius by 0.17%, while Mount Everest ( above local sea level) lengthens it by 0.14%. Since Earth's surface is farthest out from its center of mass at its equatorial bulge, the summit of the volcano Chimborazo in Ecuador () is its farthest point out. Parallel to the rigid land topography, the ocean exhibits a more dynamic topography.
To measure the local variation of Earth's topography, geodesy employs an idealized Earth producing a geoid shape. Such a shape is gained if the ocean is idealized, covering Earth completely and without any perturbations such as tides and winds. The result is a smooth but irregular geoid surface, providing a mean sea level (MSL) as a reference level for topographic measurements.
Surface
Earth's surface is the boundary between the atmosphere, and the solid Earth and oceans. Defined in this way, it has an area of about . Earth can be divided into two hemispheres: by latitude into the polar Northern and Southern hemispheres; or by longitude into the continental Eastern and Western hemispheres.
Most of Earth's surface is ocean water: 70.8% or . This vast pool of salty water is often called the world ocean, and makes Earth with its dynamic hydrosphere a water world or ocean world. Indeed, in Earth's early history the ocean may have covered Earth completely. The world ocean is commonly divided into the Pacific Ocean, Atlantic Ocean, Indian Ocean, Antarctic or Southern Ocean, and Arctic Ocean, from largest to smallest. The ocean covers Earth's oceanic crust, with the shelf seas covering the shelves of the continental crust to a lesser extent. The oceanic crust forms large oceanic basins with features like abyssal plains, seamounts, submarine volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, and a globe-spanning mid-ocean ridge system. At Earth's polar regions, the ocean surface is covered by seasonally variable amounts of sea ice that often connects with polar land, permafrost and ice sheets, forming polar ice caps.
Earth's land covers 29.2%, or of Earth's surface. The land surface includes many islands around the globe, but most of the land surface is taken by the four continental landmasses, which are (in descending order): Africa-Eurasia, America (landmass), Antarctica, and Australia (landmass). These landmasses are further broken down and grouped into the continents. The terrain of the land surface varies greatly and consists of mountains, deserts, plains, plateaus, and other landforms. The elevation of the land surface varies from a low point of at the Dead Sea, to a maximum altitude of at the top of Mount Everest. The mean height of land above sea level is about .
Land can be covered by surface water, snow, ice, artificial structures or vegetation. Most of Earth's land hosts vegetation, but considerable amounts of land are ice sheets (10%, not including the equally large area of land under permafrost) or deserts (33%).
The pedosphere is the outermost layer of Earth's land surface and is composed of soil and subject to soil formation processes. Soil is crucial for land to be arable. Earth's total arable land is 10.7% of the land surface, with 1.3% being permanent cropland. Earth has an estimated of cropland and of pastureland.
The land surface and the ocean floor form the top of Earth's crust, which together with parts of the upper mantle form Earth's lithosphere. Earth's crust may be divided into oceanic and continental crust. Beneath the ocean-floor sediments, the oceanic crust is predominantly basaltic, while the continental crust may include lower density materials such as granite, sediments and metamorphic rocks. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form about 5% of the mass of the crust.
Earth's surface topography comprises both the topography of the ocean surface, and the shape of Earth's land surface. The submarine terrain of the ocean floor has an average bathymetric depth of 4 km, and is as varied as the terrain above sea level. Earth's surface is continually being shaped by internal plate tectonic processes including earthquakes and volcanism; by weathering and erosion driven by ice, water, wind and temperature; and by biological processes including the growth and decomposition of biomass into soil.
Tectonic plates
Earth's mechanically rigid outer layer of Earth's crust and upper mantle, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three boundaries types: at convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur. The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates.
As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than old. The oldest oceanic crust is located in the Western Pacific and is estimated to be old. By comparison, the oldest dated continental crust is , although zircons have been found preserved as clasts within Eoarchean sedimentary rocks that give ages up to , indicating that at least some continental crust existed at that time.
The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between . The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of and the Pacific Plate moving . At the other extreme, the slowest-moving plate is the South American Plate, progressing at a typical rate of .
Internal structure
Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about under the oceans to for the continents. The crust and the cold, rigid, top of the upper mantle are collectively known as the lithosphere, which is divided into independently moving tectonic plates.
Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low viscosity liquid outer core lies above a solid inner core. Earth's inner core may be rotating at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year, although both somewhat higher and much lower rates have also been proposed. The radius of the inner core is about one-fifth of that of Earth. The density increases with depth. Among the Solar System's planetary-sized objects, Earth is the object with the highest density.
Chemical composition
Earth's mass is approximately (). It is composed mostly of iron (32.1% by mass), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to gravitational separation, the core is primarily composed of the denser elements: iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The most common rock constituents of the crust are oxides. Over 99% of the crust is composed of various oxides of eleven elements, principally oxides containing silicon (the silicate minerals), aluminium, iron, calcium, magnesium, potassium, or sodium.
Internal heat
The major contributors to Earth's internal heat are primordial heat (heat left over from Earth's formation) and radiogenic heat (heat produced by radioactive decay). The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to , and the pressure could reach . Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately , twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today.
The mean heat loss from Earth is , for a global heat loss of . A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans.
Gravitational field
The gravity of Earth is the acceleration that is imparted to objects due to the distribution of mass within Earth. Near Earth's surface, gravitational acceleration is approximately . Local differences in topography, geology, and deeper tectonic structure cause local and broad regional differences in Earth's gravitational field, known as gravity anomalies.
Magnetic field
The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is , with a magnetic dipole moment of at epoch 2000, decreasing nearly 6% per century (although it still remains stronger than its long time average). The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago.
The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the day-side of the magnetosphere, to about 10 Earth radii, and extends the night-side magnetosphere into a long tail. Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bow shock precedes the day-side magnetosphere within the solar wind. Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates. The ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field, and the Van Allen radiation belts are formed by high-energy particles whose motion is essentially random, but contained in the magnetosphere. During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing an aurora.
Orbit and rotation
Rotation
Earth's rotation period relative to the Sun, its mean solar day, is of mean solar time (). Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day varies between 0 and 2 ms longer than the mean solar day.
Earth's rotation period relative to the fixed stars, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is of mean solar time (UT1). Earth's rotation period relative to the precessing or moving mean March equinox (when the Sun is at 90° on the equator) is of mean solar time (UT1). Thus the sidereal day is shorter than the stellar day by about 8.4 ms.
Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15'/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same.
Orbit
Earth orbits the Sun, making Earth the third-closest planet to the Sun and part of the inner Solar System. Earth's average orbital distance is about 150 million kilometres, which is the basis for the astronomical unit (AU) and is equal to roughly 8.3 light minutes or 380 times Earth's distance to the Moon. Earth orbits the Sun every 365.2564 mean solar days, or one sidereal year, with an apparent movement of the Sun in Earth's sky at a rate of about 1°/day eastward, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours (a solar day) for Earth to complete a full rotation about its axis so that the Sun returns to the meridian.
The orbital speed of Earth averages about , which is fast enough to travel a distance equal to Earth's diameter, about , in seven minutes, and the distance from Earth to the Moon, , in about 3.5 hours.
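A quick arithmetic check of these travel times follows; the round values used are assumptions supplied here (the exact figures were elided from the text), based on the commonly cited mean orbital speed of about 29.78 km/s, Earth diameter of about 12,742 km, and Earth–Moon distance of about 384,400 km:

```python
# Sanity check of the travel-time claims above, using commonly cited
# round values (assumed here, since the figures were elided):
ORBITAL_SPEED_KM_S = 29.78
EARTH_DIAMETER_KM = 12_742
MOON_DISTANCE_KM = 384_400

print(EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S / 60)     # ~7.1 minutes
print(MOON_DISTANCE_KM / ORBITAL_SPEED_KM_S / 3600)    # ~3.6 hours
```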
The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the Sun and Earth's north poles, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth-Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses.
The Hill sphere, or the sphere of gravitational influence, of Earth is about in radius. This is the maximum distance at which Earth's gravitational influence is stronger than that of the more distant Sun and planets. Objects must orbit Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun. Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm.
Axial tilt and seasons
The axial tilt of Earth is approximately 23.439281° relative to the axis of its orbit plane, always pointing towards the celestial poles. Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year. This causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and in the Southern Hemisphere when the Tropic of Capricorn faces the Sun. In each instance, winter occurs simultaneously in the opposite hemisphere.
During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. Above the Arctic Circle and below the Antarctic Circle there is no daylight at all for part of the year, causing a polar night, and this night extends for several months at the poles themselves. These same latitudes also experience a midnight sun, where the sun remains visible all day.
By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when Earth's rotational axis is aligned with its orbital axis. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped.
The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation, a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation.
Earth's annual orbit is elliptical rather than circular, and its closest approach to the Sun is called perihelion. In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates shift over time due to precession and changes to the orbit, the latter of which follows cyclical patterns known as Milankovitch cycles. The annual change in the Earth–Sun distance causes an increase of about 6.8% in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere.
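The quoted figure follows from the inverse-square law applied to Earth's orbital eccentricity; the sketch below assumes e ≈ 0.0167 (a value not given in the text) and lands within rounding of the article's 6.8%:

    # Inverse-square change in solar flux between perihelion and aphelion.
    e = 0.0167  # Earth's orbital eccentricity (assumed)
    flux_ratio = ((1 + e) / (1 - e)) ** 2
    print(f"{(flux_ratio - 1) * 100:.1f}%")  # ~6.9%, close to the quoted 6.8%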
Earth–Moon system
Moon
The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as "moons", after Earth's. The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth. This hypothesis explains the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust. Computer simulations suggest that two blob-like remnants of this protoplanet could be inside the Earth.
The gravitational attraction between Earth and the Moon causes lunar tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases. Due to their tidal interaction, the Moon recedes from Earth at the rate of approximately 38 mm/yr. Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 μs/yr—add up to significant changes. During the Ediacaran period (approximately 620 million years ago), for example, there were 400±7 days in a year, with each day lasting 21.9±0.4 hours.
The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon. Some theorists think that without this stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting large changes over millions of years, as is the case for Mars, though this is disputed.
Viewed from Earth, the Moon is just far enough away to have almost the same apparent size as the Sun. The angular sizes (or solid angles) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth.
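A quick check with approximate mean diameters and distances (values supplied here as assumptions, not taken from the text) confirms the near-equal angular sizes:

    import math

    # Small-angle size: theta ~ diameter / distance (radians).
    theta_sun = 1.3914e6 / 149.6e6   # Sun: diameter / distance, both in km
    theta_moon = 3474.8 / 384_400    # Moon: diameter / distance, both in km
    print(math.degrees(theta_sun), math.degrees(theta_moon))  # both ~0.5 degrees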
Asteroids and artificial satellites
Earth's co-orbital asteroid population consists of quasi-satellites, objects with a horseshoe orbit, and trojans. There are at least seven quasi-satellites, including 469219 Kamoʻoalewa, ranging in diameter from 10 m to 5,000 m. A trojan asteroid companion, 2010 TK7, is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid 2006 RH120 makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time.
As of 2021, there are 4,550 operational, human-made satellites orbiting Earth. There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris. Earth's largest artificial satellite is the International Space Station (ISS).
Hydrosphere
Earth's hydrosphere is the sum of Earth's water and its distribution. Most of Earth's hydrosphere consists of Earth's global ocean, but it also includes water in the atmosphere and on land, such as clouds, inland seas, lakes, rivers, and underground waters. The mass of the oceans is approximately 1.35×10^18 metric tons, or about 1/4400 of Earth's total mass. The oceans cover an area of 361.8 million km2 with a mean depth of 3,682 m, resulting in an estimated volume of 1.332 billion km3.
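A one-line consistency check of that mass fraction, assuming the standard value of about 5.97×10^21 metric tons for Earth's total mass:

    # Ocean mass as a fraction of Earth's total mass.
    ocean_mass_t = 1.35e18   # metric tons
    earth_mass_t = 5.972e21  # metric tons (5.972e24 kg, assumed here)
    print(earth_mass_t / ocean_mass_t)  # ~4,400: oceans are ~1/4400 of Earth's mass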
If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be about 2.7 km. About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers. Most of the remainder, about 30%, is ground water, with 1% surface water (covering only 2.8% of Earth's land) and other small forms of fresh water deposits such as permafrost, water vapor in the atmosphere, and biological binding.
In Earth's coldest regions, snow survives over the summer and changes into ice. This accumulated snow and ice eventually forms into glaciers, bodies of ice that flow under the influence of their own gravity. Alpine glaciers form in mountainous areas, whereas vast ice sheets form over land in polar regions. The flow of glaciers erodes the surface, changing it dramatically, with the formation of U-shaped valleys and other landforms. Sea ice in the Arctic covers an area about as big as the United States, although it is quickly retreating as a consequence of climate change.
The average salinity of Earth's oceans is about 35 grams of salt per kilogram of seawater (3.5% salt). Most of this salt was released from volcanic activity or extracted from cool igneous rocks. The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms. Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir. Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation.
The abundance of water, particularly liquid water, on Earth's surface is a unique feature that distinguishes it from other planets in the Solar System. Solar System planets with considerable atmospheres do partly host atmospheric water vapor, but they lack surface conditions for stable surface water. Although some moons show signs of large reservoirs of extraterrestrial liquid water, possibly with even more volume than Earth's ocean, all of them are large bodies of water beneath a kilometers-thick frozen surface layer.
Atmosphere
The atmospheric pressure at Earth's sea level averages 101.325 kPa, with a scale height of about 8.5 km. A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. Clouds cover around two-thirds of Earth's surface, more so over oceans than land. The height of the troposphere varies with latitude, ranging between 8 km at the poles and 17 km at the equator, with some variation resulting from weather and seasonal factors.
Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved around 2.7 billion years ago, forming the primarily nitrogen–oxygen atmosphere of today. This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric oxygen (O2) into ozone (O3). The ozone layer blocks ultraviolet solar radiation, permitting life on land. Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature. This last phenomenon is the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the surface, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be about −18 °C, in contrast to the current +15 °C, and life on Earth probably would not exist in its current form.
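The no-greenhouse figure can be reproduced with a back-of-the-envelope Stefan–Boltzmann estimate, treating Earth as a uniform blackbody absorber and assuming a Bond albedo of about 0.3 (the albedo is an assumption here; the 1361 W/m2 irradiance is quoted later in the article):

    # Effective (no-greenhouse) temperature from the Stefan-Boltzmann law:
    # T_eff = (S * (1 - A) / (4 * sigma)) ** 0.25
    S = 1361.0        # solar irradiance at Earth, W/m^2
    A = 0.30          # Bond albedo (approximate, assumed)
    sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    t_eff_k = (S * (1 - A) / (4 * sigma)) ** 0.25
    print(round(t_eff_k - 273.15, 1))  # ~ -18.5 C, versus the observed ~ +15 C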
Weather and climate
Earth's atmosphere has no definite boundary, gradually becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first 11 km of the surface; this lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy.
The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°. Ocean heat content and currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions.
Earth receives 1361 W/m2 of solar irradiance. The amount of solar energy that reaches Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about 0.4 °C per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates.
Further factors that affect a location's climate are its proximity to oceans, oceanic and atmospheric circulation, and topography. Places close to oceans typically have colder summers and warmer winters, because oceans can store large amounts of heat and winds transport the ocean's cold or heat to the land. Atmospheric circulation also plays an important role: San Francisco and Washington, DC are both coastal cities at about the same latitude, yet San Francisco's climate is significantly more moderate because the prevailing wind direction there is from sea to land. Finally, temperatures decrease with height, causing mountainous areas to be colder than low-lying areas.
Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation. Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region.
The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes. The Köppen system rates regions based on observed temperature and precipitation. Surface air temperature can rise to around 55 °C in hot deserts, such as Death Valley, and can fall as low as −89 °C in Antarctica.
Upper atmosphere
The upper atmosphere, the atmosphere above the troposphere, is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as 100 km above Earth's surface, is a working definition for the boundary between the atmosphere and outer space.
Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases. The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one. Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere. Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth. In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere.
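The mass dependence of escape can be illustrated by comparing root-mean-square thermal speeds with Earth's escape velocity; the 1,000 K upper-atmosphere temperature used below is an assumed, illustrative value:

    import math

    # RMS thermal speed v = sqrt(3 k T / m), versus Earth's escape velocity.
    k = 1.381e-23        # Boltzmann constant, J/K
    T = 1000.0           # rough upper-atmosphere temperature, K (assumed)
    m_h = 1.674e-27      # mass of a hydrogen atom, kg
    m_o = 2.657e-26      # mass of an oxygen atom, kg
    v_escape = 11_200.0  # Earth's escape velocity, m/s

    for name, m in [("H", m_h), ("O", m_o)]:
        v = math.sqrt(3 * k * T / m)
        print(name, round(v), "m/s")  # H ~5,000 m/s, O ~1,250 m/s
    # Neither mean speed exceeds 11.2 km/s, but the high-velocity tail of the
    # Maxwell-Boltzmann distribution does, and far more so for light hydrogen.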
Life on Earth
Earth is the only place known to have ever been habitable for life. Life developed in Earth's early bodies of water some hundred million years after the planet formed. Since then, life has shaped and inhabited many particular ecosystems on Earth and has eventually expanded globally, forming an overarching biosphere.
Life, in turn, has impacted Earth, significantly altering Earth's atmosphere and surface over long periods of time and causing changes like the Great Oxidation Event. Over time, life has also greatly diversified, allowing the biosphere to contain different biomes, which are inhabited by comparatively similar plants and animals. Different biomes developed at distinct elevations and water depths, at different latitudes and temperatures, and, on land, under different levels of humidity. Earth's species diversity and biomass reach their peak in shallow waters and in forests, particularly under equatorial, warm and humid conditions, while freezing polar regions, high altitudes, and extremely arid areas are relatively barren of plant and animal life.
Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain a metabolism. Plants and other organisms take up nutrients from water, soils and the atmosphere. These nutrients are constantly recycled between different species.
Extreme weather, such as tropical cyclones (including hurricanes and typhoons), occurs over most of Earth's surface and has a large impact on life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, blizzards, floods, droughts, wildfires, and other calamities and disasters. Human impact is felt in many areas due to pollution of the air and water, acid rain, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion. Human activities release greenhouse gases into the atmosphere which cause global warming. This is driving changes such as the melting of glaciers and ice sheets, a global rise in average sea levels, increased risk of drought and wildfires, and migration of species to colder areas.
Human geography
Originating from earlier primates in Eastern Africa 300,000 years ago, humans have since migrated and, with the advent of agriculture in the 10th millennium BC, increasingly settled Earth's land. In the 20th century, Antarctica became the last continent to see a first, and to this day limited, human presence.
Since the 19th century the human population has grown exponentially, reaching seven billion in the early 2010s, and is projected to peak at around ten billion in the second half of the 21st century. Most of the growth is expected to take place in sub-Saharan Africa.
The distribution and density of the human population varies greatly around the world, with the majority living in southern and eastern Asia and 90% inhabiting the Northern Hemisphere, partly due to the hemispherical predominance of the world's land mass: 68% of the world's land is in the Northern Hemisphere. Furthermore, since the 19th century humans have increasingly converged on urban areas, with the majority living in urban areas by the 21st century.
Humans have lived beyond Earth's surface only on a temporary basis, in a few special-purpose deep underground and underwater installations and a few space stations. Otherwise, virtually the entire human population remains on Earth's surface, fully depending on Earth and the environment it sustains. Since the second half of the 20th century, some hundreds of humans have temporarily stayed beyond Earth, a tiny fraction of whom have reached another celestial body, the Moon.
Earth has been subject to extensive human settlement, and humans have developed diverse societies and cultures. Most of Earth's land has been territorially claimed since the 19th century by sovereign states (countries) separated by political borders, and 205 such states exist today, with only parts of Antarctica and a few small regions remaining unclaimed. Most of these states together form the United Nations, the leading worldwide intergovernmental organization, which extends human governance over the ocean and Antarctica, and therefore all of Earth.
Natural resources and land use
Earth has resources that have been exploited by humans. Those termed non-renewable resources, such as fossil fuels, are only replenished over geological timescales. Large deposits of fossil fuels are obtained from Earth's crust, consisting of coal, petroleum, and natural gas. These deposits are used by humans both for energy production and as feedstock for chemical production. Mineral ore bodies have also been formed within the crust through a process of ore genesis, resulting from actions of magmatism, erosion, and plate tectonics. These metals and other elements are extracted by mining, a process which often brings environmental and health damage.
Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of organic waste. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends on dissolved nutrients washed down from the land. In 2019, 39 million km2 of Earth's land surface consisted of forest and woodlands, 12 million km2 was shrub and grassland, 40 million km2 were used for animal feed production and grazing, and 11 million km2 were cultivated as croplands. Of the 12–14% of ice-free land that is used for croplands, 2 percentage points were irrigated in 2015. Humans use building materials to construct shelters.
Humans and the environment
Human activities have impacted Earth's environments. Through activities such as the burning of fossil fuels, humans have been increasing the amount of greenhouse gases in the atmosphere, altering Earth's energy budget and climate. It is estimated that global temperatures in the year 2020 were 1.2 °C warmer than the preindustrial baseline. This increase in temperature, known as global warming, has contributed to the melting of glaciers, rising sea levels, increased risk of drought and wildfires, and migration of species to colder areas.
The concept of planetary boundaries was introduced to quantify humanity's impact on Earth. Of the nine identified boundaries, five have been crossed: Biosphere integrity, climate change, chemical pollution, destruction of wild habitats and the nitrogen cycle are thought to have passed the safe threshold. As of 2018, no country meets the basic needs of its population without transgressing planetary boundaries. It is thought possible to provide all basic physical needs globally within sustainable levels of resource use.
Cultural and historical viewpoint
Human cultures have developed many views of the planet. The standard astronomical symbols of Earth are a quartered circle, 🜨, representing the four corners of the world, and a globus cruciger, ♁. Earth is sometimes personified as a deity. In many cultures it is a mother goddess that is also the primary fertility deity. Creation myths in many religions involve the creation of Earth by a supernatural deity or deities. The Gaia hypothesis, developed in the mid-20th century, describes Earth's environments and life as a single self-regulating organism, leading to broad stabilization of the conditions of habitability.
Images of Earth taken from space, particularly during the Apollo program, have been credited with altering the way that people viewed the planet that they lived on, called the overview effect, emphasizing its beauty, uniqueness and apparent fragility. In particular, this caused a realization of the scope of effects from human activity on Earth's environment. Enabled by science, particularly Earth observation, humans have started to take action on environmental issues globally, acknowledging the impact of humans and the interconnectedness of Earth's environments.
Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in Ancient Greece by the idea of a spherical Earth, which was attributed to both the philosophers Pythagoras and Parmenides. Earth was generally believed to be the center of the universe until the 16th century, when scientists first concluded that it was a moving object, one of the planets of the Solar System.
It was only during the 19th century that geologists realized Earth's age was at least many millions of years. Lord Kelvin used thermodynamics to estimate the age of Earth to be between 20 million and 400 million years in 1864, sparking a vigorous debate on the subject; it was only when radioactivity and radioactive dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old.
| Physical sciences | null | null |
9236 | https://en.wikipedia.org/wiki/Evolution | Evolution | Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.
In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.
All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today.
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
Heredity
Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.
Sources of variation
Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.
An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.
About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.
One example of the phenotypic effect of mutations can be seen in wild boar piglets. They are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes; however, mutations in the melanocortin 1 receptor (MC1R) disrupt this pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour, as well as different mutations causing dominant black colouring.
Sex and recombination
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.
Gene flow
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
Epigenetics
Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
Evolutionary forces
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.
Natural selection
Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:
Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population; such traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in that allele likely becoming rarer; it is "selected against."
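To make these dynamics concrete, here is a minimal deterministic sketch of selection at a single haploid locus; the selection coefficient, starting frequency, and generation count are illustrative assumptions, not values from the text:

    # Deterministic one-locus haploid selection: allele A has relative
    # fitness 1 + s, so its frequency follows p' = p(1 + s) / (1 + p*s).
    s = 0.05   # selection coefficient (illustrative)
    p = 0.01   # starting frequency of the favoured allele (illustrative)

    for generation in range(400):
        p = p * (1 + s) / (1 + p * s)

    print(round(p, 3))  # close to 1.0: the favoured allele approaches fixation

Even a modest 5% fitness advantage carries the allele from 1% to near fixation within a few hundred generations.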
Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. Even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. However, re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.
Natural selection most generally makes nature the measure against which individuals, and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.
Genetic drift
Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
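A minimal Wright–Fisher sampling sketch illustrates both claims: with no selection, alleles wander to loss or fixation by sampling error alone, and they do so sooner in smaller populations (population sizes, replicate count, and seed are illustrative assumptions):

    import random

    # Wright-Fisher drift (no selection): each generation, all 2N allele
    # copies are resampled from the previous generation's allele frequency.
    def fixation_time(n_individuals, rng, p0=0.5):
        copies = 2 * n_individuals
        count = int(p0 * copies)
        gens = 0
        while 0 < count < copies:  # run until the allele is lost or fixed
            p = count / copies
            count = sum(rng.random() < p for _ in range(copies))
            gens += 1
        return gens

    rng = random.Random(42)
    for n in (20, 200):
        times = [fixation_time(n, rng) for _ in range(50)]
        print(n, sum(times) / len(times))  # smaller populations fix sooner on average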
According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the raw number of individuals in a population but a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
Mutation bias
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.
Genetic hitchhiking
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
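A minimal sketch of the linkage-disequilibrium calculation described above, using a small made-up sample of two-locus haplotypes (all values illustrative):

    # Linkage disequilibrium: D = p(AB) - p(A) * p(B),
    # computed from observed two-locus haplotype counts.
    haplotypes = ["AB", "AB", "AB", "Ab", "aB", "ab", "ab", "AB"]

    n = len(haplotypes)
    p_ab = haplotypes.count("AB") / n
    p_a = sum(h[0] == "A" for h in haplotypes) / n
    p_b = sum(h[1] == "B" for h in haplotypes) / n

    d = p_ab - p_a * p_b
    print(round(d, 3))  # positive D: A and B co-occur more often than expected by chance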
Sexual selection
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.
Natural outcomes
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation
Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, as in the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as the embryonic bones that develop into the jaw in other vertebrates instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Coevolution
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Cooperation
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship, as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, in which one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
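The logic of kin selection is commonly summarised by Hamilton's rule, which states that an allele for helping can spread when rB > C, where r is the genetic relatedness between helper and recipient, B the reproductive benefit to the recipient, and C the cost to the helper. The following minimal Python sketch illustrates the rule; the function name and the numbers are invented for the example, not taken from any particular study.

```python
def helping_favoured(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an allele promoting helping behaviour is favoured
    by selection when r * B > C (relatedness times benefit exceeds cost)."""
    return relatedness * benefit > cost

# Full siblings share half their alleles on average (r = 0.5), so helping
# a sibling is favoured whenever the benefit exceeds twice the cost.
print(helping_favoured(relatedness=0.5, benefit=3.0, cost=1.0))    # True
# First cousins (r = 0.125) need a benefit more than eight times the cost.
print(helping_favoured(relatedness=0.125, benefit=3.0, cost=1.0))  # False
```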
Speciation
Speciation is the process where a species diverges into two or more descendant species.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite their diversity, these concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect can cause rapid speciation, since increased inbreeding intensifies selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the best known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1,000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently, with only one-thousandth of 1% described.
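To make the quoted multipliers concrete, here is a back-of-the-envelope calculation; the background rate and species count used are rough orders of magnitude chosen purely for illustration, not measured values.

```python
# Back-of-the-envelope only: all figures are illustrative assumptions.
background_rate = 1.0       # extinctions per million species-years (E/MSY)
multiplier = 100            # lower bound of the 100-1,000x estimate above
n_species = 10_000_000      # an assumed count of extant species

extinctions_per_year = background_rate * multiplier * n_species / 1_000_000
print(f"~{extinctions_per_year:,.0f} extinctions per year at 100x background")
print(f"~{extinctions_per_year * 100:,.0f} per century")
```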
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Applications
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent that evolution, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programs. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
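As a concrete illustration of the genetic algorithms mentioned above, here is a minimal sketch, not a reproduction of any specific published system: a population of bit strings evolves toward an all-ones target through selection, crossover, and mutation.

```python
import random

TARGET_LEN, POP_SIZE, MUTATION_RATE = 20, 30, 0.02

def fitness(genome):
    return sum(genome)  # number of 1-bits; maximal when all bits are 1

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]  # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{TARGET_LEN} after {generation + 1} generations")
```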
Evolutionary history of life
Origin of life
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
Common descent
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
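The molecular-clock logic can be shown with a toy calculation; the substitution rate and the two short sequences below are invented for illustration and are far shorter than real genomic comparisons.

```python
# Toy molecular-clock estimate: all values are illustrative assumptions.
substitution_rate = 1e-9  # assumed substitutions per site per year, per lineage

seq_a = "ACGTACGTACGTACGTACGA"
seq_b = "ACGTACCTACGTACGTACGA"

differences = sum(a != b for a, b in zip(seq_a, seq_b))
divergence = differences / len(seq_a)  # fraction of sites that differ

# Divide by 2: mutations accumulate independently along both lineages
# since the split, so each lineage contributes half the differences.
years_since_split = divergence / (2 * substitution_rate)
print(f"estimated time since divergence: {years_since_split:,.0f} years")
```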
Evolution of life
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
History of evolutionary thought
Classical antiquity
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).
Middle Ages
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
Pre-Darwinian
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
Darwinian revolution
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Othniel C. Marsh, one of America's first professional paleontologists, was the first to provide solid fossil evidence to support Darwin's theory of evolution by unearthing the ancestors of the modern horse. In 1877, Marsh delivered a very influential speech before the annual meeting of the American Association for the Advancement of Science, providing a demonstrative argument for evolution. For the first time, Marsh traced the evolution of vertebrates from fish all the way to humans. Sparing no detail, he listed a wealth of fossil examples of past life forms. The significance of this speech was immediately recognized by the scientific community, and it was printed in its entirety in several scientific journals.
In 1880, Marsh caught the attention of the scientific world with the publication of Odontornithes: a Monograph on Extinct Birds of North America, which included his discoveries of birds with teeth. These skeletons helped bridge the gap between dinosaurs and birds, and provided invaluable support for Darwin's theory of evolution. Darwin wrote to Marsh saying, "Your work on these old birds & on the many fossil animals of N. America has afforded the best support to the theory of evolution, which has appeared within the last 20 years" (since Darwin's publication of Origin of Species). (Cianfaglione, Paul. "O.C. Marsh Odontornithes Monograph Still Relevant Today", Avian Musings, 20 Jul 2016.)
Pangenesis and heredity
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual selection. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.
The 'modern synthesis'
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed both within populations and across species, as well as the fossil transitions documented in palaeontology.
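The population-genetic core of the synthesis can be sketched with the textbook one-locus, two-allele viability-selection model; the fitness values and starting frequency below are arbitrary illustrations, not data.

```python
# One locus, two alleles (A and a); genotypes AA, Aa, aa have relative
# fitnesses w_AA, w_Aa, w_aa. Track the frequency p of allele A.
w_AA, w_Aa, w_aa = 1.0, 0.95, 0.90  # arbitrary example values
p = 0.01                            # assumed starting frequency of A

for generation in range(500):
    q = 1 - p
    mean_fitness = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa
    # Standard recursion: p' = (p^2 * w_AA + p*q * w_Aa) / mean fitness.
    p = (p*p*w_AA + p*q*w_Aa) / mean_fitness

print(f"frequency of A after 500 generations: {p:.3f}")
```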
Further syntheses
Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick, with the contribution of Rosalind Franklin, in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
Social and cultural responses
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
| Biology and health sciences | Science and medicine | null |
9241 | https://en.wikipedia.org/wiki/Euglenozoa | Euglenozoa | Euglenozoa are a large group of flagellate Discoba. They include a variety of common free-living species, as well as a few important parasites, some of which infect humans. Euglenozoa are represented by four major groups, i.e., Kinetoplastea, Diplonemea, Euglenida, and Symbiontida. Euglenozoa are unicellular, mostly around 15–40 μm in size, although some euglenids get up to 500 μm long.
Structure
Most euglenozoa have two flagella, which are inserted parallel to one another in an apical or subapical pocket. In some, these are associated with a cytostome or mouth, used to ingest bacteria or other small organisms. This is supported by one of three sets of microtubules that arise from the flagellar bases; the other two support the dorsal and ventral surfaces of the cell.
Some other euglenozoa feed through absorption, and many euglenids possess chloroplasts, the only eukaryotes outside Diaphoretickes to do so without performing kleptoplasty, and so obtain energy through photosynthesis. These chloroplasts are surrounded by three membranes and contain chlorophylls a and b, along with other pigments, so are probably derived from a green alga, captured long ago in an endosymbiosis by a basal euglenozoan. Reproduction occurs exclusively through cell division. During mitosis, the nuclear membrane remains intact, and the spindle microtubules form inside of it.
The group is characterized by the ultrastructure of the flagella. In addition to the normal supporting microtubules or axoneme, each contains a rod (called paraxonemal), which has a tubular structure in one flagellum and a latticed structure in the other. Based on this, two smaller groups have been included here: the diplonemids and Postgaardi.
Classification
Historically, euglenozoans have been treated as either plants or animals, depending on whether they belong to largely photosynthetic groups or not. Hence they have names based on either the International Code of Nomenclature for algae, fungi, and plants (ICNafp) or the International Code of Zoological Nomenclature (ICZN). For example, one family has the name Euglenaceae under the ICNafp and the name Euglenidae under the ICZN. As another example, the genus name Dinema is acceptable under the ICZN, but illegitimate under the ICNafp, as it is a later homonym of an orchid genus, so that the synonym Dinematomonas must be used instead.
The Euglenozoa are generally accepted as monophyletic. They are related to Percolozoa; the two share mitochondria with disk-shaped cristae, which only occurs in a few other groups.
Both probably belong to a larger group of eukaryotes called the Excavata. This grouping, though, has been challenged.
Phylogeny
A phylogeny has been proposed based on the work of Cavalier-Smith (2016), and a consensus phylogeny follows the review by Kostygov et al. (2021).
Taxonomy
Cavalier-Smith (2016/2017)
The following classification of Euglenozoa is as described by Cavalier-Smith in 2016, modified to include the new subphylum Plicomonada according to Cavalier-Smith et al (2017).
Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997 [Euglenobionta]
Subphylum Glycomonada Cavalier-Smith 2016
Class Diplonemea Cavalier-Smith 1993 emend. Simpson 1997 [Diplosonematea; Diplonemia Cavalier-Smith 1993]
Order Diplonemida Cavalier-Smith 1993 [Hemistasiida]
Family Hemistasiidae Cavalier-Smith 2016 [Entomosigmaceae]
Family Diplonemidae Cavalier-Smith 1993 [Rhynchopodaceae Skuja 1948 ex Cavalier-Smith 1993]
Class Kinetoplastea Honigberg 1963 emend. Margulis 1974 [Kinetoplastida Honigberg 1963; Kinetoplasta Honigberg 1963 stat. nov.]
Subclass Prokinetoplastina Vickerman 2004
Order Prokinetoplastida Vickerman 2004
Family Ichthyobodonidae Isaksen et al., 2007
Subclass Metakinetoplastina Vickerman 2004
Order Bodonida* Hollande 1952 em. Vickerman 1976, Krylov et al. 1980
Suborder Neobodonida Vickerman 2004
Family Rhynchomonadidae Cavalier-Smith 2016
Family Neobodonidae Cavalier-Smith 2016
Suborder Parabodonida Vickerman 2004
Family Parabodonidae Cavalier-Smith 2016
Family Cryptobiidae* Vickerman 2004
Suborder Eubodonida Vickerman 2004
Family Bodonidae Bütschli 1883 [Bodonaceae Lemmermann 1914; Bodoninae Bütschli 1883; Pleuromonadidae Kent 1880]
Order Trypanosomatida Kent 1880 stat. n. Hollande, 1952 emend. Vickerman 2004
Family Trypanosomatidae Doflein 1901
Subphylum Plicomonada Cavalier-Smith 2017
Infraphylum Postgaardia Cavalier-Smith 2016 stat. nov. Cavalier-Smith 2017
Class Postgaardea Cavalier-Smith 1998 s.s. [Symbiontida Yubuki et al., 2009]
Order Bihospitida Cavalier-Smith 2016
Family Bihospitidae Cavalier-Smith 2016
Order Postgaardida Cavalier-Smith 2003
Family Calkinsiidae Cavalier-Smith 2016
Family Postgaardidae Cavalier-Smith 2016
Infraphylum Euglenoida Bütschli 1884 emend. Senn 1900 stat. nov. Cavalier-Smith, 2017 [Euglenophyta; Euglenida Buetschli 1884; Euglenoidina Buetschli 1884]
Parvphylum Entosiphona Cavalier-Smith 2016 stat. nov. Cavalier-Smith 2017
Class Entosiphonea Cavalier-Smith 2016
Order Entosiphonida Cavalier-Smith 2016
Family Entosiphonidae Cavalier-Smith 2016
Parvphylum Dipilida Cavalier-Smith 2016 stat. nov. Cavalier-Smith 2017
Superclass Rigimonada* Cavalier-Smith 2016
Class Stavomonadea Cavalier-Smith 2016 [Petalomonadea Cavalier-Smith 1993; Petalomonadophyceae]
Subclass Heterostavia Cavalier-Smith 2016
Order Heterostavida Cavalier-Smith 2016
Family Serpenomonadidae Cavalier-Smith 2016
Subclass Homostavia Cavalier-Smith 2016
Order Decastavida Cavalier-Smith 2016a
Family Decastavidae Cavalier-Smith 2016a
Family Keelungiidae Cavalier-Smith 2016a
Order Petalomonadida Cavalier-Smith 1993 [Sphenomonadales Leedale 1967; Sphenomonadina Leedale 1967]
Family Sphenomonadidae Kent 1880
Family Petalomonadidae [Petalomonadaceae Buetschli 1884; Notosolenaceae Stokes 1888; Scytomonadaceae Ritter von Stein 1878]
Class Ploeotarea Cavalier-Smith 2016
Order Ploeotiida Cavalier-Smith 1993
Family Lentomonadidae Cavalier-Smith 2016
Family Ploeotiidae Cavalier-Smith 2016
Superclass Spirocuta Cavalier-Smith 2016
Class Peranemea Cavalier-Smith 1993 emend. Cavalier-Smith 2016
Subclass Acroglissia Cavalier-Smith 2016
Order Acroglissida Cavalier-Smith 2016
Family Teloproctidae Cavalier-Smith 2016a
Subclass Peranemia Cavalier-Smith 2016
Order Peranemida Bütschli 1884 stat. nov. Cavalier-Smith 1993
Family Peranematidae [Peranemataceae Dujardin 1841; Pseudoperanemataceae Christen 1962]
Subclass Anisonemia Cavalier-Smith 2016
Order Anisonemida Cavalier-Smith 2016 [Heteronematales Leedale 1967]
Family Anisonemidae Saville Kent, 1880 em. Cavalier-Smith 2016 [Heteronemidae Calkins 1926; Zygoselmidaceae Kent 1880]
Order Natomonadida Cavalier-Smith 2016
Suborder Metanemina Cavalier-Smith 2016
Family Neometanemidae Cavalier-Smith 2016
Suborder Rhabdomonadina Leedale 1967 emend. Cavalier-Smith 1993 [Astasida Ehrenberg 1831; Rhabdomonadia Cavalier-Smith 1993; Rhabdomonadophyceae; Rhabdomonadales]
Family Distigmidae Hollande, 1942
Family Astasiidae Saville Kent, 1884 [Astasiaceae Ehrenberg orth. mut. Senn 1900; Rhabdomonadaceae Fott 1971; Menoidiaceae Buetschli 1884; Menoidiidae Hollande, 1942]
Class Euglenophyceae Schoenichen 1925 emend. Marin & Melkonian 2003 [Euglenea Bütschli 1884 emend. Busse & Preisfeld 2002; Euglenoidea Bütschli 1884; Euglenida Bütschli 1884] (Photosynthetic clade)
Subclass Rapazia Cavalier-Smith 2016
Order Rapazida Cavalier-Smith 2016
Family Rapazidae Cavalier-Smith 2016
Subclass Euglenophycidae Busse and Preisfeld, 2003
Order Eutreptiida [Eutreptiales Leedale 1967 emend. Marin & Melkonian 2003; Eutreptiina Leedale 1967]
Family Eutreptiaceae [Eutreptiaceae Hollande 1942]
Order Euglenida Ritter von Stein, 1878 stat. n. Calkins, 1926 [Euglenales Engler 1898 emend. Marin & Melkonian 2003; Euglenina Buetschli 1884; Euglenomorphales Leedale 1967; Colaciales Smith 1938]
Family Euglenamorphidae Hollande, 1952 stat. n. Cavalier-Smith 2016 [Euglenomorphaceae; Hegneriaceae Brumpt & Lavier 1924]
Family Phacidae [Phacaceae Kim et al. 2010]
Family Euglenidae Bütschli 1884 [Euglenaceae Dujardin 1841 emend. Kim et al. 2010; Colaciaceae Smith 1933] (Mucilaginous clade)
Kostygov et al. (2021)
Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997
Class Kinetoplastea Honigberg, 1963 emend. Vickerman, 1976
Subclass Prokinetoplastia Vickerman, 2004
Order Prokinetoplastida Vickerman, 2004
Family Ichthyobodonidae Isaksen et al., 2007
Family Perkinselidae Kostygov, 2021
Subclass Metakinetoplastia Vickerman, 2004
Order Eubodonida Vickerman 2004
Family Bodonidae Bütschli, 1883
Order Neobodonida Vickerman, 2004
Family Allobodonidae Goodwin et al., 2018
Family Neobodonidae Cavalier-Smith, 2016
Family Rhynchomonadinae Cavalier-Smith, 2016
Order Parabodonida Vickerman, 2004
Family Cryptobiidae Poche, 1911 emend. Kostygov, 2021
Family Trypanoplasmatidae Hartmann and Chagas, 1910 emend. Kostygov, 2021
Order Trypanosomatida Kent, 1880
Family Trypanosomatidae Doflein, 1901
Class Diplonemea Cavalier-Smith, 1993
Order Diplonemida Cavalier-Smith, 1993
Family Diplonemidae Cavalier-Smith, 1993
Family Hemistasiidae Cavalier-Smith, 2016
Family Eupelagonemidae Okamoto and Keeling, 2019
Class Euglenida Bütschli, 1884 emend. Simpson, 1997
Clade Olkaspira Lax and Simpson, 2020
Clade Spirocuta Cavalier-Smith, 2016
Clade Euglenophyceae Schoenichen, 1925 emend. Marin and Melkonian, 2003
Order Euglenales Leedale, 1967 emend. Marin and Melkonian, 2003
Family Euglenaceae Dujardin, 1841 emend. Kim et al., 2010 [Euglenidae Dujardin, 1841]
Family Phacaceae Kim, Triemer and Shin 2010 [Phacidae Kim, Triemer and Shin 2010]
Order Eutreptiales Leedale, 1967 emend. Marin and Melkonian, 2003
Family Eutreptiaceae Hollande, 1942 [Eutreptiidae Hollande, 1942]
Order Rapazida Cavalier-Smith, 2016
Family Rapazidae Cavalier-Smith, 2016
Clade Anisonemia Cavalier-Smith, 2016
Order Anisonemida Cavalier-Smith, 2016
Family Anisonemidae Kent, 1880
Clade Aphagea Cavalier-Smith, 1993 emend. Busse and Preisfeld, 2002
Order Peranemida Cavalier-Smith, 1993
Clade Alistosa Lax et al., 2020
Order Petalomonadida Cavalier-Smith, 1993
Class Symbiontida Yubuki, Edgcomb, Bernhard and Leander, 2009
| Biology and health sciences | Other organisms | null |
9251 | https://en.wikipedia.org/wiki/Engineering | Engineering | Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. Modern engineering comprises many subfields which include designing and improving infrastructure, machinery, vehicles, electronics, materials, and energy systems.
The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering.
The term engineering is derived from the Latin ingenium, meaning "cleverness".
Definition
The American Engineers' Council for Professional Development (ECPD, the predecessor of ABET) has defined "engineering" as: "The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property."
History
Engineering has existed since ancient times, when humans devised inventions such as the wedge, lever, wheel and pulley.
The term engineering is derived from the word engineer, which itself dates back to the 14th century when an engine'er (literally, one who builds or operates a siege engine) referred to "a constructor of military engines". In this context, now obsolete, an "engine" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g., the U.S. Army Corps of Engineers.
The word "engine" itself is of even older origin, ultimately deriving from the Latin (), meaning "innate quality, especially mental power, hence a clever invention."
Later, as the design of civilian structures, such as bridges and buildings, matured as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the discipline of military engineering.
Ancient era
The pyramids in ancient Egypt, ziggurats of Mesopotamia, the Acropolis and Parthenon in Greece, the Roman aqueducts, Via Appia and Colosseum, Teotihuacán, and the Brihadeeswarar Temple of Thanjavur, among many others, stand as a testament to the ingenuity and skill of ancient civil and military engineers. Other monuments, no longer standing, such as the Hanging Gardens of Babylon and the Pharos of Alexandria, were important engineering achievements of their time and were considered among the Seven Wonders of the Ancient World.
The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia, and then in ancient Egyptian technology. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza.
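The benefit these machines provide can be expressed as ideal mechanical advantage; the short sketch below works through a lever and a ramp with assumed numbers, not historical measurements, and ignores friction.

```python
def lever_effort(load_n, load_arm_m, effort_arm_m):
    # Ideal lever: effort * effort_arm = load * load_arm.
    return load_n * load_arm_m / effort_arm_m

def ramp_effort(load_n, length_m, rise_m):
    # Ideal frictionless inclined plane: effort = load * rise / length.
    return load_n * rise_m / length_m

block = 25_000.0  # assumed weight of a stone block, in newtons
print(f"lever effort: {lever_effort(block, load_arm_m=0.5, effort_arm_m=4.0):,.0f} N")
print(f"ramp effort:  {ramp_effort(block, length_m=60.0, rise_m=10.0):,.0f} N")
```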
The earliest civil engineer known by name is Imhotep. As one of the officials of the Pharaoh, Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC.
Kush developed the Sakia during the 4th century BC, which relied on animal power instead of human energy. Hafirs were developed as a type of reservoir in Kush to store and contain water as well as boost irrigation. Sappers were employed to build causeways during military campaigns. Kushite ancestors built speos during the Bronze Age between 3700 and 3250 BC. Bloomeries and blast furnaces were also created during the 7th century BC in Kush.
Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, an early known mechanical analog computer, and the mechanical inventions of Archimedes, are examples of Greek mechanical engineering. Some of Archimedes' inventions, as well as the Antikythera mechanism, required sophisticated knowledge of differential gearing or epicyclic gearing, two key principles in machine theory that helped design the gear trains of the Industrial Revolution, and are widely used in fields such as robotics and automotive engineering.
Ancient Chinese, Greek, Roman and Hunnic armies employed military machines and inventions such as artillery which was developed by the Greeks around the 4th century BC, the trireme, the ballista and the catapult. In the Middle Ages, the trebuchet was developed.
Middle Ages
The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi al-Din Muhammad ibn Ma'ruf in Ottoman Egypt.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century, both of which were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny, which was a key development during the early Industrial Revolution in the 18th century.
The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, where they could be made to play different rhythms and different drum patterns.
Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clockmakers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology.
A standard reference for the state of mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining, and chemistry. De re metallica was the standard chemistry reference for the next 180 years.
Modern era
The science of classical mechanics, sometimes called Newtonian mechanics, formed the scientific basis of much of modern engineering. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering.
Canal building was an important engineering work during the early phases of the Industrial Revolution.
John Smeaton was the first self-proclaimed civil engineer and is often regarded as the "father" of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbors, and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Using a model water wheel, Smeaton conducted experiments for seven years, determining ways to increase efficiency. Smeaton introduced iron axles and gears to water wheels. Smeaton also made mechanical improvements to the Newcomen steam engine. Smeaton designed the third Eddystone Lighthouse (1755–59) where he pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. He is important in the history, rediscovery of, and development of modern cement, because he identified the compositional requirements needed to obtain "hydraulicity" in lime; work which led ultimately to the invention of Portland cement.
Applied science led to the development of the steam engine. The sequence of events began with the invention of the barometer and the measurement of atmospheric pressure by Evangelista Torricelli in 1643, the demonstration of the force of atmospheric pressure by Otto von Guericke using the Magdeburg hemispheres in 1656, and the laboratory experiments of Denis Papin, who built experimental model steam engines and demonstrated the use of a piston, work he published in 1707. Edward Somerset, 2nd Marquess of Worcester, published a book of 100 inventions containing a method for raising waters similar to a coffee percolator. Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordnance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend". It employed both vacuum and pressure. Iron merchant Thomas Newcomen, who built the first commercial piston steam engine in 1712, was not known to have any scientific training.
The application of steam-powered cast iron blowing cylinders for providing pressurized air for blast furnaces led to a large increase in iron production in the late 18th century. The higher furnace temperatures made possible with steam-powered blast allowed for the use of more lime in blast furnaces, which enabled the transition from charcoal to coke. These innovations lowered the cost of iron, making horse railways and iron bridges practical. The puddling process, patented by Henry Cort in 1784, produced large-scale quantities of wrought iron. Hot blast, patented by James Beaumont Neilson in 1828, greatly lowered the amount of fuel needed to smelt iron. With the development of the high pressure steam engine, the power to weight ratio of steam engines made practical steamboats and locomotives possible. New steel making processes, such as the Bessemer process and the open hearth furnace, ushered in an era of heavy engineering in the late 19th century.
One of the most famous engineers of the mid-19th century was Isambard Kingdom Brunel, who built railroads, dockyards and steamships.
The Industrial Revolution created a demand for machinery with metal parts, which led to the development of several machine tools. Boring cast iron cylinders with precision was not possible until John Wilkinson invented his boring machine, which is considered the first machine tool. Other machine tools included the screw cutting lathe, milling machine, turret lathe and the metal planer. Precision machining techniques were developed in the first half of the 19th century. These included the use of jigs to guide the machining tool over the work and fixtures to hold the work in the proper position. Machine tools and machining techniques capable of producing interchangeable parts led to large scale factory production by the late 19th century.
The United States Census of 1850 listed the occupation of "engineer" for the first time with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in 1875. In 1890, there were 6,000 engineers in civil, mining, mechanical and electrical.
There was no chair of applied mechanism and applied mechanics at Cambridge until 1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier.
The foundations of electrical engineering in the 1800s included the experiments of Alessandro Volta, Michael Faraday, Georg Ohm and others, and the inventions of the electric telegraph in 1816 and the electric motor in 1872. The theoretical work of James Maxwell (see: Maxwell's equations) and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of the vacuum tube and the transistor further accelerated the development of electronics to such an extent that electrical and electronics engineers currently outnumber their colleagues of any other engineering specialty.
Chemical engineering developed in the late nineteenth century. Industrial scale manufacturing demanded new materials and new processes and by 1880 the need for large scale production of chemicals was such that a new industry was created, dedicated to the development and large scale manufacturing of chemicals in new industrial plants. The role of the chemical engineer was the design of these chemical plants and processes.
Aeronautical engineering covers the design of aircraft, while aerospace engineering is a more modern term that expands the discipline's reach to include spacecraft design. Its origins can be traced back to the aviation pioneers around the start of the 20th century, although the work of Sir George Cayley has recently been dated as being from the last decade of the 18th century. Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering.
The first PhD in engineering (technically, applied science and engineering) awarded in the United States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded in science in the U.S.
Only a decade after the successful flights by the Wright brothers, there was extensive development of aeronautical engineering through the development of military aircraft used in World War I. Meanwhile, research aimed at providing the fundamental background science continued, combining theoretical physics with experiments.
Main branches of engineering
Engineering is a broad discipline that is often broken down into several sub-disciplines. Although an engineer will usually be trained in a specific discipline, he or she may become multi-disciplined through experience. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering.
Chemical engineering
Chemical engineering is the application of physics, chemistry, biology, and engineering principles in order to carry out chemical processes on a commercial scale, such as the manufacture of commodity chemicals, specialty chemicals, petroleum refining, microfabrication, fermentation, and biomolecule production.
Civil engineering
Civil engineering is the design and construction of public and private works, such as infrastructure (airports, roads, railways, water supply, and treatment etc.), bridges, tunnels, dams, and buildings. Civil engineering is traditionally broken into a number of sub-disciplines, including structural engineering, environmental engineering, and surveying. It is traditionally considered to be separate from military engineering.
Electrical engineering
Electrical engineering is the design, study, and manufacture of various electrical and electronic systems, such as broadcast engineering, electrical circuits, generators, motors, electromagnetic/electromechanical devices, electronic devices, electronic circuits, optical fibers, optoelectronic devices, computer systems, telecommunications, instrumentation, control systems, and electronics.
Mechanical engineering
Mechanical engineering is the design and manufacture of physical or mechanical systems, such as power and energy systems, aerospace/aircraft products, weapon systems, transportation products, engines, compressors, powertrains, kinematic chains, vacuum technology, vibration isolation equipment, manufacturing, robotics, turbines, audio equipment, and mechatronics.
Bioengineering
Bioengineering is the engineering of biological systems for a useful purpose. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs.
Interdisciplinary engineering
Interdisciplinary engineering draws from more than one of the principal branches of the practice. Historically, naval engineering and mining engineering were major branches. Other engineering fields are manufacturing engineering, acoustical engineering, corrosion engineering, instrumentation and control, aerospace, automotive, computer, electronic, information engineering, petroleum, environmental, systems, audio, software, architectural, agricultural, biosystems, biomedical, geological, textile, industrial, materials, and nuclear engineering. These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council.
New specialties sometimes combine with the traditional fields and form new branches – for example, Earth systems engineering and management involves a wide range of subject areas including engineering studies, environmental science, engineering ethics and philosophy of engineering.
Other branches of engineering
Aerospace engineering
Aerospace engineering covers the design, development, manufacture and operational behaviour of aircraft, satellites and rockets.
Marine engineering
Marine engineering covers the design, development, manufacture and operational behaviour of watercraft and stationary structures like oil platforms and ports.
Computer engineering
Computer engineering (CE) is a branch of engineering that integrates several fields of computer science and electronic engineering required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering.
Geological engineering
Geological engineering is associated with anything constructed on or within the Earth. This discipline applies geological sciences and engineering principles to direct or support the work of other disciplines such as civil engineering, environmental engineering, and mining engineering. Geological engineers are involved with impact studies for facilities and operations that affect surface and subsurface environments, such as rock excavations (e.g. tunnels), building foundation consolidation, slope and fill stabilization, landslide risk assessment, groundwater monitoring, groundwater remediation, mining excavations, and natural resource exploration.
Practice
One who practices engineering is called an engineer, and those licensed to do so may have more formal designations such as Professional Engineer, Chartered Engineer, Incorporated Engineer, Ingenieur, European Engineer, or Designated Engineering Representative.
Methodology
In the engineering design process, engineers apply mathematics and sciences such as physics to find novel solutions to problems or to improve existing solutions. Engineers need proficient knowledge of relevant sciences for their design projects. As a result, many engineers continue to learn new material throughout their careers.
If multiple solutions exist, engineers weigh each design choice based on its merit and choose the solution that best matches the requirements. The task of the engineer is to identify, understand, and interpret the constraints on a design in order to yield a successful result. It is generally insufficient to build a technically successful product; it must also meet further requirements.
Constraints may include available resources, physical, imaginative or technical limitations, flexibility for future modifications and additions, and other factors, such as requirements for cost, safety, marketability, productivity, and serviceability. By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated.
Problem solving
Engineers use their knowledge of science, mathematics, logic, economics, and appropriate experience or tacit knowledge to find suitable solutions to a particular problem. Creating an appropriate mathematical model of a problem often allows them to analyze it (sometimes definitively), and to test potential solutions.
More than one solution to a design problem usually exists so the different design choices have to be evaluated on their merits before the one judged most suitable is chosen. Genrich Altshuller, after gathering statistics on a large number of patents, suggested that compromises are at the heart of "low-level" engineering designs, while at a higher level the best design is one which eliminates the core contradiction causing the problem.
Engineers typically attempt to predict how well their designs will perform to their specifications prior to full-scale production. They use, among other things: prototypes, scale models, simulations, destructive tests, nondestructive tests, and stress tests. Testing ensures that products will perform as expected, but only insofar as the testing has been representative of use in service. For products, such as aircraft, that are used differently by different users, failures and unexpected shortcomings (and necessary design changes) can be expected throughout the operational life of the product.
Engineers take on the responsibility of producing designs that will perform as well as expected and, except those employed in specific areas of the arms industry, will not harm people. Engineers typically include a factor of safety in their designs to reduce the risk of unexpected failure.
The study of failed products is known as forensic engineering. It attempts to identify the cause of failure to allow a redesign of the product and so prevent a re-occurrence. Careful analysis is needed to establish the cause of failure of a product. The consequences of a failure may vary in severity from the minor cost of a machine breakdown to large loss of life in the case of accidents involving aircraft and large stationary structures like buildings and dams.
Computer use
As with all modern scientific and technological endeavors, computers and software play an increasingly important role. As well as the typical business application software there are a number of computer aided applications (computer-aided technologies) specifically for engineering. Computers can be used to generate models of fundamental physical processes, which can be solved using numerical methods.
One of the most widely used design tools in the profession is computer-aided design (CAD) software. It enables engineers to create 3D models, 2D drawings, and schematics of their designs. CAD together with digital mockup (DMU) and CAE software such as finite element method analysis or analytic element method allows engineers to create models of designs that can be analyzed without having to make expensive and time-consuming physical prototypes.
These tools allow engineers to check products and components for flaws, assess fit and assembly, study ergonomics, and analyze static and dynamic characteristics of systems such as stresses, temperatures, electromagnetic emissions, electrical currents and voltages, digital logic levels, fluid flows, and kinematics. Access to and distribution of all this information is generally organized with the use of product data management software.
There are also many tools to support specific engineering tasks such as computer-aided manufacturing (CAM) software to generate CNC machining instructions; manufacturing process management software for production engineering; EDA for printed circuit board (PCB) and circuit schematics for electronic engineers; MRO applications for maintenance management; and Architecture, engineering and construction (AEC) software for civil engineering.
In recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management (PLM).
Social context
The engineering profession engages in a range of activities, from large collaborations at the societal level to smaller individual projects. Almost all engineering projects are obligated to a funding source: a company, a set of investors, or a government. The types of engineering that are less constrained by such a funding source are pro bono engineering and open-design engineering.
Engineering has interconnections with society, culture and human behavior. Most products and constructions used by modern society are influenced by engineering. Engineering activities have an impact on the environment, society, economies, and public safety.
Engineering projects can be controversial. Examples from different engineering disciplines include: the development of nuclear weapons, the Three Gorges Dam, the design and use of sport utility vehicles and the extraction of oil. In response, some engineering companies have enacted serious corporate and social responsibility policies.
The attainment of many of the Millennium Development Goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development.
Overseas development and relief NGOs make considerable use of engineers to apply solutions in disaster and development scenarios. Some charitable organizations use engineering directly for development:
Engineers Without Borders
Engineers Against Poverty
Registered Engineers for Disaster Relief
Engineers for a Sustainable World
Engineering for Change
Engineering Ministries International
Engineering companies in more developed economies face challenges with regard to the number of engineers being trained compared with those retiring. This problem is prominent in the UK, where engineering has a poor image and low status. The shortfall can cause negative economic and political consequences, as well as ethical issues. It is widely agreed that the engineering profession faces an "image crisis". Together with the United States, the UK has the largest number of engineering companies compared with other European countries.
Code of ethics
Many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large; the National Society of Professional Engineers, for example, maintains one such code of ethics.
In Canada, engineers wear the Iron Ring as a symbol and reminder of the obligations and ethics associated with their profession.
Relationships with other disciplines
Science
There exists an overlap between the sciences and engineering practice; in engineering, one applies science. Both areas of endeavor rely on accurate observation of materials and phenomena. Both use mathematics and classification criteria to analyze and communicate observations.
Scientists may also have to complete engineering tasks, such as designing experimental apparatus or building prototypes. Conversely, in the process of developing technology, engineers sometimes find themselves exploring new phenomena, thus becoming, for the moment, scientists or more precisely "engineering scientists".
In the book What Engineers Know and How They Know It, Walter Vincenti asserts that engineering research has a character different from that of scientific research. First, it often deals with areas in which the basic physics or chemistry are well understood, but the problems themselves are too complex to solve in an exact manner.
There is a "real and important" difference between engineering and physics as similar to any science field has to do with technology. Physics is an exploratory science that seeks knowledge of principles while engineering uses knowledge for practical applications of principles. The former equates an understanding into a mathematical principle while the latter measures variables involved and creates technology. For technology, physics is an auxiliary and in a way technology is considered as applied physics. Though physics and engineering are interrelated, it does not mean that a physicist is trained to do an engineer's job. A physicist would typically require additional and relevant training. Physicists and engineers engage in different lines of work. But PhD physicists who specialize in sectors of engineering physics and applied physics are titled as Technology officer, R&D Engineers and System Engineers.
An example of this is the use of numerical approximations to the Navier–Stokes equations to describe aerodynamic flow over an aircraft, or the use of the finite element method to calculate the stresses in complex components. Second, engineering research employs many semi-empirical methods that are foreign to pure scientific research, one example being the method of parameter variation.
As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics:
Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use. That something can be a complex system, device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. Thus engineering sciences were born.
Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication as well as the environment, ethical and legal considerations such as patent infringement or liability in the case of failure of the solution.
Medicine and biology
The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology.
Modern medicine can replace several of the body's functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as, for example, brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems.
Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine.
Both fields provide solutions to real-world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense; therefore, experimentation and empirical knowledge are an integral part of both.
Medicine, in part, studies the function of the human body. The human body, as a biological machine, has many functions that can be modeled using engineering methods.
The heart, for example, functions much like a pump; the skeleton is like a linked structure with levers; the brain produces electrical signals, and so on. These similarities, as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering, which uses concepts developed in both disciplines.
Newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems.
Art
There are connections between engineering and art, for example, architecture, landscape architecture and industrial design (even to the extent that these disciplines may sometimes be included in a university's Faculty of Engineering).
The Art Institute of Chicago, for instance, held an exhibition about the art of NASA's aerospace design. Robert Maillart's bridge design is perceived by some to have been deliberately artistic. At the University of South Florida, an engineering professor, through a grant with the National Science Foundation, has developed a course that connects art and engineering.
Among famous historical figures, Leonardo da Vinci is a well-known Renaissance artist and engineer, and a prime example of the nexus between art and engineering.
Business
Business engineering deals with the relationship between professional engineering, IT systems, business administration and change management. Engineering management, or "management engineering", is a specialized field of management concerned with engineering practice or the engineering industry sector. The demand for management-focused engineers (or, from the opposite perspective, managers with an understanding of engineering) has resulted in the development of specialized engineering management degrees that develop the knowledge and skills needed for these roles. During an engineering management course, students will develop industrial engineering skills, knowledge, and expertise, alongside knowledge of business administration, management techniques, and strategic thinking. Engineers specializing in change management must have in-depth knowledge of the application of industrial and organizational psychology principles and methods.
Professional engineers often train as certified management consultants in the very specialized field of management consulting applied to engineering practice or the engineering sector. This work often deals with large-scale, complex business transformation or business process management initiatives in aerospace and defence, automotive, oil and gas, machinery, pharmaceutical, food and beverage, electrical and electronics, power distribution and generation, utilities, and transportation systems. This combination of technical engineering practice, management consulting practice, industry sector knowledge, and change management expertise enables professional engineers who are also qualified as management consultants to lead major business transformation initiatives, typically sponsored by C-level executives.
Other fields
In political science, the term engineering has been borrowed for the study of the subjects of social engineering and political engineering, which deal with forming political and social structures using engineering methodology coupled with political science principles. Marketing engineering and financial engineering have similarly borrowed the term.
| Technology | Technology | null |
9256 | https://en.wikipedia.org/wiki/Enigma%20machine | Enigma machine | The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.
The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plaintext is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.
The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message.
Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.
History
The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.
Several Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.
Breaking Enigma
Hans-Thilo Schmidt was a German who spied for the French, obtaining access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to Poland. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. Rejewski used the French-supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.
Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when that year the Germans added two more rotors, ten times as many bomby would have been needed to read the traffic.
On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).
In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.
Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.
During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort.
Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.
The Abwehr used different versions of Enigma machines. In November 1942, during Operation Torch, a machine was captured which had no plugboard and the three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, plus a plate on the left acted as a fourth rotor.
The Abwehr code had been broken on 8 December 1941 by Dilly Knox. Agents sent messages to the Abwehr in a simple code, which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled the Double-Cross System to operate. From October 1944, the German Abwehr used the Schlüsselgerät 41 in limited quantities.
Design
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors, arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter. These design features are the reason the Enigma machine was originally referred to as a rotor-based cipher machine during its conception in 1915.
Electrical pathway
An electrical pathway is a route for current to travel. By manipulating this phenomenon the Enigma machine was able to scramble messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.
Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.
The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.
Rotors
The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet, typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.
By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.
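In code, a stationary rotor is just a 26-letter permutation string, applied in one direction on the way in and in the reverse direction on the way back from the reflector. A minimal Python sketch follows; the wiring shown is the commonly published reconstruction of rotor I, an assumption for illustration rather than something given in this article:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# Commonly published wiring of rotor I (assumed for illustration).
ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def forward(letter: str) -> str:
    """Current entering the rotor from the keyboard side."""
    return ROTOR_I[ALPHABET.index(letter)]

def backward(letter: str) -> str:
    """Current returning through the rotor after the reflector."""
    return ALPHABET[ROTOR_I.index(letter)]

assert forward("A") == "E"      # a simple substitution...
assert backward("E") == "A"     # ...that is easily inverted
```

On its own this is a monoalphabetic substitution and would be trivially breakable; the strength comes from chaining several rotors and stepping them, as described below.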
Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector.
Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.
The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.
The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
Stepping
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.
Turnover
The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.
The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were R for rotor I, F for rotor II, W for rotor III, K for rotor IV, and A for rotor V; the two-notch naval rotors VI, VII and VIII turned the next wheel over at both A and N.
The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.
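The ratchet-and-pawl rules above, including the middle rotor's double-step, reduce to a few lines of logic. A minimal sketch; the turnover letters used in the driver are the commonly cited ones for rotors I, II and III (an assumption, not taken from this article):

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def step(positions, notches):
    """Advance a three-rotor machine by one key press.

    positions: window letters [left, middle, right]
    notches:   for each rotor, the window letter at which it carries
               its left-hand neighbour on the next key press
    """
    bump = lambda c: ALPHA[(ALPHA.index(c) + 1) % 26]
    left, mid, right = positions
    if mid == notches[1]:                     # middle rotor at its notch:
        left, mid = bump(left), bump(mid)     # double-step, carrying the left rotor
    elif right == notches[2]:                 # right rotor at its notch:
        mid = bump(mid)                       # carry the middle rotor
    right = bump(right)                       # right rotor steps every key press
    return [left, mid, right]

# The state repeats only after 26 x 25 x 26 = 16,900 key presses,
# matching the period discussed below.
state, count = ["A", "A", "A"], 0
while True:
    state = step(state, notches=["Q", "E", "V"])
    count += 1
    if state == ["A", "A", "A"]:
        break
print(count)  # 16900
```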
With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.
A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches differed for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D, it also allowed the internal wiring to be reconfigured.
Entry wheel
The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.
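The difference between the two entry-wheel conventions is easy to state as code. In the small sketch below, the first three commercial mappings (Q to A, W to B, E to C) come from the text above, while the full keyboard-order string is the commonly published reconstruction and should be treated as an assumption:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# Commercial Enigma: keys wired in keyboard order (full order assumed).
KEYBOARD_ORDER = "QWERTZUIOASDFGHJKPYXCVBNML"
commercial_etw = {key: ALPHABET[i] for i, key in enumerate(KEYBOARD_ORDER)}
# Military Enigma: straight-through alphabetical wiring.
military_etw = {c: c for c in ALPHABET}

assert commercial_etw["Q"] == "A" and commercial_etw["E"] == "C"
assert military_etw["Q"] == "Q"
```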
Reflector
With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.
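Both properties of the reflector, that it is self-reciprocal and that it never maps a letter to itself, can be checked mechanically. A minimal sketch using the commonly published wiring of reflector B (an assumption; the wiring is not given in this article):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# Commonly published wiring of Umkehrwalze B (assumed for illustration).
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

for i, letter in enumerate(ALPHABET):
    out = REFLECTOR_B[i]
    # Self-reciprocal: reflecting the output returns the input.
    assert REFLECTOR_B[ALPHABET.index(out)] == letter
    # No letter maps to itself - the flaw later exploited by codebreakers.
    assert out != letter
```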
In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels.
In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C, was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nicknamed Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.
Plugboard
The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.
A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.
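Functionally, the plugboard is a self-inverse swap applied once before the entry wheel and again after the signal returns. A minimal sketch:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def make_plugboard(pairs):
    """Build a plugboard mapping from steckered pairs such as ['EQ'].
    Letters without a plug map to themselves."""
    board = {c: c for c in ALPHABET}
    for a, b in pairs:
        board[a], board[b] = b, a
    return board

plugboard = make_plugboard(["EQ"])     # E and Q steckered, as in the text
assert plugboard["E"] == "Q" and plugboard["Q"] == "E"
assert plugboard["X"] == "X"           # unplugged letters pass straight through
```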
Accessories
Other features made various Enigma machines more secure or more convenient.
Schreibmax
Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.
Fernlesegerät
Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.
Uhr
In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.
Mathematical analysis
The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let P denote the plugboard transformation, U denote that of the reflector (so that U = U^-1), and L, M, R denote those of the left, middle and right rotors respectively. Then the encryption E can be expressed as

E = P R M L U L^-1 M^-1 R^-1 P^-1

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor R is rotated i positions, the transformation becomes ρ^i R ρ^-i, where ρ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as j and k rotations of M and L. The encryption transformation can then be described as

E = P (ρ^i R ρ^-i)(ρ^j M ρ^-j)(ρ^k L ρ^-k) U (ρ^k L^-1 ρ^-k)(ρ^j M^-1 ρ^-j)(ρ^i R^-1 ρ^-i) P^-1
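Representing each permutation as a 26-letter string makes this algebra directly checkable. The sketch below uses randomly generated placeholder wirings (not historical ones) and verifies that, because E is a conjugate of the fixed-point-free involution U, the whole transformation is self-reciprocal and never maps a letter to itself:

```python
import random

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def apply(perm, c):
    """Apply a permutation, written as a 26-letter string, to one letter."""
    return perm[ALPHA.index(c)]

def inverse(perm):
    return "".join(ALPHA[perm.index(c)] for c in ALPHA)

def rotor_at(wiring, i):
    """The conjugate rho^i R rho^-i: rotor wiring R turned i positions."""
    return "".join(
        ALPHA[(ALPHA.index(wiring[(ALPHA.index(c) + i) % 26]) - i) % 26]
        for c in ALPHA
    )

def involution():
    """A random pairing of all 26 letters: self-inverse, no fixed points."""
    letters = random.sample(ALPHA, 26)
    pairs = dict(zip(letters[::2], letters[1::2]))
    pairs.update({b: a for a, b in pairs.items()})
    return "".join(pairs[c] for c in ALPHA)

random.seed(1)
R = "".join(random.sample(ALPHA, 26))   # placeholder rotor wirings,
M = "".join(random.sample(ALPHA, 26))   # not historical ones
L = "".join(random.sample(ALPHA, 26))
U = involution()                        # reflector
P = involution()                        # plugboard

def E(c, i, j, k):
    """One key press with rotor displacements i, j, k."""
    Ri, Mj, Lk = rotor_at(R, i), rotor_at(M, j), rotor_at(L, k)
    for stage in (P, Ri, Mj, Lk, U,
                  inverse(Lk), inverse(Mj), inverse(Ri), inverse(P)):
        c = apply(stage, c)
    return c

for c in ALPHA:
    assert E(E(c, 3, 7, 11), 3, 7, 11) == c   # self-reciprocal
    assert E(c, 3, 7, 11) != c                # no letter encrypts to itself
```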
Combining three rotors from a set of five, with each of the three rotors set to one of 26 positions and the plugboard connecting ten pairs of letters, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion, or about 67 bits).
Choose 3 rotors from a set of 5 rotors = 5 x 4 x 3 = 60
26 positions per rotor = 26 x 26 x 26 = 17,576
Plugboard = 26! / ( 6! x 10! x 2^10) = 150,738,274,937,250
Multiply each of the above = 158,962,555,217,826,360,000
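The arithmetic in this list can be reproduced in a few lines of Python:

```python
from math import factorial, log2

rotor_orders = 5 * 4 * 3        # ordered choice of 3 rotors from 5
rotor_positions = 26 ** 3       # 17,576
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_positions * plugboard
print(f"{plugboard:,}")    # 150,738,274,937,250
print(f"{total:,}")        # 158,962,555,217,826,360,000
print(round(log2(total)))  # about 67 bits
```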
Operation
Basic operation
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.
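The whole enciphering loop just described can be condensed into a small simulator. The sketch below assumes the commonly published wirings and turnover letters of rotors I–III and reflector B, and omits the plugboard and ring settings for brevity; it illustrates the mechanism and is not a faithful reproduction of any particular service machine:

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# Commonly published wirings and turnover letters (assumed for illustration).
ROTORS = {
    "I":   ("EKMFLGDQVZNTOWYHXUSPAIBRCJ", "Q"),
    "II":  ("AJDKSIRUXBLHWTMCQGZNPYFVOE", "E"),
    "III": ("BDFHJLCPRTXVZNYEIWGAKMUSQO", "V"),
}
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def encipher(text, order=("I", "II", "III"), start="AAA"):
    pos = [ALPHA.index(c) for c in start]                 # left, middle, right
    wirings = [ROTORS[name][0] for name in order]
    notches = [ALPHA.index(ROTORS[name][1]) for name in order]
    out = []
    for c in text:
        # Stepping, including the middle rotor's double-step anomaly.
        if pos[1] == notches[1]:
            pos[0], pos[1] = (pos[0] + 1) % 26, (pos[1] + 1) % 26
        elif pos[2] == notches[2]:
            pos[1] = (pos[1] + 1) % 26
        pos[2] = (pos[2] + 1) % 26
        # Signal path: rotors right to left, reflector, rotors left to right.
        x = ALPHA.index(c)
        for w, p in zip(reversed(wirings), reversed(pos)):
            x = (ALPHA.index(w[(x + p) % 26]) - p) % 26
        x = ALPHA.index(REFLECTOR_B[x])
        for w, p in zip(wirings, pos):
            x = (w.index(ALPHA[(x + p) % 26]) - p) % 26
        out.append(ALPHA[x])
    return "".join(out)

ciphertext = encipher("ENIGMA")
assert encipher(ciphertext) == "ENIGMA"   # identical settings decrypt
```

Running the ciphertext back through a machine with identical settings recovers the plaintext, mirroring the two-operator procedure described above.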
Details
In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.
An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:
Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted.
Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring.
Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together.
In very late versions, the wiring of the reconfigurable reflector.
Starting position of the rotors (Grundstellung) – chosen by the operator, should be different for each message.
For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:
Wheel order: IV, II, V
Ring settings: 15, 23, 26
Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD
Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW
Indicator groups: lsa zbw vcj rxn
Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3 × 10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.
Indicator
Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth), would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.
One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message setting, EIN in this example, and then type the plaintext of the message.
At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message.
This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between first and fourth, second and fifth, and third and sixth character. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".
During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings.
This procedure was used by Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.
Additional details
The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X, and X was generally used as a full stop.
Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ.
The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.
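These conventions amount to a simple normalisation pass applied to the text before it is typed. A sketch of the substitutions just described, treating the Army space/full-stop rule and the Kriegsmarine letter rules together for brevity:

```python
def prepare(text: str) -> str:
    """Normalise a message using the substitutions described above."""
    text = text.upper()
    text = text.replace("CH", "Q")                    # 'ACHT' -> 'AQT'
    text = text.replace(",", "Y").replace("?", "UD")  # Kriegsmarine rules
    text = text.replace(".", "X").replace(" ", "X")   # Army space/full stop
    return "".join(c for c in text if c.isalpha())

assert prepare("Acht") == "AQT"
assert prepare("Richtung") == "RIQTUNG"
```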
The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters.
The Kriegsmarine used four-character groups and counted those groups.
Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.
Example enciphering process
The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as
LUSHQOXDMZNAIKFREPCYBWVGTJ
and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in
D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ
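In code, applying such a whole-machine substitution string is a single indexed lookup, as in this minimal sketch:

```python
# Position i of the mapping string holds the image of the i-th alphabet letter.
MAPPING = "LUSHQOXDMZNAIKFREPCYBWVGTJ"

def substitute(letter: str) -> str:
    return MAPPING[ord(letter) - ord("A")]

print(substitute("D"))  # H, matching the highlighted example above
```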
Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message" to
RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ
can be represented as
0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26
0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01
0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02
0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03
0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04
0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05
0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06
0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07
0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08
0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09
0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10
0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11
0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12
0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13
0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14
0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15
0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16
0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17
0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18
0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19
0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20
0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21
0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22
0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23
0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24
0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25
0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26
0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01
0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02
0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03
0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04
0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05
where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.
The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:
G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ
P EFMQAB(G)UINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
1 OFRJVM(A)ZHQNBXPYKCULGSWETDI N 03 VIII
2 (N)UKCHVSMDGTZQFYEWPIALOXRJB U 17 VI
3 XJMIYVCARQOWH(L)NDSUFKGBEPZT D 15 V
4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ C 25 β
R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q c
4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK β
3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY V
2 TZDIPNJESYCUHAVRMXGKB(F)QWOL VI
1 GLQYW(B)TIZDPSFKANJCUXREVMOH VIII
P E(F)MQABGUINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
F < KPTXIG(F)MESAUHYQBOVJCLRZDNW
Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.
This model has four rotors (lines 1 through 4), and the reflector (line R) also permutes letters.
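Because the trace prints every intermediate mapping as a plain substitution string, a single key press can be replayed by chaining lookups through those strings. The sketch below reuses the stage mappings of step 0004 exactly as printed; it freezes the machine in that one state, so rotor stepping between key presses is not modelled.

```python
# Replaying step 0004 of the trace by chaining per-component substitution
# strings; each string maps the i-th alphabet letter to the character at
# position i. The keyboard stage is omitted since it has no effect.

STAGES = [
    "EFMQABGUINKXCJORDPZTHWVLYS",  # plugboard (AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW)
    "OFRJVMAZHQNBXPYKCULGSWETDI",  # rotor VIII at position 03, inward
    "NUKCHVSMDGTZQFYEWPIALOXRJB",  # rotor VI at position 17, inward
    "XJMIYVCARQOWHLNDSUFKGBEPZT",  # rotor V at position 15, inward
    "QUNGALXEPKZYRDSOFTVCMBIHWJ",  # thin rotor beta at position 25, inward
    "RDOBJNTKVEHMLFCWZAXGYIPSUQ",  # reflector
    "EVTNHQDXWZJFUCPIAMORBSYGLK",  # beta, outward
    "HVGPWSUMDBTNCOKXJIQZRFLAEY",  # rotor V, outward
    "TZDIPNJESYCUHAVRMXGKBFQWOL",  # rotor VI, outward
    "GLQYWBTIZDPSFKANJCUXREVMOH",  # rotor VIII, outward
    "EFMQABGUINKXCJORDPZTHWVLYS",  # plugboard again
]

def press(letter: str) -> str:
    for stage in STAGES:
        letter = stage[ord(letter) - ord("A")]
    return letter

print(press("G"))  # F, the overall mapping shown at the final step
```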
Models
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.
An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.
Commercial Enigma
On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.
Enigma Handelsmaschine (1923)
Chiffriermaschinen AG began advertising a rotor machine, Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about 50 kg.
Schreibende Enigma (1924)
This was also a model with a typewriter. There were a number of problems associated with the printer, and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.
Glühlampenmaschine, Enigma A (1924)
The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version.
The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models: before the next keystroke, the operator had to press a button to advance the right rotor one step.
Enigma B (1924)
Enigma model B was introduced late in 1924 and was of a similar construction. While bearing the Enigma name, models A and B were quite unlike later versions: they differed not only in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine, since it produced its output on a lamp panel rather than on paper. This method of output was much more reliable and cost-effective; hence, this machine was one-eighth the price of its predecessor.
Enigma C (1926)
Model C was the third model of the so-called "glowlamp Enigmas" (after A and B) and it again lacked a typewriter.
Enigma D (1927)
The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. The Enigma D also pioneered the "QWERTZ" keyboard layout, which became standard in German typewriting and later computing, and closely resembles the American QWERTY layout.
"Navy Cipher D"
Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.
Enigma H (1929)
There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.
Enigma K
The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.
Military Enigma
The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber.
Funkschlüssel C
The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.
The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.
Enigma G (1928–1930)
By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G.
The Abwehr used the Enigma G. This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.
Wehrmacht Enigma I (1930–1938)
Enigma machine G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II.
The major difference between Enigma I (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.
Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured about 28 cm × 34 cm × 15 cm and weighed around 12 kg.
In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.
M3 (1934)
By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel M or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.
Two extra rotors (1938)
In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.
M4 (1942)
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.
Surviving machines
The effort to break the Enigma was not disclosed until 1973. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.
The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. The Deutsches Spionagemuseum in Berlin also showcases two military variants. Enigma machines are also exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.
In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. The Computer Museum of America in Roswell, Georgia, has a three-rotor model with two additional rotors. The machine is fully restored and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England.
In Canada, a Swiss Army issue Enigma-K is on permanent display at the Naval Museum of Alberta, inside the Military Museums of Calgary in Calgary, Alberta. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.
Occasionally, Enigma machines are sold at auction; prices in recent years have ranged from US$40,000 up to US$547,500, the latter realised at a 2017 sale. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.
A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors.
In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.
In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War because, although the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher themselves. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.
The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.
On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea), believed to be from a scuttled U-boat. The machine will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein.
An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023.
Derivatives
The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor cipher, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, cryptologist William Friedman designed the M-325 machine, starting in 1936, that is logically similar.
Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.
A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.
Simulators
| Technology | Computer security | null |
9257 | https://en.wikipedia.org/wiki/Enzyme | Enzyme | Enzymes () are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties.
Enzymes are known to catalyze more than 5,000 biochemical reaction types.
Other biocatalysts are catalytic RNA molecules, also called ribozymes. They are sometimes described as a type of enzyme rather than being like an enzyme, but even in the decades since ribozymes' discovery in 1980–1982, the word enzyme alone often means the protein type specifically (as is used in this article).
An enzyme's specificity comes from its unique three-dimensional structure.
Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.
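The scale of such rate enhancements follows from the roughly exponential dependence of rate on barrier height, rate ∝ exp(−ΔG‡/RT). A back-of-the-envelope sketch; the 30 kJ/mol reduction is an arbitrary illustrative figure, not a measured value:

```python
# Rough rate enhancement from lowering the activation barrier, assuming the
# exponential (Arrhenius-type) dependence rate ~ exp(-dG/RT).
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # room temperature, K

def speedup(barrier_reduction_j_per_mol: float) -> float:
    """Factor by which the rate grows when the barrier drops by this amount."""
    return math.exp(barrier_reduction_j_per_mol / (R * T))

print(f"{speedup(30_000):.1e}")  # ~1.8e+05 for a 30 kJ/mol reduction
```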
Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.
Etymology and history
By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified.
French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells."
In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from the Greek ἔνζυμον 'in leaven', to describe this process. The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms.
Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or to the type of reaction (e.g., DNA polymerase forms DNA polymers).
The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry.
The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail.
Classification and nomenclature
Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity.
Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes.
The International Union of Biochemistry and Molecular Biology has developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity.
The top-level classification is:
EC 1, Oxidoreductases: catalyze oxidation/reduction reactions
EC 2, Transferases: transfer a functional group (e.g. a methyl or phosphate group)
EC 3, Hydrolases: catalyze the hydrolysis of various bonds
EC 4, Lyases: cleave various bonds by means other than hydrolysis and oxidation
EC 5, Isomerases: catalyze isomerization changes within a single molecule
EC 6, Ligases: join two molecules with covalent bonds.
EC 7, Translocases: catalyze the movement of ions or molecules across membranes, or their separation within membranes.
These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1).
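Reading an EC number mechanically is then just a matter of walking this hierarchy. A small sketch using the seven top-level classes listed above; the comment on the example line paraphrases the hexokinase description and is not a complete registry of subclasses:

```python
# Decoding the top level of an EC number; deeper digits refine the class.

EC_TOP_LEVEL = {
    1: "Oxidoreductases", 2: "Transferases", 3: "Hydrolases", 4: "Lyases",
    5: "Isomerases", 6: "Ligases", 7: "Translocases",
}

def describe(ec_number: str) -> str:
    digits = [int(d) for d in ec_number.split(".")]
    return f"EC {ec_number}: {EC_TOP_LEVEL[digits[0]]}, refined by {digits[1:]}"

print(describe("2.7.1.1"))  # hexokinase: a transferase (2) adding a
                            # phosphate group (7) to an alcohol acceptor (1)
```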
Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam.
Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria, where they can replace endogenous genes of the same function, leading to non-homologous gene displacement.
Structure
Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate.
Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site.
In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity.
A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome which is a complex of protein and catalytic RNA components.
Mechanism
Substrate binding
Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to what substrates they bind and then the chemical reaction catalysed. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific.
Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.
Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function.
"Lock and key" model
To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve.
Induced fit model
In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined.
Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.
Catalysis
Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, Gibbs free energy)
By stabilizing the transition state:
Creating an environment with a charge distribution complementary to that of the transition state to lower its energy
By providing an alternative reaction pathway:
Temporarily reacting with the substrate, forming a covalent intermediate to provide a lower energy transition state
By destabilizing the substrate ground state:
Distorting bound substrate(s) into their transition state form to reduce the energy required to reach the transition state
By orienting the substrates into a productive arrangement to reduce the reaction entropy change (the contribution of this mechanism to catalysis is relatively small)
Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilize charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate.
Dynamics
Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory.
Substrate presentation
Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Or within the membrane, an enzyme can be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane.
Allosteric modulation
Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.
Cofactors
Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase).
An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions.
Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.
Coenzymes
Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include:
the hydride ion (H−), carried by NAD+ or NADP+
the phosphate group, carried by adenosine triphosphate
the acetyl group, carried by coenzyme A
formyl, methenyl or methyl groups, carried by folic acid and
the methyl group, carried by S-adenosylmethionine
Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH.
Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day.
Thermodynamics
As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants:
CO2 + H2O ⇌ H2CO3
The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally the enzyme-product complex (EP) dissociates to release the products.
Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions.
Kinetics
Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today.
Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen; plotting rate against substrate concentration gives a hyperbolic saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme.
Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second.
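These parameters combine in the Michaelis–Menten rate law, v = Vmax·[S] / (Km + [S]). A minimal sketch with made-up values, confirming that the rate at [S] = Km is exactly half of Vmax:

```python
# Michaelis-Menten rate law; the numbers below are illustrative only.

def mm_rate(s: float, vmax: float, km: float) -> float:
    """Reaction rate at substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

vmax, km = 100.0, 2.5  # hypothetical values, e.g. uM/s and uM
for s in (0.5, 2.5, 25.0, 250.0):
    print(f"[S] = {s:6.1f}  ->  v = {mm_rate(s, vmax, km):5.1f}")
# At [S] = km the rate is vmax/2 = 50.0; the curve saturates toward vmax.
```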
The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10⁸ to 10⁹ M⁻¹ s⁻¹. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect. Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10⁵ M⁻¹ s⁻¹ and 10 s⁻¹, respectively.
Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects.
Inhibition
Enzyme reaction rates can be decreased by various types of enzyme inhibitors.
Types of inhibition
Competitive
A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate. The similarity between the structures of dihydrofolate and this drug are shown in the accompanying figure. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site.
Non-competitive
A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration.
Uncompetitive
An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare.
Mixed
A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation.
Irreversible
An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner.
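For the reversible types above, the effect on the rate law can be captured by a single factor α = 1 + [I]/Ki applied to different terms of the Michaelis–Menten equation. A sketch with illustrative numbers; irreversible inhibition, which permanently destroys activity rather than reshaping the rate law, is not modelled:

```python
# How the classic reversible inhibition types reshape the Michaelis-Menten
# rate law; all parameter values here are illustrative.

def inhibited_rate(s, vmax, km, i=0.0, ki=1.0, kind="competitive"):
    alpha = 1.0 + i / ki
    if kind == "competitive":     # apparent Km rises, Vmax unchanged
        return vmax * s / (alpha * km + s)
    if kind == "noncompetitive":  # Vmax falls, Km unchanged
        return (vmax / alpha) * s / (km + s)
    if kind == "uncompetitive":   # both Vmax and apparent Km fall
        return (vmax / alpha) * s / (km / alpha + s)
    raise ValueError(f"unknown inhibition type: {kind}")

for kind in ("competitive", "noncompetitive", "uncompetitive"):
    v = inhibited_rate(s=10.0, vmax=100.0, km=2.5, i=5.0, ki=5.0, kind=kind)
    print(f"{kind:15s} v = {v:.1f}")
```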
Functions of inhibitors
In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism.
Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.
Factors affecting enzyme activity
As enzymes are made up of proteins, their actions are sensitive to changes in many physicochemical factors such as pH, temperature, substrate concentration, etc.
Each enzyme has a characteristic pH optimum: pepsin, for example, works best in the strongly acidic environment of the stomach (around pH 2), while trypsin is most active near pH 8.
Biological function
Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase.
An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber.
Metabolism
Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme.
Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would not progress through the same steps and could not be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions.
Control of activity
There are five main ways that enzyme activity is controlled in the cell.
Regulation
Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called the committed step), thus regulating the amount of end product made by the pathway. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. A negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cell. This helps with effective allocation of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms.
Post-translational modification
Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the small intestine, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it reaches the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme.
Quantity
Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression.
Subcellular distribution
Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments.
Organ specialization
In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production.
Involvement in disease
Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase.
One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired.
Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance.
Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light.
Evolution
Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities alongside their sequences. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not, so this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in the substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases.
Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below).
Industrial applications
Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature.
| Biology and health sciences | Biochemistry and molecular biology | null |
9259 | https://en.wikipedia.org/wiki/Equivalence%20relation | Equivalence relation | In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive. The equipollence relation between line segments in geometry is a common example of an equivalence relation. A simpler example is equality. Any number a is equal to itself (reflexive). If a = b, then b = a (symmetric). If a = b and b = c, then a = c (transitive).
Each equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class.
Notation
Various notations are used in the literature to denote that two elements a and b of a set are equivalent with respect to an equivalence relation R; the most common are "a ∼ b" and "a ≡ b", which are used when R is implicit, and variations of "a ∼R b", "a ≡R b", or "a R b" to specify R explicitly. Non-equivalence may be written "a ≁ b" or "a ≢ b".
Definition
A binary relation ∼ on a set X is said to be an equivalence relation, if and only if it is reflexive, symmetric and transitive. That is, for all a, b, and c in X:
a ∼ a (reflexivity).
a ∼ b if and only if b ∼ a (symmetry).
If a ∼ b and b ∼ c then a ∼ c (transitivity).
X together with the relation ∼ is called a setoid. The equivalence class of a under ∼, denoted [a], is defined as [a] = {x ∈ X : x ∼ a}.
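For a finite set, the three defining properties can be checked mechanically. The following minimal sketch (illustrative only; the function name and the encoding of the relation as a set of ordered pairs are assumptions, not from this article) tests them:

    # Sketch: test reflexivity, symmetry and transitivity of a relation R,
    # given as a set of ordered pairs over a finite set X (R is assumed
    # to be a subset of X x X).
    def is_equivalence(X, R):
        reflexive = all((a, a) in R for a in X)
        symmetric = all((b, a) in R for (a, b) in R)
        transitive = all((a, d) in R
                         for (a, b) in R for (c, d) in R if b == c)
        return reflexive and symmetric and transitive

    X = {"a", "b", "c"}
    R = {("a","a"), ("b","b"), ("c","c"), ("b","c"), ("c","b")}
    print(is_equivalence(X, R))   # True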
Alternative definition using relational algebra
In relational algebra, if R ⊆ X × Y and S ⊆ Y × Z are relations, then the composite relation SR ⊆ X × Z is defined so that x SR z if and only if there is a y ∈ Y such that x R y and y S z. This definition is a generalisation of the definition of functional composition. The defining properties of an equivalence relation R on a set X can then be reformulated as follows:
id ⊆ R (reflexivity). (Here, id denotes the identity function on X.)
R = R⁻¹ (symmetry).
RR ⊆ R (transitivity).
Examples
Simple example
On the set X = {a, b, c}, the relation R = {(a, a), (b, b), (c, c), (b, c), (c, b)} is an equivalence relation. The following sets are equivalence classes of this relation: [a] = {a}, [b] = [c] = {b, c}.
The set of all equivalence classes for R is {{a}, {b, c}}. This set is a partition of the set X. It is also called the quotient set of X by R.
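The quotient set can be computed directly from such a relation. A minimal sketch, assuming R is already known to be an equivalence relation given as a set of pairs:

    # Sketch: build the quotient set X/R as a set of frozensets.
    def quotient_set(X, R):
        return {frozenset(b for b in X if (a, b) in R) for a in X}

    print(quotient_set({"a", "b", "c"},
          {("a","a"), ("b","b"), ("c","c"), ("b","c"), ("c","b")}))
    # {frozenset({'a'}), frozenset({'b', 'c'})}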
Equivalence relations
The following relations are all equivalence relations:
"Is equal to" on the set of numbers. For example, is equal to
"Has the same birthday as" on the set of all people.
"Is similar to" on the set of all triangles.
"Is congruent to" on the set of all triangles.
Given a natural number n, "is congruent to, modulo n" on the integers.
Given a function f : X → Y, "has the same image under f as" on the elements of f's domain X. For example, 5 and −5 have the same image under f(x) = x², viz. 25.
"Has the same absolute value as" on the set of real numbers.
"Has the same cosine as" on the set of all angles.
Relations that are not equivalences
The relation "≥" between real numbers is reflexive and transitive, but not symmetric. For example, 7 ≥ 5 but not 5 ≥ 7.
The relation "has a common factor greater than 1 with" between natural numbers greater than 1, is reflexive and symmetric, but not transitive. For example, the natural numbers 2 and 6 have a common factor greater than 1, and 6 and 3 have a common factor greater than 1, but 2 and 3 do not have a common factor greater than 1.
The empty relation R (defined so that aRb is never true) on a set X is vacuously symmetric and transitive; however, it is not reflexive (unless X itself is empty).
The relation "is approximately equal to" between real numbers, even if more precisely defined, is not an equivalence relation, because although reflexive and symmetric, it is not transitive, since multiple small changes can accumulate to become a big change. However, if the approximation is defined asymptotically, for example by saying that two functions f and g are approximately equal near some point if the limit of f − g is 0 at that point, then this defines an equivalence relation.
Connections to other relations
A partial order is a relation that is reflexive, antisymmetric, and transitive.
Equality is both an equivalence relation and a partial order. Equality is also the only relation on a set that is reflexive, symmetric and antisymmetric. In algebraic expressions, equal variables may be substituted for one another, a facility that is not available for equivalence related variables. The equivalence classes of an equivalence relation can substitute for one another, but not individuals within a class.
A strict partial order is irreflexive, transitive, and asymmetric.
A partial equivalence relation is transitive and symmetric. Such a relation is reflexive if and only if it is total, that is, if for all a ∈ X there exists some b ∈ X such that a ∼ b. Therefore, an equivalence relation may be alternatively defined as a symmetric, transitive, and total relation.
A ternary equivalence relation is a ternary analogue to the usual (binary) equivalence relation.
A reflexive and symmetric relation is a dependency relation (if finite), and a tolerance relation if infinite.
A preorder is reflexive and transitive.
A congruence relation is an equivalence relation whose domain is also the underlying set for an algebraic structure, and which respects the additional structure. In general, congruence relations play the role of kernels of homomorphisms, and the quotient of a structure by a congruence relation can be formed. In many important cases, congruence relations have an alternative representation as substructures of the structure on which they are defined (e.g., the congruence relations on groups correspond to the normal subgroups).
Any equivalence relation is the negation of an apartness relation, though the converse statement only holds in classical mathematics (as opposed to constructive mathematics), since it is equivalent to the law of excluded middle.
Each relation that is both reflexive and left (or right) Euclidean is also an equivalence relation.
Well-definedness under an equivalence relation
If ∼ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ∼ y, P(x) is true if P(y) is true, then the property P is said to be well-defined or a class invariant under the relation ∼.
A frequent particular case occurs when f is a function from X to another set Y; if x₁ ∼ x₂ implies f(x₁) = f(x₂), then f is said to be a morphism for ∼, a class invariant under ∼, or simply invariant under ∼. This occurs, e.g., in the character theory of finite groups. The latter case with the function f can be expressed by a commutative triangle. | Mathematics | Discrete mathematics | null
9260 | https://en.wikipedia.org/wiki/Equivalence%20class | Equivalence class | In mathematics, when the elements of some set S have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set S into equivalence classes. These equivalence classes are constructed so that elements a and b belong to the same equivalence class if, and only if, they are equivalent.
Formally, given a set S and an equivalence relation ∼ on S, the equivalence class of an element a in S is denoted [a] or, equivalently, [a]∼ to emphasize its equivalence relation ∼. The definition of equivalence relations implies that the equivalence classes form a partition of S, meaning, that every element of the set belongs to exactly one equivalence class.
The set of the equivalence classes is sometimes called the quotient set or the quotient space of S by ∼, and is denoted by S / ∼.
When the set has some structure (such as a group operation or a topology) and the equivalence relation is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories.
Definition and notation
An equivalence relation on a set X is a binary relation ∼ on X satisfying the three properties:
a ∼ a for all a ∈ X (reflexivity),
a ∼ b implies b ∼ a for all a, b ∈ X (symmetry),
if a ∼ b and b ∼ c then a ∼ c for all a, b, c ∈ X (transitivity).
The equivalence class of an element a is defined as [a] = {x ∈ X : a ∼ x}.
The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets.
The set of all equivalence classes in X with respect to an equivalence relation R is denoted as X/R, and is called X modulo R (or the quotient set of X by R). The surjective map x ↦ [x] from X onto X/R, which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection.
Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from X/R to X. Since its composition with the canonical surjection is the identity of X/R, such an injection is called a section, when using the terminology of category theory.
Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says congruent—if m divides a − b; this is denoted a ≡ b (mod m). Each class contains a unique non-negative integer smaller than m, and these integers are the canonical representatives.
The use of representatives for representing classes makes it possible to avoid considering the classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m, and produces the remainder of the Euclidean division of a by m.
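As an illustration, Python's % operator already implements this representative function for congruence modulo m (the helper names below are assumptions for the sketch):

    # Sketch: canonical representatives for congruence modulo m.
    # Python's % returns the unique representative in {0, ..., m-1}.
    m = 5
    def rep(a):
        return a % m                  # canonical representative
    def congruent(a, b):
        return rep(a) == rep(b)       # compare classes via representatives

    print(rep(12), rep(-3))           # 2 2
    print(congruent(12, -3))          # True: 5 divides 12 - (-3) = 15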
Properties
Every element x of X is a member of the equivalence class [x]. Every two equivalence classes [x] and [y] are either equal or disjoint. Therefore, the set of all equivalence classes of X forms a partition of X: every element of X belongs to one and only one equivalence class. Conversely, every partition of X comes from an equivalence relation in this way, according to which x ∼ y if and only if x and y belong to the same set of the partition.
It follows from the properties in the previous section that if ∼ is an equivalence relation on a set X, and x and y are two elements of X, the following statements are equivalent: x ∼ y; [x] = [y]; [x] ∩ [y] ≠ ∅.
Examples
Let X be the set of all rectangles in a plane, and ∼ the equivalence relation "has the same area as", then for each positive real number A there will be an equivalence class of all the rectangles that have area A.
Consider the modulo 2 equivalence relation on the set of integers ℤ, such that x ∼ y if and only if their difference x − y is an even number. This relation gives rise to exactly two equivalence classes: one class consists of all even numbers, and the other class consists of all odd numbers. Using square brackets around one member of the class to denote an equivalence class under this relation, [7], [9], and [1] all represent the same element of ℤ/∼.
Let X be the set of ordered pairs of integers (a, b) with b non-zero, and define an equivalence relation ∼ on X such that (a, b) ∼ (c, d) if and only if ad = bc, then the equivalence class of the pair (a, b) can be identified with the rational number a/b, and this equivalence relation and its equivalence classes can be used to give a formal definition of the set of rational numbers. The same construction can be generalized to the field of fractions of any integral domain.
If X consists of all the lines in, say, the Euclidean plane, and L ∼ M means that L and M are parallel lines, then the set of lines that are parallel to each other form an equivalence class, as long as a line is considered parallel to itself. In this situation, each equivalence class determines a point at infinity.
Graphical representation
An undirected graph may be associated to any symmetric relation on a set X, where the vertices are the elements of X, and two vertices s and t are joined if and only if s ∼ t. Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques.
Invariants
If ∼ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ∼ y, P(x) is true if P(y) is true, then the property P is said to be an invariant of ∼, or well-defined under the relation ∼.
A frequent particular case occurs when f is a function from X to another set Y; if x₁ ∼ x₂ implies f(x₁) = f(x₂), then f is said to be class invariant under ∼, or simply invariant under ∼. This occurs, for example, in the character theory of finite groups. Some authors use "compatible with ∼" or just "respects ∼" instead of "invariant under ∼".
Any function f : X → Y is class invariant under the equivalence relation ∼ according to which x ∼ y if and only if f(x) = f(y). The equivalence class of x is the set of all elements in X which get mapped to f(x), that is, the class [x] is the inverse image of f(x). This equivalence relation is known as the kernel of f.
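A brief sketch of this construction (function and variable names are illustrative): grouping a finite set by the value of f yields exactly the classes of the kernel of f, i.e. the inverse images f⁻¹(y):

    # Sketch: the classes of the kernel of f are the inverse images
    # f^{-1}(y); here f(x) = x*x on the integers -3..3.
    from collections import defaultdict

    def kernel_classes(X, f):
        classes = defaultdict(set)
        for x in X:
            classes[f(x)].add(x)   # x, y share a class iff f(x) == f(y)
        return list(classes.values())

    print(kernel_classes(range(-3, 4), lambda x: x * x))
    # four classes: {-3, 3}, {-2, 2}, {-1, 1}, {0}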
More generally, a function may map equivalent arguments (under an equivalence relation ∼ on X) to equivalent values (under an equivalence relation ≈ on Y). Such a function is a morphism of sets equipped with an equivalence relation.
Quotient space in topology
In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes.
In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action.
The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation.
A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously.
Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above.
| Mathematics | Set theory | null |
9263 | https://en.wikipedia.org/wiki/Ether | Ether | In organic chemistry, ethers are a class of compounds that contain an ether group—a single oxygen atom bonded to two separate carbon atoms, each part of an organyl group (e.g., alkyl or aryl). They have the general formula R–O–R′, where R and R′ represent the organyl groups. Ethers can again be classified into two varieties: if the organyl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3–CH2–O–CH2–CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin.
Structure and bonding
Ethers feature bent linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp3.
Oxygen is more electronegative than carbon, thus the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however.
Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, dipropyl ether etc. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane.
Vinyl- and acetylenic ethers
Vinyl- and acetylenic ethers are far less common than alkyl or aryl ethers. Vinylethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare. Di-tert-butoxyacetylene is the most common example of this rare class of compounds.
Nomenclature
In the IUPAC Nomenclature system, ethers are named using the general formula "alkoxyalkane", for example CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more-complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a "methoxy-" group. The simpler alkyl radical is written in front, so CH3–O–CH2CH3 would be given as methoxy(CH3O)ethane(CH2CH3).
Trivial name
IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with none or few other functional groups) are a composite of the two substituents followed by "ether". For example, ethyl methyl ether (CH3OC2H5), diphenylether (C6H5OC6H5). As for other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers R–CH(–OR)–O–R) are another class of ethers with characteristic properties.
Polyethers
Polyethers are generally polymers containing ether linkages in their main chain. The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term "oxide" or other terms are used for high molar mass polymer when end-groups no longer affect polymer properties.
Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.
The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO).
Related compounds
Many classes of compounds with C–O–C linkages are not considered ethers: Esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), carboxylic acid anhydrides (RC(=O)–O–C(=O)R′).
There are compounds which, instead of C in the linkage, contain heavier group 14 chemical elements (e.g., Si, Ge, Sn, Pb). Such compounds are considered ethers as well. Examples of such ethers are silyl enol ethers (containing the C=C–O–Si linkage), disiloxane (the other name of this compound is disilyl ether, containing the Si–O–Si linkage) and stannoxanes (containing the Sn–O–Sn linkage).
Physical properties
Ethers have boiling points similar to those of the analogous alkanes. Simple ethers are generally colorless.
Reactions
The C-O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes.
Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below.
Cleavage
Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides:
ROCH3 + HBr → CH3Br + ROH
These reactions proceed via onium intermediates, i.e. [RO(H)CH3]+Br−.
Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base.
Despite these difficulties the chemical paper pulping processes are based on cleavage of ether bonds in the lignin.
Peroxide formation
When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxide in old samples of ethers may be detected by shaking them with freshly prepared solution of a ferrous sulfate followed by addition of KSCN. Appearance of blood red color indicates presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes.
Lewis bases
Ethers serve as Lewis bases. For instance, diethyl ether forms a complex with boron trifluoride, i.e. boron trifluoride diethyl etherate (BF3·O(C2H5)2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many metal halides.
Alpha-halogenation
This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers.
Synthesis
Dehydration of alcohols
The dehydration of alcohols affords ethers:
2 R–OH → R–O–R + H2O at high temperature
This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C). The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method. Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol:
R–CH2–CH2(OH) → R–CH=CH2 + H2O
The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers.
Electrophilic addition of alcohols to alkenes
Alcohols add to electrophilically activated alkenes. The method is atom-economical:
R2C=CR2 + R–OH → R2CH–C(–O–R)–R2
Acid catalysis is required for this reaction. Commercially important ethers prepared in this way are derived from isobutene or isoamylene, which protonate to give relatively stable carbocations. Using ethanol and methanol with these two alkenes, four fuel-grade ethers are produced: methyl tert-butyl ether (MTBE), methyl tert-amyl ether (TAME), ethyl tert-butyl ether (ETBE), and ethyl tert-amyl ether (TAEE).
Solid acid catalysts are typically used to promote this reaction.
Epoxides
Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes:
By the oxidation of alkenes with a peroxyacid such as m-CPBA.
By base-mediated intramolecular nucleophilic substitution of a halohydrin.
Many ethers, such as ethoxylates and crown ethers, are produced from epoxides.
Williamson and Ullmann ether syntheses
Nucleophilic displacement of alkyl halides by alkoxides
R–ONa + R′–X → R–O–R′ + NaX
This reaction, the Williamson ether synthesis, involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Although popular in textbooks, the method is usually impractical on scale because it cogenerates significant waste.
Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene, see Ullmann condensation below). Likewise, this method only gives the best yields for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups.
In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. Here the alkyl halide R–X does not react with the alcohol itself; instead, phenols can be used in place of the alcohol while the alkyl halide is retained. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion will then substitute the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism.
C6H5OH + OH− → C6H5–O− + H2O
C6H5–O− + R–X → C6H5OR
The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper.
Important ethers
| Physical sciences | Carbon–oxygen bond | null |
9264 | https://en.wikipedia.org/wiki/Ecliptic | Ecliptic | The ecliptic or ecliptic plane is the orbital plane of Earth around the Sun. It was a central concept in a number of ancient sciences, providing the framework for key measurements in astronomy, astrology and calendar-making.
From the perspective of an observer on Earth, the Sun's movement around the celestial sphere over the course of a year traces out a path along the ecliptic against the background of stars – specifically the Zodiac constellations. The planets of the solar system can also be seen along the ecliptic, because their orbital planes are very close to Earth's. The moon's orbital plane is also similar to Earth's; the ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it.
The ecliptic is an important reference plane and is the basis of the ecliptic coordinate system. Ancient scientists were able to calculate Earth's axial tilt by comparing the ecliptic plane to that of the equator.
Sun's apparent motion
The ecliptic is the apparent path of the Sun throughout the course of a year.
Because Earth takes one year to orbit the Sun, the apparent position of the Sun takes one year to make a complete circuit of the ecliptic. With slightly more than 365 days in one year, the Sun moves a little less than 1° eastward every day. This small difference in the Sun's position against the stars causes any particular spot on Earth's surface to catch up with (and stand directly north or south of) the Sun about four minutes later each day than it would if Earth did not orbit; a day on Earth is therefore 24 hours long rather than the approximately 23-hour 56-minute sidereal day. Again, this is a simplification, based on a hypothetical Earth that orbits at uniform speed around the Sun. The actual speed with which Earth orbits the Sun varies slightly during the year, so the speed with which the Sun seems to move along the ecliptic also varies. For example, the Sun is north of the celestial equator for about 185 days of each year, and south of it for about 180 days. The variation of orbital speed accounts for part of the equation of time.
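The figures above can be checked with a one-line computation (a rough back-of-envelope sketch based on the uniform-speed simplification just described):

    # Rough check: the Sun's eastward drift along the ecliptic, and the
    # resulting ~4-minute daily delay of solar noon relative to the stars.
    print(360 / 365.25)       # ~0.9856 degrees of drift per day
    print(24 * 60 / 365.25)   # ~3.94 minutes of delay per day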
Because of the movement of Earth around the Earth–Moon center of mass, the apparent path of the Sun wobbles slightly, with a period of about one month. Because of further perturbations by the other planets of the Solar System, the Earth–Moon barycenter wobbles slightly around a mean position in a complex fashion.
Relationship to the celestial equator
Because Earth's rotational axis is not perpendicular to its orbital plane, Earth's equatorial plane is not coplanar with the ecliptic plane, but is inclined to it by an angle of about 23.4°, which is known as the obliquity of the ecliptic. If the equator is projected outward to the celestial sphere, forming the celestial equator, it crosses the ecliptic at two points known as the equinoxes. The Sun, in its apparent motion along the ecliptic, crosses the celestial equator at these points, one from south to north, the other from north to south. The crossing from south to north is known as the March equinox, also known as the first point of Aries and the ascending node of the ecliptic on the celestial equator. The crossing from north to south is the September equinox or descending node.
The orientation of Earth's axis and equator are not fixed in space, but rotate about the poles of the ecliptic with a period of about 26,000 years, a process known as lunisolar precession, as it is due mostly to the gravitational effect of the Moon and Sun on Earth's equatorial bulge. Likewise, the ecliptic itself is not fixed. The gravitational perturbations of the other bodies of the Solar System cause a much smaller motion of the plane of Earth's orbit, and hence of the ecliptic, known as planetary precession. The combined action of these two motions is called general precession, and changes the position of the equinoxes by about 50 arc seconds (about 0.014°) per year.
Once again, this is a simplification. Periodic motions of the Moon and apparent periodic motions of the Sun (actually of Earth in its orbit) cause short-term small-amplitude periodic oscillations of Earth's axis, and hence the celestial equator, known as nutation.
This adds a periodic component to the position of the equinoxes; the positions of the celestial equator and (March) equinox with fully updated precession and nutation are called the true equator and equinox; the positions without nutation are the mean equator and equinox.
Obliquity of the ecliptic
Obliquity of the ecliptic is the term used by astronomers for the inclination of Earth's equator with respect to the ecliptic, or of Earth's rotation axis to a perpendicular to the ecliptic. It is about 23.4° and is currently decreasing 0.013 degrees (47 arcseconds) per hundred years because of planetary perturbations.
The angular value of the obliquity is found by observation of the motions of Earth and other planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived.
Until 1983 the obliquity for any date was calculated from work of Newcomb, who analyzed positions of the planets until about 1895:
where ε is the obliquity and T is tropical centuries from B1900.0 to the date in question.
From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated:
where T hereafter is Julian centuries from J2000.0.
JPL's fundamental ephemerides have been continually updated. The Astronomical Almanac for 2010 specifies:
These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. J. Laskar computed an expression, to order T¹⁰, good to 0.04″ per 1000 years over 10,000 years.
All of these expressions are for the mean obliquity, that is, without the nutation of the equator included. The true or instantaneous obliquity includes the nutation.
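For illustration, a polynomial of this kind can be evaluated directly. The sketch below uses the widely quoted IAU 1980 coefficients for the mean obliquity; those coefficients are an assumption supplied here for the example, not reproduced from this article:

    # Illustrative sketch: a mean-obliquity polynomial of the kind
    # described above. Coefficients are the commonly cited IAU 1980
    # values (assumed for this example).
    def mean_obliquity_deg(T):            # T = Julian centuries from J2000.0
        arcsec = 21.448 - 46.8150*T - 0.00059*T**2 + 0.001813*T**3
        return 23 + 26/60 + arcsec/3600   # 23 deg 26 min + residual arcsec

    print(mean_obliquity_deg(0.0))        # ~23.4393 degrees at J2000.0
    print(mean_obliquity_deg(1.0))        # ~0.013 degrees less a century later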
Plane of the Solar System
Most of the major bodies of the Solar System orbit the Sun in nearly the same plane. This is likely due to the way in which the Solar System formed from a protoplanetary disk. Probably the closest current representation of the disk is known as the invariable plane of the Solar System. Earth's orbit, and hence, the ecliptic, is inclined a little more than 1° to the invariable plane, Jupiter's orbit is within a little more than ½° of it, and the other major planets are all within about 6°. Because of this, most Solar System bodies appear very close to the ecliptic in the sky.
The invariable plane is defined by the angular momentum of the entire Solar System, essentially the vector sum of all of the orbital and rotational angular momenta of all the bodies of the system; more than 60% of the total comes from the orbit of Jupiter. That sum requires precise knowledge of every object in the system, making it a somewhat uncertain value. Because of the uncertainty regarding the exact location of the invariable plane, and because the ecliptic is well defined by the apparent motion of the Sun, the ecliptic is used as the reference plane of the Solar System both for precision and convenience. The only drawback of using the ecliptic instead of the invariable plane is that over geologic time scales, it will move against fixed reference points in the sky's distant background.
Celestial reference plane
The ecliptic forms one of the two fundamental planes used as reference for positions on the celestial sphere, the other being the celestial equator. Perpendicular to the ecliptic are the ecliptic poles, the north ecliptic pole being the pole north of the equator. Of the two fundamental planes, the ecliptic is closer to unmoving against the background stars, its motion due to planetary precession being roughly 1/100 that of the celestial equator.
Spherical coordinates, known as ecliptic longitude and latitude or celestial longitude and latitude, are used to specify positions of bodies on the celestial sphere with respect to the ecliptic. Longitude is measured positively eastward 0° to 360° along the ecliptic from the March equinox, the same direction in which the Sun appears to move. Latitude is measured perpendicular to the ecliptic, to +90° northward or −90° southward to the poles of the ecliptic, the ecliptic itself being 0° latitude. For a complete spherical position, a distance parameter is also necessary. Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near Earth, Earth radii or kilometers are used. A corresponding right-handed rectangular coordinate system is also used occasionally; the x-axis is directed toward the March equinox, the y-axis 90° to the east, and the z-axis toward the north ecliptic pole; the astronomical unit is the unit of measure. Symbols for ecliptic coordinates are somewhat standardized; see the table.
Ecliptic coordinates are convenient for specifying positions of Solar System objects, as most of the planets' orbits have small inclinations to the ecliptic, and therefore always appear relatively close to it on the sky. Because Earth's orbit, and hence the ecliptic, moves very little, it is a relatively fixed reference with respect to the stars.
Because of the precessional motion of the equinox, the ecliptic coordinates of objects on the celestial sphere are continuously changing. Specifying a position in ecliptic coordinates requires specifying a particular equinox, that is, the equinox of a particular date, known as an epoch; the coordinates are referred to the direction of the equinox at that date. For instance, the Astronomical Almanac lists the heliocentric position of Mars at 0h Terrestrial Time, 4 January 2010 as: longitude 118°09′15.8″, latitude +1°43′16.7″, true heliocentric distance 1.6302454 AU, mean equinox and ecliptic of date. This specifies the mean equinox of 4 January 2010 0h TT as above, without the addition of nutation.
Eclipses
Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction (new) or opposition (full). The ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it.
Equinoxes and solstices
The exact instants of equinoxes and solstices are the times when the apparent ecliptic longitude (including the effects of aberration and nutation) of the Sun is 0°, 90°, 180°, and 270°. Because of perturbations of Earth's orbit and anomalies of the calendar, the dates of these are not fixed.
In the constellations
The ecliptic currently passes through the following thirteen constellations:
There are twelve constellations that are not on the ecliptic, but are close enough that the Moon and planets can occasionally appear in them.
Cetus
Pegasus
Aquila
Scutum
Serpens
Hydra
Corvus
Crater
Sextans
Canis Minor
Auriga
Orion
Astrology
The ecliptic forms the center of the zodiac, a celestial belt about 20° wide in latitude through which the Sun, Moon, and planets always appear to move.
Traditionally, this region is divided into 12 signs of 30° longitude, each of which approximates the Sun's motion in one month. In ancient times, the signs corresponded roughly to 12 of the constellations that straddle the ecliptic.
These signs are sometimes still used in modern terminology. The "First Point of Aries" was named when the March equinox Sun was actually in the constellation Aries; it has since moved into Pisces because of precession of the equinoxes.
| Physical sciences | Celestial sphere | null |
9277 | https://en.wikipedia.org/wiki/Ellipse | Ellipse | In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from e = 0 (the limiting case of a circle) to e = 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola).
An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution.
Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is:
x²/a² + y²/b² = 1.
Assuming a ≥ b, the foci are (±c, 0) for c = √(a² − b²). The standard parametric equation is:
(x, y) = (a cos t, b sin t) for 0 ≤ t ≤ 2π.
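The defining constant-sum property can be verified numerically from the parametric form; a minimal sketch (the values of a, b and the sample parameters t are arbitrary):

    # Sketch: points on the parametric ellipse have a constant sum of
    # distances 2a to the foci (+-c, 0).
    import math

    a, b = 5.0, 3.0
    c = math.sqrt(a*a - b*b)
    for t in (0.0, 0.7, 1.9, 3.1):
        x, y = a*math.cos(t), b*math.sin(t)
        d = math.hypot(x - c, y) + math.hypot(x + c, y)
        print(round(d, 12))    # 10.0 = 2a each time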
Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.
An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity: e = c/a = √(1 − b²/a²).
Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics.
The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics.
Definition as locus of points
An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: given two fixed points F₁, F₂, called the foci, and a distance 2a greater than the distance between the foci, the ellipse is the set of points P such that the sum of the distances |PF₁| + |PF₂| equals 2a.
The midpoint C of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices V₁, V₂, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is the eccentricity.
The case F₁ = F₂ yields a circle and is included as a special type of ellipse.
The equation |PF₁| + |PF₂| = 2a can be viewed in a different way (see figure):
the circle with center F₂ and radius 2a is called the circular directrix (related to focus F₂) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below.
Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.
In Cartesian coordinates
Standard equation
The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the x-axis is the major axis, and: the foci are the points F₁ = (c, 0), F₂ = (−c, 0); the vertices are V₁ = (a, 0), V₂ = (−a, 0).
For an arbitrary point (x, y), the distance to the focus (c, 0) is √((x − c)² + y²) and to the other focus √((x + c)² + y²). Hence the point is on the ellipse whenever:
√((x − c)² + y²) + √((x + c)² + y²) = 2a.
Removing the radicals by suitable squarings and using b² = a² − c² (see diagram) produces the standard equation of the ellipse:
x²/a² + y²/b² = 1,
or, solved for y:
y = ±(b/a)√(a² − x²).
The width and height parameters a, b are called the semi-major and semi-minor axes. The top and bottom points (0, b), (0, −b) are the co-vertices. The distances from a point (x, y) on the ellipse to the left and right foci are a + ex and a − ex.
It follows from the equation that the ellipse is symmetric with respect to the coordinate axes and hence with respect to the origin.
Parameters
Principal axes
Throughout this article, the semi-major and semi-minor axes are denoted a and b, respectively, i.e. a ≥ b > 0.
In principle, the canonical ellipse equation may have a < b (and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable names x and y and the parameter names a and b.
Linear eccentricity
This is the distance from the center to a focus: c = √(a² − b²).
Eccentricity
The eccentricity can be expressed as:
e = c/a = √(1 − (b/a)²),
assuming a > b. An ellipse with equal axes (a = b) has zero eccentricity, and is a circle.
Semi-latus rectum
The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum ℓ. A calculation shows: ℓ = b²/a = a(1 − e²).
The semi-latus rectum is equal to the radius of curvature at the vertices (see section curvature).
Tangent
An arbitrary line intersects an ellipse at 0, 1, or 2 points, respectively called an exterior line, tangent and secant. Through any point of an ellipse there is a unique tangent. The tangent at a point (x₁, y₁) of the ellipse has the coordinate equation:
(x₁/a²) x + (y₁/b²) y = 1.
A vector parametric equation of the tangent is:
(x, y) = (x₁, y₁) + s (−y₁ a², x₁ b²), s ∈ ℝ.
Proof:
Let (x₁, y₁) be a point on an ellipse and (x, y) = (x₁ + su, y₁ + sv) be the equation of any line g containing (x₁, y₁). Inserting the line's equation into the ellipse equation and respecting x₁²/a² + y₁²/b² = 1 yields:
2s(x₁u/a² + y₁v/b²) + s²(u²/a² + v²/b²) = 0.
There are then cases:
(1) x₁u/a² + y₁v/b² = 0. Then line g and the ellipse have only point (x₁, y₁) in common, and g is a tangent. The tangent direction has perpendicular vector (x₁/a², y₁/b²), so the tangent line has equation (x₁/a²)x + (y₁/b²)y = k for some k. Because (x₁, y₁) is on the tangent and the ellipse, one obtains k = 1.
(2) x₁u/a² + y₁v/b² ≠ 0. Then line g has a second point in common with the ellipse, and is a secant.
Using (1) one finds that (−y₁a², x₁b²) is a tangent vector at point (x₁, y₁), which proves the vector equation.
If (x₁, y₁) and (u, v) are two points of the ellipse such that x₁u/a² + y₁v/b² = 0, then the points lie on two conjugate diameters (see below). (If a = b, the ellipse is a circle and "conjugate" means "orthogonal".)
Shifted ellipse
If the standard ellipse is shifted to have center (x∘, y∘), its equation is
(x − x∘)²/a² + (y − y∘)²/b² = 1.
The axes are still parallel to the x- and y-axes.
General ellipse
In analytic geometry, the ellipse is defined as a quadric: the set of points (x, y) of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation
Ax² + Bxy + Cy² + Dx + Ey + F = 0,
provided B² − 4AC < 0.
To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant of the matrix with rows (A, B/2, D/2), (B/2, C, E/2), (D/2, E/2, F).
Then the ellipse is a non-degenerate real ellipse if and only if C∆ < 0. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse.
The general equation's coefficients can be obtained from known semi-major axis a, semi-minor axis b, center coordinates (x∘, y∘), and rotation angle θ (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:
A = a² sin²θ + b² cos²θ
B = 2(b² − a²) sin θ cos θ
C = a² cos²θ + b² sin²θ
D = −2A x∘ − B y∘
E = −B x∘ − 2C y∘
F = A x∘² + B x∘ y∘ + C y∘² − a²b².
These expressions can be derived from the canonical equation X²/a² + Y²/b² = 1
by a Euclidean transformation of the coordinates (X, Y):
X = (x − x∘) cos θ + (y − y∘) sin θ, Y = −(x − x∘) sin θ + (y − y∘) cos θ.
Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:
where atan2(y, x) is the 2-argument arctangent function.
Parametric representation
Standard parametric representation
Using trigonometric functions, a parametric representation of the standard ellipse is:
(x, y) = (a cos t, b sin t), 0 ≤ t < 2π.
The parameter t (called the eccentric anomaly in astronomy) is not the angle of (a cos t, b sin t) with the x-axis, but has a geometric meaning due to Philippe de La Hire (see below).
Rational representation
With the substitution u = tan(t/2) and trigonometric formulae one obtains
cos t = (1 − u²)/(1 + u²), sin t = 2u/(1 + u²),
and the rational parametric equation of an ellipse
x(u) = a(1 − u²)/(1 + u²), y(u) = 2bu/(1 + u²), −∞ < u < ∞,
which covers any point of the ellipse except the left vertex (−a, 0).
For u ∈ [0, 1] this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasing u. The left vertex is the limit lim u→±∞ (x(u), y(u)) = (−a, 0).
Alternately, if the parameter [u : v] is considered to be a point on the real projective line P(ℝ), then the corresponding rational parametrization is
[u : v] ↦ (a(v² − u²)/(v² + u²), 2buv/(v² + u²)).
Then [1 : 0] ↦ (−a, 0).
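A quick numerical check that the rational parametrization stays on the ellipse (the sample values of a, b and u are arbitrary):

    # Sketch: the rational parametrization satisfies
    # x^2/a^2 + y^2/b^2 = 1 for every u.
    a, b = 4.0, 2.0
    for u in (-3.0, -0.5, 0.0, 1.0, 10.0):
        x = a*(1 - u*u)/(1 + u*u)
        y = 2*b*u/(1 + u*u)
        print(round(x*x/(a*a) + y*y/(b*b), 12))   # 1.0 each time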
Rational representations of conic sections are commonly used in computer-aided design (see Bézier curve).
Tangent slope as parameter
A parametric representation, which uses the slope m of the tangent at a point of the ellipse,
can be obtained from the derivative of the standard representation (a cos t, b sin t):
(x′(t), y′(t)) = (−a sin t, b cos t), hence m = −(b/a) cot t.
With help of trigonometric formulae one obtains:
cos t = ∓ ma/√(m²a² + b²), sin t = ±b/√(m²a² + b²).
Replacing cos t and sin t of the standard representation yields:
c±(m) = ±(−ma²/√(m²a² + b²), b²/√(m²a² + b²)).
Here m is the slope of the tangent at the corresponding ellipse point, c₊ is the upper and c₋ the lower half of the ellipse. The vertices (±a, 0), having vertical tangents, are not covered by the representation.
The equation of the tangent at point c± has the form y = mx + n. The still unknown n can be determined by inserting the coordinates of the corresponding ellipse point c±:
y = mx ± √(m²a² + b²).
This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae.
General ellipse
Another definition of an ellipse uses affine transformations:
Any ellipse is an affine image of the unit circle with equation x² + y² = 1.
Parametric representation
An affine transformation of the Euclidean plane has the form x ↦ f₀ + Ax, where A is a regular matrix (with non-zero determinant) and f₀ is an arbitrary vector. If f₁, f₂ are the column vectors of the matrix A, the unit circle (cos t, sin t), 0 ≤ t ≤ 2π, is mapped onto the ellipse:
x = p(t) = f₀ + f₁ cos t + f₂ sin t.
Here f₀ is the center and f₁, f₂ are the directions of two conjugate diameters, in general not perpendicular.
Vertices
The four vertices of the ellipse are p(t₀), p(t₀ ± π/2), p(t₀ + π), for a parameter t = t₀ defined by:
cot(2t₀) = (f₁² − f₂²)/(2 f₁·f₂).
(If f₁·f₂ = 0, then t₀ = 0.) This is derived as follows. The tangent vector at point p(t) is:
p′(t) = −f₁ sin t + f₂ cos t.
At a vertex parameter t = t₀, the tangent is perpendicular to the major/minor axes, so:
0 = p′(t)·(p(t) − f₀) = (−f₁ sin t + f₂ cos t)·(f₁ cos t + f₂ sin t).
Expanding and applying the identities cos²t − sin²t = cos 2t, 2 sin t cos t = sin 2t gives the equation for t₀.
Area
From Apollonios theorem (see below) one obtains:
The area of an ellipse x = f₀ + f₁ cos t + f₂ sin t is A = π |det(f₁, f₂)|. In particular, the standard ellipse x²/a² + y²/b² = 1 has area πab.
Semiaxes
With the abbreviations M = f₁² + f₂², N = |det(f₁, f₂)|,
the statements of Apollonios's theorem can be written as: a² + b² = M, ab = N.
Solving this nonlinear system for a, b yields the semiaxes:
a = ½(√(M + 2N) + √(M − 2N)),
b = ½(√(M + 2N) − √(M − 2N)).
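A small sketch recovering the semiaxes from two conjugate half-diameters via M and N (the vectors chosen are arbitrary test data):

    # Sketch: semiaxes from conjugate half-diameters f1, f2.
    import math

    f1, f2 = (3.0, 1.0), (-0.5, 2.0)              # arbitrary test vectors
    M = f1[0]**2 + f1[1]**2 + f2[0]**2 + f2[1]**2
    N = abs(f1[0]*f2[1] - f1[1]*f2[0])            # |det(f1, f2)|
    a = 0.5*(math.sqrt(M + 2*N) + math.sqrt(M - 2*N))
    b = 0.5*(math.sqrt(M + 2*N) - math.sqrt(M - 2*N))
    print(a, b)                                    # semi-major, semi-minor
    print(round(a*a + b*b - M, 12), round(a*b - N, 12))   # both 0.0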
Implicit representation
Solving the parametric representation for cos t, sin t by Cramer's rule and using cos²t + sin²t − 1 = 0, one obtains the implicit representation
det(x − f₀, f₂)² + det(f₁, x − f₀)² − det(f₁, f₂)² = 0.
Conversely: If the equation
x² + 2cxy + dy² − e = 0
with d − c² > 0
of an ellipse centered at the origin is given, then the two vectors
f₁ = (√e, 0), f₂ = (√e/√(d − c²))(−c, 1)
point to two conjugate points and the tools developed above are applicable.
Example: For the ellipse with equation x² + 2xy + 3y² − 1 = 0 the vectors are
f₁ = (1, 0), f₂ = (1/√2)(−1, 1).
Rotated standard ellipse
For f₀ = (0, 0), f₁ = (a cos θ, a sin θ), f₂ = (−b sin θ, b cos θ) one obtains a parametric representation of the standard ellipse rotated by angle θ:
x = a cos θ cos t − b sin θ sin t, y = a sin θ cos t + b cos θ sin t.
Ellipse in space
The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows f₀, f₁, f₂ to be vectors in space.
Polar forms
Polar form relative to center
In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate θ measured from the major axis, the ellipse's equation is
r(θ) = ab/√((b cos θ)² + (a sin θ)²) = b/√(1 − (e cos θ)²),
where e is the eccentricity, not Euler's number.
Polar form relative to focus
If instead we use polar coordinates with the origin at one focus, with the angular coordinate θ still measured from the major axis, the ellipse's equation is
r(θ) = a(1 − e²)/(1 ± e cos θ),
where the sign in the denominator is negative if the reference direction θ = 0 points towards the center (as illustrated on the right), and positive if that direction points away from the center.
The angle θ is called the true anomaly of the point. The numerator ℓ = a(1 − e²) is the semi-latus rectum.
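A minimal sketch of the focal polar form, checking it against the two vertex distances a(1 − e) and a(1 + e) (the sample a and e are arbitrary):

    # Sketch: focal polar form r = l / (1 + e cos(theta)) with
    # semi-latus rectum l = a(1 - e^2).
    import math

    a, e = 5.0, 0.6
    l = a*(1 - e*e)
    r = lambda theta: l / (1 + e*math.cos(theta))
    print(r(0.0))         # a(1 - e) = 2.0, the near vertex
    print(r(math.pi))     # a(1 + e) = 8.0, the far vertex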
Eccentricity and the directrix property
Each of the two lines parallel to the minor axis, and at a distance of d = a²/c = a/e from it, is called a directrix of the ellipse (see diagram).
For an arbitrary point P of the ellipse, the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity:
|PF₁|/|Pl₁| = |PF₂|/|Pl₂| = e = c/a.
The proof for the pair F₁, l₁ follows from the fact that |PF₁|² = (x − c)² + y², |Pl₁|² = (x − a²/c)² and y² = b²(1 − x²/a²) satisfy the equation |PF₁|² − e²|Pl₁|² = 0.
The second case is proven analogously.
The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola):
For any point F (focus), any line l (directrix) not through F, and any real number e with 0 < e < 1, the ellipse is the locus of points for which the quotient of the distances to the point and to the line is e, that is:
E = {P : |PF|/|Pl| = e}.
The extension to e = 0, which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be the line at infinity in the projective plane.
(The choice e = 1 yields a parabola, and if e > 1, a hyperbola.)
Proof
Let F = (f, 0), e > 0, and assume (0, 0) is a point on the curve. The directrix l has equation x = −f/e. With P = (x, y), the relation |PF|² = e²|Pl|² produces the equations
(x − f)² + y² = e²(x + f/e)² = (ex + f)²
and
x²(1 − e²) − 2xf(1 + e) + y² = 0.
The substitution p = f(1 + e) yields
x²(1 − e²) − 2px + y² = 0.
This is the equation of an ellipse (e < 1), or a parabola (e = 1), or a hyperbola (e > 1). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).
If e < 1, introduce new parameters a, b so that 1 − e² = b²/a² and p = b²/a, and then the equation above becomes
(x − a)²/a² + y²/b² = 1,
which is the equation of an ellipse with center (a, 0), the x-axis as major axis, and
the major/minor semi axes a, b.
Construction of a directrix
Because of c · (a²/c) = a², point L₁ of directrix l₁ (see diagram) and focus F₁ are inverse with respect to the circle inversion at circle x² + y² = a² (in diagram green). Hence L₁ can be constructed as shown in the diagram. Directrix l₁ is the perpendicular to the main axis at point L₁.
General ellipse
If the focus is F = (f₁, f₂) and the directrix is the line ux + vy + w = 0, one obtains the equation
(x − f₁)² + (y − f₂)² = e² (ux + vy + w)²/(u² + v²).
(The right side of the equation uses the Hesse normal form of a line to calculate the distance |Pl|.)
Focus-to-focus reflection property
An ellipse possesses the following property:
The normal at a point P bisects the angle between the lines PF₁ and PF₂.
Proof
Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram).
Let L be the point on the line PF₁ with distance 2a to the focus F₁, where a is the semi-major axis of the ellipse. Let line w be the external angle bisector of the lines PF₁ and PF₂. Take any other point Q on w. By the triangle inequality and the angle bisector theorem, |QF₁| + |QF₂| > 2a, therefore Q must be outside the ellipse. As this is true for every choice of Q, w only intersects the ellipse at the single point P, so w must be the tangent line.
Application
The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery).
Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis.
Conjugate diameters
Definition of conjugate diameters
A circle has the following property:
The midpoints of parallel chords lie on a diameter.
An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.)
Definition
Two diameters d₁, d₂ of an ellipse are conjugate if the midpoints of chords parallel to d₁ lie on d₂.
From the diagram one finds:
Two diameters d₁, d₂ of an ellipse are conjugate whenever the tangents at the points where d₁ meets the ellipse are parallel to d₂.
Conjugate diameters in an ellipse generalize orthogonal diameters in a circle.
In the parametric equation for a general ellipse given above,
x = p(t) = f₀ + f₁ cos t + f₂ sin t,
any pair of points p(t), p(t + π) belong to a diameter, and the pair p(t + π/2), p(t − π/2) belong to its conjugate diameter.
For the common parametric representation (a cos t, b sin t) of the ellipse with equation x²/a² + y²/b² = 1 one gets: The points
(x₁, y₁) = (±a cos t, ±b sin t) (signs: (+,+) or (−,−))
(x₂, y₂) = (∓a sin t, ±b cos t) (signs: (−,+) or (+,−))
are conjugate and
x₁x₂/a² + y₁y₂/b² = 0.
In case of a circle the last equation collapses to x₁x₂ + y₁y₂ = 0.
Theorem of Apollonios on conjugate diameters
For an ellipse with semi-axes a, b the following is true:
Let c₁ and c₂ be halves of two conjugate diameters (see diagram) then
c₁² + c₂² = a² + b².
The triangle with sides c₁, c₂ (see diagram) has the constant area A∆ = ½ab, which can be expressed by A∆ = ½ c₁c₂ sin α, too. Here α is the angle between the half diameters. Hence the area of the ellipse (see section metric properties) can be written as A = πab = π c₁c₂ sin α.
The parallelogram of tangents adjacent to the given conjugate diameters has the constant area A₁₂ = 4ab.
Proof
Let the ellipse be in the canonical form with parametric equation
p(t) = (a cos t, b sin t).
The two points c₁ = p(t), c₂ = p(t + π/2) are on conjugate diameters (see previous section). From trigonometric formulae one obtains c₂ = (−a sin t, b cos t) and
|c₁|² + |c₂|² = a² cos²t + b² sin²t + a² sin²t + b² cos²t = a² + b².
The area of the triangle generated by c₁, c₂ is
A∆ = ½ |det(c₁, c₂)| = ½ ab,
and from the diagram it can be seen that the area of the parallelogram is 8 times that of A∆. Hence A₁₂ = 4ab.
Orthogonal tangents
For the ellipse x²/a² + y²/b² = 1, the intersection points of orthogonal tangents lie on the circle x² + y² = a² + b².
This circle is called orthoptic or director circle of the ellipse (not to be confused with the circular directrix defined above).
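This can be checked numerically with the tangent-line equation y = mx ± √(m²a² + b²) derived above; a perpendicular tangent has slope −1/m (the sample values below are arbitrary):

    # Sketch: two perpendicular tangents meet on the director circle
    # x^2 + y^2 = a^2 + b^2.
    import math

    a, b, m = 4.0, 2.0, 0.75
    n1 = math.sqrt(m*m*a*a + b*b)      # intercept of tangent with slope m
    n2 = math.sqrt(a*a/(m*m) + b*b)    # intercept of tangent with slope -1/m
    x = (n2 - n1) / (m + 1/m)          # intersection of the two tangents
    y = m*x + n1
    print(round(x*x + y*y, 10), a*a + b*b)   # both 20.0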
Drawing ellipses
Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle was known to the 5th century mathematician Proclus, and the tool now known as an elliptical trammel was invented by Leonardo da Vinci.
If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices.
For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help of Rytz's construction the axes and semi-axes can be retrieved.
de La Hire's point construction
The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation (a cos t, b sin t) of an ellipse:
Draw the two circles centered at the center of the ellipse with radii a, b and the axes of the ellipse.
Draw a line through the center, which intersects the two circles at point A and B, respectively.
Draw a line through A that is parallel to the minor axis and a line through B that is parallel to the major axis. These lines meet at an ellipse point (see diagram).
Repeat steps (2) and (3) with different lines through the center.
Pins-and-string method
The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is 2a + 2c. The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called the gardener's ellipse. The Byzantine architect Anthemius of Tralles () described how this method could be used to construct an elliptical reflector, and it was elaborated in a now-lost 9th-century treatise by Al-Ḥasan ibn Mūsā.
A similar method for drawing confocal ellipses with a closed string is due to the Irish bishop Charles Graves.
Paper strip methods
The two following methods rely on the parametric representation (a cos t, b sin t) (see above):
This representation can be modeled technically by two simple methods. In both cases the center, the axes and the semi-axes a, b have to be known.
Method 1
The first method starts with
a strip of paper of length a + b.
The point where the semi axes meet is marked by P. If the strip slides with both ends on the axes of the desired ellipse, then point P traces the ellipse. For the proof one shows that point P has the parametric representation (a cos t, b sin t), where parameter t is the angle of the slope of the paper strip.
A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a fixed sum a + b, which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method.
A variation of the paper strip method 1 uses the observation that the midpoint N of the paper strip is moving on the circle with center M (of the ellipse) and radius (a + b)/2. Hence, the paperstrip can be cut at point N into halves, connected again by a joint at N and the sliding end fixed at the center M (see diagram). After this operation the movement of the unchanged half of the paperstrip is unchanged. This variation requires only one sliding shoe.
Method 2
The second method starts with
a strip of paper of length a.
One marks the point which divides the strip into two substrips of length b and a − b. The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by (a cos t, b sin t), where parameter t is the angle of slope of the paper strip.
This method is the basis for several ellipsographs (see section below).
Similar to the variation of the paper strip method 1, a variation of the paper strip method 2 can be established (see diagram) by cutting the part between the axes into halves.
Most ellipsograph drafting instruments are based on the second paper strip method.
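Both strip methods can be verified numerically: for slope angle $t$, the marked point of method 1 and the free end of method 2 both land on $(a\cos t,\ b\sin t)$. A sketch (the point names are mine, chosen to mirror the descriptions above):

```python
import numpy as np

a, b = 2.0, 1.0
for t in np.linspace(0.01, 2 * np.pi, 50):
    # Method 1: strip of length a+b with ends E1, E2 sliding on the axes,
    # marked point P at distance a from the end on the y-axis.
    E1 = np.array([(a + b) * np.cos(t), 0.0])       # end on the x-axis
    E2 = np.array([0.0, (a + b) * np.sin(t)])       # end on the y-axis
    P = E2 + a * (E1 - E2) / (a + b)
    assert abs((P[0] / a) ** 2 + (P[1] / b) ** 2 - 1) < 1e-9

    # Method 2 (trammel): strip of length a along direction d; end A slides
    # on the y-axis, the point at distance a-b from A stays on the x-axis,
    # and the free end B traces the ellipse.
    d = np.array([np.cos(t), np.sin(t)])
    A = np.array([0.0, -(a - b) * np.sin(t)])
    B = A + a * d
    assert abs((B[0] / a) ** 2 + (B[1] / b) ** 2 - 1) < 1e-9
```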
Approximation by osculating circles
From Metric properties below, one obtains:
The radius of curvature at the vertices is: $\rho_0 = \frac{b^2}{a}$
The radius of curvature at the co-vertices is: $\rho_1 = \frac{a^2}{b}$
The diagram shows an easy way to find the centers of curvature at vertex $V_1 = (a, 0)$ and co-vertex $V_3 = (0, b)$, respectively:
mark the auxiliary point $H = (a, b)$ and draw the line segment $V_1 V_3$,
draw the line through $H$ that is perpendicular to the line $V_1 V_3$,
the intersection points of this line with the axes are the centers of the osculating circles.
(proof: simple calculation.)
The centers for the remaining vertices are found by symmetry.
With the help of a French curve one then draws a curve that has smooth contact with the osculating circles.
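The construction is easy to confirm numerically: the line through $H = (a, b)$ perpendicular to $V_1 V_3$ meets the axes at $\left(a - \frac{b^2}{a}, 0\right)$ and $\left(0, b - \frac{a^2}{b}\right)$, the centers of curvature. A sketch (names follow the steps above):

```python
import numpy as np

a, b = 2.0, 1.0
H = np.array([a, b])
perp = np.array([b, a])        # perpendicular to V1V3, whose direction is (-a, b)
s_x = -H[1] / perp[1]          # parameter where the line H + s*perp meets y = 0
s_y = -H[0] / perp[0]          # parameter where it meets x = 0
center_v1 = H + s_x * perp     # center of the osculating circle at V1 = (a, 0)
center_v3 = H + s_y * perp     # center of the osculating circle at V3 = (0, b)
assert np.allclose(center_v1, [a - b**2 / a, 0.0])
assert np.allclose(center_v3, [0.0, b - a**2 / b])
```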
Steiner generation
The following method to construct single points of an ellipse relies on the Steiner generation of a conic section:
Given two pencils $B(U), B(V)$ of lines at two points $U, V$ (all lines containing $U$ and $V$, respectively) and a projective but not perspective mapping $\pi$ of $B(U)$ onto $B(V)$, then the intersection points of corresponding lines form a non-degenerate projective conic section.
For the generation of points of the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ one uses the pencils at the vertices $V_1, V_2$. Let $P = (0, b)$ be an upper co-vertex of the ellipse and $A = (-a, 2b),\ B = (a, 2b)$.
$P$ is the center of the rectangle $V_1 V_2 B A$. The side $\overline{AB}$ of the rectangle is divided into $n$ equally spaced line segments, and this division is projected parallel with the diagonal $A V_2$ as direction onto the line segment $\overline{V_1 B}$, assigning the division as shown in the diagram. The parallel projection, together with the reverse of the orientation, is part of the projective mapping between the pencils at $V_1$ and $V_2$ needed. The intersection points of any two related lines $V_1 B_i$ and $V_2 A_i$ are points of the uniquely defined ellipse. With the help of these points, the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse.
Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called a parallelogram method because one can use points other than the vertices, starting with a parallelogram instead of a rectangle.
As hypotrochoid
The ellipse is a special case of the hypotrochoid when $R = 2r$, as shown in the adjacent image. The special case of a moving circle with radius $r$ inside a circle with radius $R = 2r$ is called a Tusi couple.
Inscribed angles and three-point form
Circles
A circle with equation $(x - x_\circ)^2 + (y - y_\circ)^2 = r^2$ is uniquely determined by three points $(x_1, y_1),\ (x_2, y_2),\ (x_3, y_3)$ not on a line. A simple way to determine the parameters $x_\circ, y_\circ, r$ uses the inscribed angle theorem for circles:
For four points $P_i = (x_i, y_i),\ i = 1, 2, 3, 4$ (see diagram), the following statement is true:
The four points are on a circle if and only if the angles at $P_3$ and $P_4$ are equal.
Usually one measures inscribed angles by a degree or radian $\theta$, but here the following measurement is more convenient:
In order to measure the angle between two lines with equations $y = m_1 x + d_1$ and $y = m_2 x + d_2$, $m_1 \ne m_2$, one uses the quotient: $\frac{1 + m_1 m_2}{m_2 - m_1} = \cot\theta$
Inscribed angle theorem for circles
For four points $P_i = (x_i, y_i),\ i = 1, 2, 3, 4$, no three of them on a line, we have the following (see diagram):
The four points are on a circle, if and only if the angles at $P_3$ and $P_4$ are equal. In terms of the angle measurement above, this means: $\frac{(x_4 - x_1)(x_4 - x_2) + (y_4 - y_1)(y_4 - y_2)}{(y_4 - y_1)(x_4 - x_2) - (y_4 - y_2)(x_4 - x_1)} = \frac{(x_3 - x_1)(x_3 - x_2) + (y_3 - y_1)(y_3 - y_2)}{(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}$
At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord.
Three-point form of circle equation
As a consequence, one obtains an equation for the circle determined by three non-collinear points $P_i = (x_i, y_i)$:
$\frac{(x - x_1)(x - x_2) + (y - y_1)(y - y_2)}{(y - y_1)(x - x_2) - (y - y_2)(x - x_1)} = \frac{(x_3 - x_1)(x_3 - x_2) + (y_3 - y_1)(y_3 - y_2)}{(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}$
For example, for $P_1 = (2, 0),\ P_2 = (0, 1),\ P_3 = (0, 0)$ the three-point equation is:
$\frac{(x - 2)x + y(y - 1)}{yx - (y - 1)(x - 2)} = 0$, which can be rearranged to $(x - 1)^2 + \left(y - \tfrac{1}{2}\right)^2 = \tfrac{5}{4}$
Using vectors, dot products and determinants this formula can be arranged more clearly, letting $\vec{x} = (x, y)$:
$\frac{\left(\vec{x} - \vec{x}_1\right)\cdot\left(\vec{x} - \vec{x}_2\right)}{\det\left(\vec{x} - \vec{x}_1,\ \vec{x} - \vec{x}_2\right)} = \frac{\left(\vec{x}_3 - \vec{x}_1\right)\cdot\left(\vec{x}_3 - \vec{x}_2\right)}{\det\left(\vec{x}_3 - \vec{x}_1,\ \vec{x}_3 - \vec{x}_2\right)}$
The center of the circle $(x_\circ, y_\circ)$ satisfies the perpendicular-bisector conditions:
$\left(x_\circ - \frac{x_1 + x_2}{2}\right)(x_2 - x_1) + \left(y_\circ - \frac{y_1 + y_2}{2}\right)(y_2 - y_1) = 0, \qquad \left(x_\circ - \frac{x_1 + x_3}{2}\right)(x_3 - x_1) + \left(y_\circ - \frac{y_1 + y_3}{2}\right)(y_3 - y_1) = 0$
The radius is the distance between any of the three points and the center.
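These bisector conditions are linear in the center, so the three-point circle is a small linear solve. A sketch reproducing the example above (the function name is mine):

```python
import numpy as np

def circle_through(p1, p2, p3):
    # Solve the two perpendicular-bisector conditions for the center.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]])
    rhs = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                          x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(A, rhs)
    return (cx, cy), np.hypot(x1 - cx, y1 - cy)

center, r = circle_through((2, 0), (0, 1), (0, 0))
assert np.allclose(center, (1.0, 0.5)) and np.isclose(r**2, 1.25)
```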
Ellipses
This section considers the family of ellipses defined by equations $\frac{(x - x_\circ)^2}{a^2} + \frac{(y - y_\circ)^2}{b^2} = 1$ with a fixed eccentricity $e$. It is convenient to use the parameter:
$q = \frac{a^2}{b^2} = \frac{1}{1 - e^2}$
and to write the ellipse equation as:
$(x - x_\circ)^2 + q\,(y - y_\circ)^2 = a^2,$
where $q$ is fixed and $x_\circ,\ y_\circ,\ a$ vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: if $q > 1$, the major axis is parallel to the x-axis; if $q < 1$, it is parallel to the y-axis.)
Like a circle, such an ellipse is determined by three points not on a line.
For this family of ellipses, one introduces the following q-analog angle measure, which is not a function of the usual angle measure $\theta$:
In order to measure an angle between two lines with equations $y = m_1 x + d_1$ and $y = m_2 x + d_2$, $m_1 \ne m_2$, one uses the quotient: $\frac{1 + q\,m_1 m_2}{m_2 - m_1}$
Inscribed angle theorem for ellipses
Given four points $P_i = (x_i, y_i),\ i = 1, 2, 3, 4$, no three of them on a line (see diagram).
The four points are on an ellipse with equation $(x - x_\circ)^2 + q\,(y - y_\circ)^2 = a^2$ if and only if the angles at $P_3$ and $P_4$ are equal in the sense of the measurement above—that is, if
$\frac{(x_4 - x_1)(x_4 - x_2) + q\,(y_4 - y_1)(y_4 - y_2)}{(y_4 - y_1)(x_4 - x_2) - (y_4 - y_2)(x_4 - x_1)} = \frac{(x_3 - x_1)(x_3 - x_2) + q\,(y_3 - y_1)(y_3 - y_2)}{(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}$
At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin.
Three-point form of ellipse equation
As a consequence, one obtains an equation for the ellipse determined by three non-collinear points $P_i = (x_i, y_i)$:
$\frac{(x - x_1)(x - x_2) + q\,(y - y_1)(y - y_2)}{(y - y_1)(x - x_2) - (y - y_2)(x - x_1)} = \frac{(x_3 - x_1)(x_3 - x_2) + q\,(y_3 - y_1)(y_3 - y_2)}{(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}$
For example, for $P_1 = (2, 0),\ P_2 = (0, 1),\ P_3 = (0, 0)$ and $q = 4$ one obtains the three-point form
$(x - 2)x + 4y(y - 1) = 0$, and after conversion $\frac{(x - 1)^2}{2} + \frac{\left(y - \frac{1}{2}\right)^2}{\frac{1}{2}} = 1$
Analogously to the circle case, the equation can be written more clearly using vectors:
$\frac{\left(\vec{x} - \vec{x}_1\right) * \left(\vec{x} - \vec{x}_2\right)}{\det\left(\vec{x} - \vec{x}_1,\ \vec{x} - \vec{x}_2\right)} = \frac{\left(\vec{x}_3 - \vec{x}_1\right) * \left(\vec{x}_3 - \vec{x}_2\right)}{\det\left(\vec{x}_3 - \vec{x}_1,\ \vec{x}_3 - \vec{x}_2\right)},$
where $*$ is the modified dot product $\vec{u} * \vec{v} = u_x v_x + q\,u_y v_y$.
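Because $q$ is fixed, expanding $(x - x_\circ)^2 + q(y - y_\circ)^2 = a^2$ turns the three-point problem into a linear solve, just as for circles. A sketch reproducing the example above (the function name is mine):

```python
import numpy as np

def ellipse_through(q, pts):
    # Expanding gives 2*x0*x + 2*q*y0*y - k = x^2 + q*y^2,
    # with k = x0^2 + q*y0^2 - a^2, which is linear in (x0, y0, k).
    A = np.array([[2 * x, 2 * q * y, -1.0] for x, y in pts])
    rhs = np.array([x**2 + q * y**2 for x, y in pts])
    x0, y0, k = np.linalg.solve(A, rhs)
    return (x0, y0), x0**2 + q * y0**2 - k   # center and a^2

center, a2 = ellipse_through(4.0, [(2, 0), (0, 1), (0, 0)])
assert np.allclose(center, (1.0, 0.5)) and np.isclose(a2, 2.0)
```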
Pole-polar relation
Any ellipse can be described in a suitable coordinate system by an equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. The equation of the tangent at a point $P_1 = (x_1, y_1)$ of the ellipse is $\frac{x_1 x}{a^2} + \frac{y_1 y}{b^2} = 1.$ If one allows point $P_1 = (x_1, y_1)$ to be an arbitrary point different from the origin, then
point $P_1 = (x_1, y_1) \ne (0, 0)$ is mapped onto the line $\frac{x_1 x}{a^2} + \frac{y_1 y}{b^2} = 1$, not through the center of the ellipse.
This relation between points and lines is a bijection.
The inverse function maps
line $y = mx + d,\ d \ne 0$ onto the point $\left(-\frac{m a^2}{d},\ \frac{b^2}{d}\right)$ and
line $x = c,\ c \ne 0$ onto the point $\left(\frac{a^2}{c},\ 0\right)$
Such a relation between points and lines generated by a conic is called pole-polar relation or polarity. The pole is the point; the polar the line.
By calculation one can confirm the following properties of the pole-polar relation of the ellipse:
For a point (pole) on the ellipse, the polar is the tangent at this point (see diagram).
For a pole outside the ellipse, the intersection points of its polar with the ellipse are the tangency points of the two tangents passing through the pole (see diagram).
For a point within the ellipse, the polar has no point in common with the ellipse (see diagram).
The intersection point of two polars is the pole of the line through their poles.
The foci $(c, 0)$ and $(-c, 0)$, respectively, and the directrices $x = \frac{a^2}{c}$ and $x = -\frac{a^2}{c}$, respectively, belong to pairs of pole and polar. Because they are even polar pairs with respect to the circle $x^2 + y^2 = a^2$, the directrices can be constructed by compass and straightedge (see Inversive geometry).
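The polar of a pole $(x_1, y_1)$ is read off directly from the tangent-type equation above; the sketch below (the function name is mine) confirms the focus-directrix pairing:

```python
import numpy as np

def polar_line(a, b, x1, y1):
    # Coefficients (u, v) of the polar u*x + v*y = 1 of the pole (x1, y1).
    return x1 / a**2, y1 / b**2

a, b = 2.0, 1.0
c = np.sqrt(a**2 - b**2)
u, v = polar_line(a, b, c, 0.0)                 # pole = right focus (c, 0)
assert v == 0 and np.isclose(1 / u, a**2 / c)   # polar is x = a^2/c, the directrix
```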
Pole-polar relations exist for hyperbolas and parabolas as well.
Metric properties
All metric properties given below refer to an ellipse with equation
$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,$
except for the section on the area enclosed by a tilted ellipse, where a generalized form of this equation will be given.
Area
The area $A_{\text{ellipse}}$ enclosed by an ellipse is:
$A_{\text{ellipse}} = \pi a b$
where $a$ and $b$ are the lengths of the semi-major and semi-minor axes, respectively. The area formula is intuitive: start with a circle of radius $b$ (so its area is $\pi b^2$) and stretch it by a factor $a/b$ to make an ellipse. This scales the area by the same factor: $\pi b^2 \cdot \frac{a}{b} = \pi a b.$ However, using the same approach for the circumference would be fallacious – compare the integrals $\int f(x)\,dx$ and $\int \sqrt{1 + f'(x)^2}\,dx$. It is also easy to rigorously prove the area formula using integration as follows. The ellipse equation can be rewritten as $y(x) = b\,\sqrt{1 - x^2/a^2}.$ For $-a \le x \le a$ this curve is the top half of the ellipse. So twice the integral of $y(x)$ over the interval $[-a, a]$ will be the area of the ellipse:
$A_{\text{ellipse}} = 2\int_{-a}^{a} b\,\sqrt{1 - \frac{x^2}{a^2}}\,dx = \frac{b}{a}\cdot 2\int_{-a}^{a}\sqrt{a^2 - x^2}\,dx.$
The second integral is the area of a circle of radius $a$, that is, $\pi a^2.$ So
$A_{\text{ellipse}} = \frac{b}{a}\,\pi a^2 = \pi a b.$
An ellipse defined implicitly by $Ax^2 + Bxy + Cy^2 = 1$ has area $\frac{2\pi}{\sqrt{4AC - B^2\,}}.$
The area can also be expressed in terms of eccentricity and the length of the semi-major axis as $a^2\pi\sqrt{1 - e^2}$ (obtained by solving for flattening, then computing the semi-minor axis $b = a\sqrt{1 - e^2}$).
So far we have dealt with erect ellipses, whose major and minor axes are parallel to the $x$ and $y$ axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely
$A = \pi\, y_{\text{int}}\, x_{\max} = \pi\, x_{\text{int}}\, y_{\max}$
where $y_{\text{int}}$, $x_{\text{int}}$ are intercepts and $x_{\max}$, $y_{\max}$ are maximum values. It follows directly from Apollonios's theorem.
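As a sanity check of the implicit-form area, one can rotate the standard ellipse by an angle $\theta$ and confirm that $\frac{2\pi}{\sqrt{4AC - B^2}}$ still returns $\pi a b$ (a sketch; the rotated coefficients are derived from the rotation, not quoted from the text):

```python
import numpy as np

a, b, theta = 2.0, 1.0, 0.6
ct, st = np.cos(theta), np.sin(theta)
# Coefficients of A*x^2 + B*x*y + C*y^2 = 1 for the rotated ellipse.
A = ct**2 / a**2 + st**2 / b**2
C = st**2 / a**2 + ct**2 / b**2
B = 2 * ct * st * (1 / a**2 - 1 / b**2)
assert np.isclose(2 * np.pi / np.sqrt(4 * A * C - B**2), np.pi * a * b)
```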
Circumference
The circumference $C$ of an ellipse is:
$C = 4a\int_0^{\pi/2}\sqrt{1 - e^2\sin^2\theta}\ d\theta = 4a\,E(e)$
where again $a$ is the length of the semi-major axis, $e = \sqrt{1 - b^2/a^2}$ is the eccentricity, and the function $E$ is the complete elliptic integral of the second kind,
$E(e) = \int_0^{\pi/2}\sqrt{1 - e^2\sin^2\theta}\ d\theta,$
which is in general not an elementary function.
The circumference of the ellipse may be evaluated in terms of $E(e)$ using Gauss's arithmetic-geometric mean; this is a quadratically convergent iterative method.
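A sketch of that evaluation, assuming the classical AGM identities $K = \frac{\pi}{2\,\operatorname{agm}(1,\, b/a)}$ and $E = K\bigl(1 - \sum_{n \ge 0} 2^{n-1} c_n^2\bigr)$ (Abramowitz & Stegun 17.6); the variable names are mine:

```python
import math

def ellipse_circumference_agm(a, b, tol=1e-15):
    e2 = 1.0 - (b / a) ** 2            # m = e^2
    x, y = 1.0, b / a                  # a_0, b_0 of the AGM iteration
    c = math.sqrt(e2)                  # c_0 = e
    csum, pow2 = 0.5 * c * c, 0.5      # accumulates sum of 2^{n-1} c_n^2
    while abs(c) > tol:
        x, y, c = (x + y) / 2, math.sqrt(x * y), (x - y) / 2
        pow2 *= 2
        csum += pow2 * c * c
    K = math.pi / (2 * x)              # complete integral of the first kind
    return 4 * a * K * (1 - csum)      # C = 4 a E(e)

# For a = b the loop exits immediately and the result is 2*pi*a.
print(ellipse_circumference_agm(2.0, 1.0))   # ~9.688448
```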
The exact infinite series is:
$C = 2\pi a\left[1 - \left(\frac{1}{2}\right)^2\frac{e^2}{1} - \left(\frac{1\cdot 3}{2\cdot 4}\right)^2\frac{e^4}{3} - \left(\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^2\frac{e^6}{5} - \cdots\right] = 2\pi a\sum_{n=0}^{\infty}\left(\frac{(2n-1)!!}{(2n)!!}\right)^2\frac{e^{2n}}{1 - 2n},$
where $n!!$ is the double factorial (extended to negative odd integers in the usual way, giving $(-1)!! = 1$ and $(-3)!! = -1$).
This series converges, but by expanding in terms of $h = \frac{(a - b)^2}{(a + b)^2},$ James Ivory, Bessel and Kummer derived a series that converges much more rapidly. It is most concisely written in terms of the binomial coefficient with $n = \frac{1}{2}$:
$C = \pi(a + b)\sum_{n=0}^{\infty}\binom{1/2}{n}^2 h^n = \pi(a + b)\left[1 + \frac{h}{4} + \frac{h^2}{64} + \frac{h^3}{256} + \frac{25\,h^4}{16384} + \cdots\right]$
The coefficients are slightly smaller (by a factor of $\frac{1}{2n - 1}$), but also $h$ is numerically much smaller than $e$ except at $h = e = 0$ and $h = e = 1$. For eccentricities less than 0.5 ($h < 0.005$), the error is at the limits of double-precision floating-point after the $h^4$ term.
Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to $\pi$"; they are
$C \approx \pi\left[3(a + b) - \sqrt{(3a + b)(a + 3b)}\right]$
and
$C \approx \pi(a + b)\left(1 + \frac{3h}{10 + \sqrt{4 - 3h}}\right),$
where $h$ takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order $h^3$ and $h^5,$ respectively. This is because the second formula's infinite series expansion matches Ivory's formula up to the $h^4$ term.
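Both approximations are easy to compare against the exact $C = 4aE(e)$; a short sketch using SciPy's complete elliptic integral (whose argument is the parameter $m = e^2$):

```python
import math
from scipy.special import ellipe

def ramanujan1(a, b):
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def ramanujan2(a, b):
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

a, b = 2.0, 1.0
exact = 4 * a * ellipe(1 - (b / a) ** 2)   # C = 4 a E(m), m = e^2
print(exact - ramanujan1(a, b), exact - ramanujan2(a, b))  # both tiny
```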
Arc length
More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by
$y = b\,\sqrt{1 - \frac{x^2}{a^2}}\,.$
Then the arc length from $x_1$ to $x_2$ is:
$s = \int_{\arccos\frac{x_2}{a}}^{\arccos\frac{x_1}{a}} b\,\sqrt{1 - \left(1 - \frac{a^2}{b^2}\right)\sin^2 z}\ dz$
This is equivalent to
$s = b\left[\,E\!\left(z\ \middle|\ 1 - \frac{a^2}{b^2}\right)\right]_{z = \arccos\frac{x_2}{a}}^{z = \arccos\frac{x_1}{a}}$
where $E(z \mid m)$ is the incomplete elliptic integral of the second kind with parameter $m = k^2.$
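Numerically, the closed form and direct quadrature agree. A sketch using the equivalent parametrization $x = a\sin t,\ y = b\cos t$, for which the arc length measured from the co-vertex is simply $a\,E(t \mid e^2)$ (SciPy's ellipeinc takes the parameter $m = e^2$):

```python
import math
from scipy.integrate import quad
from scipy.special import ellipeinc

a, b, t = 3.0, 2.0, 1.0
m = 1 - (b / a) ** 2                # e^2
s_closed = a * ellipeinc(t, m)      # arc length from (0, b) to parameter t
s_numeric, _ = quad(lambda u: math.hypot(a * math.cos(u), b * math.sin(u)), 0, t)
assert abs(s_closed - s_numeric) < 1e-10
```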
Some lower and upper bounds on the circumference of the canonical ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ with $a \ge b$ are
$2\pi b \le C \le 2\pi a,$
$\pi(a + b) \le C \le 4(a + b),$
$4\sqrt{a^2 + b^2} \le C \le \pi\sqrt{2\left(a^2 + b^2\right)}\,.$
Here the upper bound $2\pi a$ is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound $4\sqrt{a^2 + b^2}$ is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes.
Given an ellipse whose axes are drawn, we can construct the endpoints of a particular elliptic arc whose length is one eighth of the ellipse's circumference using only straightedge and compass in a finite number of steps; for some specific shapes of ellipses, such as when the axes have a length ratio of , it is additionally possible to construct the endpoints of a particular arc whose length is one twelfth of the circumference. (The vertices and co-vertices are already endpoints of arcs whose length is one half or one quarter of the ellipse's circumference.) However, the general theory of straightedge-and-compass elliptic division appears to be unknown, unlike in the case of the circle and the lemniscate. The division in special cases has been investigated by Legendre in his classical treatise.
Curvature
The curvature is given by:
$\kappa = \frac{1}{a^2 b^2}\left(\frac{x^2}{a^4} + \frac{y^2}{b^4}\right)^{-\frac{3}{2}}$
and the radius of curvature, $\rho = 1/\kappa$, at point $(x, y)$:
$\rho = a^2 b^2\left(\frac{x^2}{a^4} + \frac{y^2}{b^4}\right)^{\frac{3}{2}}\,.$
The radius of curvature of an ellipse, as a function of angle $\theta$ from the center, is:
$R(\theta) = \frac{a^2}{b}\left(\frac{1 - e^2\left(2 - e^2\right)\cos^2\theta}{1 - e^2\cos^2\theta}\right)^{\frac{3}{2}},$
where $e$ is the eccentricity.
Radius of curvature at the two vertices $(\pm a, 0)$ and the centers of curvature:
$\rho_0 = \frac{b^2}{a}\,,\qquad \left(\pm\frac{c^2}{a},\ 0\right),\qquad \text{where } c^2 = a^2 - b^2.$
Radius of curvature at the two co-vertices $(0, \pm b)$ and the centers of curvature:
$\rho_1 = \frac{a^2}{b}\,,\qquad \left(0,\ \mp\frac{c^2}{b}\right).$
The locus of all the centers of curvature is called an evolute. In the case of an ellipse, the evolute is an astroid.
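A sketch checking the point formula for $\rho$ against the vertex and co-vertex values above (the function name is mine):

```python
def radius_of_curvature(a, b, x, y):
    # rho = a^2 b^2 (x^2/a^4 + y^2/b^4)^(3/2) for a point (x, y) on the ellipse
    return a**2 * b**2 * (x**2 / a**4 + y**2 / b**4) ** 1.5

a, b = 2.0, 1.0
assert abs(radius_of_curvature(a, b, a, 0.0) - b**2 / a) < 1e-12  # vertex
assert abs(radius_of_curvature(a, b, 0.0, b) - a**2 / b) < 1e-12  # co-vertex
```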
In triangle geometry
Ellipses appear in triangle geometry as
Steiner ellipse: ellipse through the vertices of the triangle with center at the centroid,
inellipses: ellipses which touch the sides of a triangle. Special cases are the Steiner inellipse and the Mandart inellipse.
As plane sections of quadrics
Ellipses appear as plane sections of the following quadrics:
Ellipsoid
Elliptic cone
Elliptic cylinder
Hyperboloid of one sheet
Hyperboloid of two sheets
Applications
Physics
Elliptical reflectors and acoustics
If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.
Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners.
Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana–Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra.
Planetary orbits
In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.
More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus.
Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.)
For elliptical orbits, useful relations involving the eccentricity $e$ are:
$e = \frac{r_a - r_p}{r_a + r_p} = \frac{r_a - r_p}{2a}$
$r_a = (1 + e)\,a$
$r_p = (1 - e)\,a$
where
$r_a$ is the radius at apoapsis, i.e., the farthest distance of the orbit to the barycenter of the system, which is a focus of the ellipse
$r_p$ is the radius at periapsis, the closest distance
$a$ is the length of the semi-major axis
Also, in terms of $r_a$ and $r_p$, the semi-major axis $a$ is their arithmetic mean, the semi-minor axis $b$ is their geometric mean, and the semi-latus rectum $\ell$ is their harmonic mean. In other words,
$a = \frac{r_a + r_p}{2}\,,\qquad b = \sqrt{r_a r_p}\,,\qquad \ell = \frac{2 r_a r_p}{r_a + r_p}\,.$
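The three means translate directly into code; a small sketch (the function name is mine) recovering $e$, $a$, $b$ and $\ell$ from the apsis radii:

```python
import math

def orbit_shape(r_a, r_p):
    e = (r_a - r_p) / (r_a + r_p)       # eccentricity
    a = (r_a + r_p) / 2                 # semi-major axis: arithmetic mean
    b = math.sqrt(r_a * r_p)            # semi-minor axis: geometric mean
    ell = 2 * r_a * r_p / (r_a + r_p)   # semi-latus rectum: harmonic mean
    return e, a, b, ell

print(orbit_shape(2.0, 1.0))  # e = 1/3, a = 1.5, b = sqrt(2), ell = 4/3
```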
Harmonic oscillators
The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion.
Phase visualization
In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase.
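A hypothetical numerical illustration (all names and values are mine): for equal-frequency inputs $x = \sin t$ and $y = \sin(t + \delta)$, the trace's $y$-value at the $x = 0$ crossing, divided by the $y$-amplitude, is $\sin\delta$, so the phase difference can be read off the ellipse:

```python
import numpy as np

delta = 0.7                               # true phase difference (radians)
t = np.linspace(0, 2 * np.pi, 2000)
x, y = np.sin(t), np.sin(t + delta)       # horizontal and vertical inputs

i0 = np.argmin(np.abs(x[:500]))           # an x = 0 crossing (here t = 0)
estimate = np.arcsin(y[i0] / y.max())     # recovers delta for |delta| <= pi/2
assert abs(estimate - delta) < 1e-3
```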
Elliptical gears
Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage.
Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.
An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.
Optics
In a material that is optically anisotropic (birefringent), the refractive index depends on the direction of the light. The dependency can be described by an index ellipsoid. (If the material is optically isotropic, this ellipsoid is a sphere.)
In lamp-pumped solid-state lasers, elliptical cylinder-shaped reflectors have been used to direct light from the pump lamp (coaxial with one ellipse focal axis) to the active medium rod (coaxial with the second focal axis).
In laser-plasma produced EUV light sources used in microchip lithography, EUV light is generated by plasma positioned in the primary focus of an ellipsoid mirror and is collected in the secondary focus at the input of the lithography machine.
Statistics and finance
In statistics, a bivariate random vector is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in the financial field because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.
Computer graphics
Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.
In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector.
It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation.
Drawing with Bézier paths
Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations.
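A common recipe (a standard choice, not spelled out in the text) approximates each quarter of the ellipse by one cubic Bézier arc with the constant $\kappa = \frac{4(\sqrt{2} - 1)}{3} \approx 0.5523$, obtained for the quarter circle and carried over by the affine map. A sketch:

```python
import numpy as np

KAPPA = 4 * (np.sqrt(2) - 1) / 3   # "magic" control-point constant

def quarter_ellipse(a, b):
    # Control points of the cubic Bezier arc from (a, 0) to (0, b).
    return np.array([[a, 0.0], [a, KAPPA * b], [KAPPA * a, b], [0.0, b]])

def bezier(P, t):
    s = 1 - t                       # Bernstein-form evaluation
    return s**3 * P[0] + 3 * s**2 * t * P[1] + 3 * s * t**2 * P[2] + t**3 * P[3]

a, b = 2.0, 1.0
P = quarter_ellipse(a, b)
pts = np.array([bezier(P, t) for t in np.linspace(0, 1, 1000)])
err = np.abs((pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 - 1).max()
print(err)   # maximal deviation from the exact ellipse, below 0.1% here
```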
Optimization theory
It is sometimes useful to find the minimum bounding ellipse of a set of points. The ellipsoid method is quite useful for solving this problem.
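One standard approach (my choice of algorithm; the text does not name one) is Khachiyan's iteration for the minimum-volume enclosing ellipsoid $\{x : (x - c)^\top A\,(x - c) \le 1\}$. A compact sketch:

```python
import numpy as np

def mvee(points, tol=1e-7):
    # Khachiyan's algorithm on the lifted (homogeneous) points.
    N, d = points.shape
    Q = np.column_stack([points, np.ones(N)]).T       # (d+1) x N
    u = np.full(N, 1.0 / N)                           # weights on the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
        j = int(np.argmax(M))
        step = (M[j] - d - 1) / ((d + 1) * (M[j] - 1))
        new_u = (1 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = points.T @ u                                   # ellipse center
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
    return A, c

pts = np.random.default_rng(0).normal(size=(50, 2))
A, c = mvee(pts)   # all points satisfy (p-c)^T A (p-c) <= 1 + O(tol)
```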
| Mathematics | Geometry | null |
9279 | https://en.wikipedia.org/wiki/Elephant | Elephant | Elephants are the largest living land animals. Three living species are currently recognised: the African bush elephant (Loxodonta africana), the African forest elephant (L. cyclotis), and the Asian elephant (Elephas maximus). They are the only surviving members of the family Elephantidae and the order Proboscidea; extinct relatives include mammoths and mastodons. Distinctive features of elephants include a long proboscis called a trunk, tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk is prehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs.
Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, including savannahs, forests, deserts, and marshes. They are herbivorous, and they stay near water when it is accessible. They are considered to be keystone species, due to their impact on their environments. Elephants have a fission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as the matriarch.
Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increased testosterone and aggression known as musth, which helps them gain dominance over other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness, and possibly show concern for dying and dead individuals of their kind.
African bush elephants and Asian elephants are listed as endangered and African forest elephants as critically endangered on the IUCN Red Lists. One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment in circuses. Elephants have an iconic status in human culture and have been widely featured in art, folklore, religion, literature, and popular culture.
Etymology
The word elephant is derived from the Latin word elephas (genitive elephantis), which is the Latinised form of the ancient Greek ἐλέφας (genitive ἐλέφαντος), probably from a non-Indo-European language, likely Phoenician. It is attested in Mycenaean Greek as e-re-pa (genitive e-re-pa-to) in Linear B syllabic script. As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal. The word elephant appears in Middle English as olyfaunt and was borrowed from Old French oliphant (12th century).
Taxonomy and evolution
Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria. Elephants and sirenians are further grouped in the clade Tethytheria.
Three species of living elephants are recognised; the African bush elephant (Loxodonta africana), forest elephant (Loxodonta cyclotis), and Asian elephant (Elephas maximus). African elephants were traditionally considered a single species, Loxodonta africana, but molecular studies have affirmed their status as separate species. Mammoths (Mammuthus) are nested within living elephants as they are more closely related to Asian elephants than to African elephants. Another extinct genus of elephant, Palaeoloxodon, is also recognised, which appears to have close affinities with African elephants and to have hybridised with African forest elephants.
Evolution
Over 180 extinct members of order Proboscidea have been described. The earliest proboscideans, the African Eritherium and Phosphatherium, are known from the late Paleocene. The Eocene included Numidotherium, Moeritherium, and Barytherium from Africa. These animals were relatively small, and some, like Moeritherium and Barytherium, were probably amphibious. Later on, genera such as Phiomia and Palaeomastodon arose; the latter likely inhabited more forested areas. Proboscidean diversification changed little during the Oligocene. One notable species of this epoch was Eritreum melakeghebrekristosi of the Horn of Africa, which may have been an ancestor to several later species.
A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which includes the "shovel tuskers" like Platybelodon), choerolophodontids and stegodontids. Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres.
Elephantids are distinguished from earlier proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass. The Late Miocene saw major climatic changes, which resulted in the decline and extinction of many proboscidean groups. The earliest members of the modern genera of Elephantidae appeared during the latest Miocene–early Pliocene around 5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago.
Over the course of the Early Pleistocene, all non-elephantid proboscidean genera outside of the Americas became extinct, with the exception of Stegodon; gomphotheres dispersed into South America as part of the Great American interchange, and mammoths migrated into North America around 1.5 million years ago. At the end of the Early Pleistocene, around 800,000 years ago, the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. Proboscideans were represented by around 23 species at the beginning of the Late Pleistocene. Proboscideans then underwent a dramatic decline as part of the Late Pleistocene extinctions of most large mammals globally, in which all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the American gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon became extinct. Mammoths survived only in relict populations on islands around the Bering Strait into the Holocene, their latest survival being on Wrangel Island, where they persisted until around 4,000 years ago.
Over the course of their evolution, proboscideans grew in size. With that came longer limbs and wider feet with a more digitigrade stance, along with a larger head and shorter neck. The trunk evolved and grew longer to provide reach. The number of premolars, incisors, and canines decreased, and the cheek teeth (molars and premolars) became longer and more specialised. The incisors developed into tusks of different shapes and sizes. Several species of proboscideans became isolated on islands and experienced insular dwarfism, some dramatically reducing in body size, such as the dwarf elephant species Palaeoloxodon falconeri.
Living species
Anatomy
Elephants are the largest living terrestrial animals. Some species of the extinct elephant genus Palaeoloxodon considerably exceeded modern elephants in size, making them among the largest land mammals ever. The skeleton is made up of 326–351 bones. The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs. The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. By contrast, the lower jaw is dense. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The skull is built to withstand great stress, particularly when fighting or using the tusks. The brain is surrounded by arches in the skull, which serve as protection. Because of the size of the head, the neck is relatively short to provide better support.
Elephants are homeotherms and maintain their average body temperature at ~ 36 °C (97 °F), with a minimum of 35.2 °C (95.4 °F) during the cool season, and a maximum of 38.0 °C (100.4 °F) during the hot dry season.
Ears and eyes
Elephant ear flaps, or pinnae, are thick in the middle with a thinner tip and supported by a thicker base. They contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, releasing excess heat into the environment. This effect is increased by flapping the ears back and forth. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps. The ossicles are adapted for hearing low frequencies, being most sensitive at 1 kHz.
Lacking a lacrimal apparatus (tear duct), the eye relies on the harderian gland in the orbit to keep it moist. A durable nictitating membrane shields the globe. The animal's field of vision is compromised by the location and limited mobility of the eyes. Elephants are dichromats and they can see well in dim light but not in bright light.
Trunk
The elongated and prehensile trunk, or proboscis, consists of both the nose and upper lip, which fuse in early fetal development. This versatile appendage contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsal, ventral, and lateral muscles, while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum consists of small elastic muscles between the nostrils, which are divided by cartilage at the base. A unique proboscis nerve – a combination of the maxillary and facial nerves – lines each side of the appendage.
As a muscular hydrostat, the trunk moves through finely controlled muscle contractions, working both with and against each other. Using three basic movements: bending, twisting, and longitudinal stretching or retracting, the trunk has near unlimited flexibility. Objects grasped by the end of the trunk can be moved to the mouth by curving the appendage inward. The trunk can also bend at different points by creating stiffened "pseudo-joints". The tip can be moved in a way similar to the human hand. The skin is more elastic on the dorsal side of the elephant trunk than underneath; allowing the animal to stretch and coil while maintaining a strong grasp. The flexibility of the trunk is aided by the numerous wrinkles in the skin. The African elephants have two finger-like extensions at the tip of the trunk that allow them to pluck small food. The Asian elephant has only one and relies more on wrapping around a food item. Asian elephant trunks have better motor coordination.
The trunk's extreme flexibility allows it to forage and wrestle other elephants with it. It is powerful enough to lift up to , but it also has the precision to crack a peanut shell without breaking the seed. With its trunk, an elephant can reach items up to high and dig for water in the mud or sand below. It also uses it to clean itself. Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right. Elephant trunks are capable of powerful siphoning. They can expand their nostrils by 30%, leading to a 64% greater nasal volume, and can breathe in almost 30 times faster than a human sneeze, at over . They suck up water, which is squirted into the mouth or over the body. The trunk of an adult Asian elephant is capable of retaining of water. They will also sprinkle dust or grass on themselves. When underwater, the elephant uses its trunk as a snorkel.
The trunk also acts as a sense organ. Its sense of smell may be four times greater than a bloodhound's nose. The infraorbital nerve, which makes the trunk sensitive to touch, is thicker than both the optic and auditory nerves. Whiskers grow all along the trunk, and are particularly packed at the tip, where they contribute to its tactile sensitivity. Unlike those of many mammals, such as cats and rats, elephant whiskers do not move independently ("whisk") to sense the environment; the trunk itself must move to bring the whiskers into contact with nearby objects. Whiskers grow in rows along each side on the ventral surface of the trunk, which is thought to be essential in helping elephants balance objects there, whereas they are more evenly arranged on the dorsal surface. The number and patterns of whiskers are distinctly different between species.
Damaging the trunk would be detrimental to an elephant's survival, although in rare cases, individuals have survived with shortened ones. One trunkless elephant has been observed to graze using its lips with its hind legs in the air and balancing on its front knees. Floppy trunk syndrome is a condition of trunk paralysis recorded in African bush elephants and involves the degeneration of the peripheral nerves and muscles. The disorder has been linked to lead poisoning.
Teeth
Elephants usually have 26 teeth: the incisors, known as the tusks; 12 deciduous premolars; and 12 molars. Unlike in most mammals, teeth are not replaced by new ones emerging vertically from the jaws. Instead, new teeth start at the back of the mouth and push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. This is followed by four more tooth replacements at the ages of four to six, 9–15, 18–28, and finally in their early 40s. The final (usually sixth) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are more diamond-shaped in African elephants.
Tusks
The tusks of an elephant are modified second incisors in the upper jaw. They replace deciduous milk teeth at 6–12 months of age and keep growing at about a year. As the tusk develops, it is topped with smooth, cone-shaped enamel that eventually wanes. The dentine is known as ivory and has a cross-section of intersecting lines, known as "engine turning", which create diamond-shaped patterns. Being living tissue, tusks are fairly soft and about as dense as the mineral calcite. The tusk protrudes from a socket in the skull, and most of it is external. At least one-third of the tusk contains the pulp, and some have nerves that stretch even further. Thus, it would be difficult to remove it without harming the animal. When removed, ivory will dry up and crack if not kept cool and wet. Tusks function in digging, debarking, marking, moving objects, and fighting.
Elephants are usually right- or left-tusked, similar to humans, who are typically right- or left-handed. The dominant, or "master" tusk, is typically more worn down, as it is shorter and blunter. For African elephants, tusks are present in both males and females and are around the same length in both sexes, reaching up to , but those of males tend to be more massive. In the Asian species, only the males have large tusks. Female Asians have very small tusks, or none at all. Tuskless males exist and are particularly common among Sri Lankan elephants. Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was long and weighed . Hunting for elephant ivory in Africa and Asia has resulted in an effective selection pressure for shorter tusks and tusklessness.
Skin
An elephant's skin is generally very tough, at thick on the back and parts of the head. The skin around the mouth, anus, and inside of the ear is considerably thinner. Elephants are typically grey, but African elephants look brown or reddish after rolling in coloured mud. Asian elephants have some patches of depigmentation, particularly on the head. Calves have brownish or reddish hair, with the head and back being particularly hairy. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the tip of the tail and parts of the head and genitals. Normally, the skin of an Asian elephant is covered with more hair than its African counterpart. Their hair is thought to help them lose heat in their hot environments.
Although tough, an elephant's skin is very sensitive and requires mud baths to maintain moisture and protection from burning and insect bites. After bathing, the elephant will usually use its trunk to blow dust onto its body, which dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs to expose their soles to the air. Elephants only have sweat glands between the toes, but the skin allows water to disperse and evaporate, cooling the animal. In addition, cracks in the skin may reduce dehydration and allow for increased thermal regulation in the long term.
Legs, locomotion, and posture
To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bones in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis (blood cell creation). Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front. The position of the limbs and leg bones allows an elephant to stand still for extended periods of time without tiring. Elephants are incapable of turning their manus as the ulna and radius of the front legs are secured in pronation. Elephants may also lack the pronator quadratus and pronator teres muscles or have very small ones. The circular feet of an elephant have soft tissues, or "cushion pads" beneath the manus or pes, which allow them to bear the animal's great mass. They appear to have a sesamoid, an extra "toe" similar in placement to a giant panda's extra "thumb", that also helps in weight distribution. As many as five toenails can be found on both the front and hind feet.
Elephants can move both forward and backward, but are incapable of trotting, jumping, or galloping. They can move on land only by walking or ambling: a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders moving up and down while the foot is planted on the ground. The fast gait does not meet all the criteria of running, since there is no point where all the feet are off the ground, although the elephant uses its legs much like other running animals, and can move faster by quickening its stride. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of . At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals. The cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving. Elephants are capable swimmers: they can swim for up to six hours while completely waterborne, moving at and traversing up to continuously.
Internal systems
The brain of an elephant weighs compared to for a human brain. It is the largest of all terrestrial mammals. While the elephant brain is larger overall, it is proportionally smaller than the human brain. At birth, an elephant's brain already weighs 30–40% of its adult weight. The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally. Their temporal lobes are proportionally larger than those of other animals, including humans. The throat of an elephant appears to contain a pouch where it can store water for later use. The larynx of the elephant is the largest known among mammals. The vocal folds are anchored close to the epiglottis base. When comparing an elephant's vocal folds to those of a human, an elephant's are proportionally longer, thicker, with a greater cross-sectional area. In addition, they are located further up the vocal tract with an acute slope.
The heart of an elephant weighs . Its apex has two pointed ends, an unusual trait among mammals. In addition, the ventricles of the heart split towards the top, a trait also found in sirenians. When upright, the elephant's heart beats around 28 beats per minute and actually speeds up to 35 beats when it lies down. The blood vessels are thick and wide and can hold up under high blood pressure. The lungs are attached to the diaphragm, and breathing relies less on the expanding of the ribcage. Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air. Elephants breathe mostly with the trunk but also with the mouth. They have a hindgut fermentation system, and their large and small intestines together reach in length. Less than half of an elephant's food intake gets digested, despite the process lasting a day. An elephant's bladder can store up to 18 litres of urine and its kidneys can produce more than 50 litres of urine per day.
Sex characteristics
A male elephant's testes, like other Afrotheria, are internally located near the kidneys. The penis can be as long as with a wide base. It curves to an 'S' when fully erect and has an orifice shaped like a Y. The female's clitoris may be . The vulva is found lower than in other herbivores, between the hind legs instead of under the tail. Determining pregnancy status can be difficult due to the animal's large belly. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk. Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth. Females have also been observed with these secretions.
Behaviour and ecology
Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass, and roots. African elephants mostly browse, while Asian elephants mainly graze. They can eat as much as of food and drink of water in a day. Elephants tend to stay near water sources. They have morning, afternoon, and nighttime feeding sessions. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down. Elephants average 3–4 hours of sleep per day. Both males and family groups typically move no more than a day, but distances as far as have been recorded in the Etosha region of Namibia. Elephants go on seasonal migrations in response to changes in environmental conditions. In northern Botswana, they travel to the Chobe River after the local waterholes dry up in late August.
Because of their large size, elephants have a huge impact on their environments and are considered keystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands; smaller herbivores can access trees mowed down by elephants. When they dig for water during droughts, they create waterholes that can be used by other animals. When they use waterholes, they end up making them bigger. At Mount Elgon, elephants dig through caves and pave the way for ungulates, hyraxes, bats, birds, and insects. Elephants are important seed dispersers; African forest elephants consume and deposit many seeds over great distances, with either no effect or a positive effect on germination. In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the smaller Malayan tapir. Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys. Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, elephant numbers have threatened several species of small birds that depend on woodlands. Their weight causes the soil to compress, leading to runoff and erosion.
Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. The size of adult elephants makes them nearly invulnerable to predators. Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa and tigers in Asia. The lions of Savuti, Botswana, have adapted to hunting elephants, targeting calves, juveniles or even sub-adults. There are rare reports of adult Asian elephants falling prey to tigers. Elephants tend to have high numbers of parasites, particularly nematodes, compared to many other mammals. This may be due to elephants being less vulnerable to predation; in other mammal species, individuals weakened by significant parasite loads are easily killed off by predators, removing them from the population.
Social organisation
Elephants are generally gregarious animals. African bush elephants in particular have a complex, stratified social structure. Female elephants spend their entire lives in tight-knit matrilineal family groups. They are led by the matriarch, who is often the eldest female. She remains leader of the group until death or if she no longer has the energy for the role; a study on zoo elephants found that the death of the matriarch led to greater stress in the surviving elephants. When her tenure is over, the matriarch's eldest daughter takes her place instead of her sister (if present). One study found that younger matriarchs take potential threats less seriously. Large family groups may split if they cannot be supported by local resources.
At Amboseli National Park, Kenya, female groups may consist of around ten members, including four adults and their dependent offspring. Here, a cow's life involves interaction with those outside her group. Two separate families may associate and bond with each other, forming what are known as bond groups. During the dry season, elephant families may aggregate into clans. These may number around nine groups, in which clans do not form strong bonds but defend their dry-season ranges against other clans. The Amboseli elephant population is further divided into the "central" and "peripheral" subpopulations.
Female Asian elephants tend to have more fluid social associations. In Sri Lanka, there appear to be stable family units or "herds" and larger, looser "groups". They have been observed to have "nursing units" and "juvenile-care units". In southern India, elephant populations may contain family groups, bond groups, and possibly clans. Family groups tend to be small, with only one or two adult females and their offspring. A group containing more than two cows and their offspring is known as a "joint family". Malay elephant populations have even smaller family units and do not reach levels above a bond group. Groups of African forest elephants typically consist of one cow with one to three offspring. These groups appear to interact with each other, especially at forest clearings.
Adult males live separate lives. As he matures, a bull associates more with outside males or even other families. At Amboseli, young males may be away from their families 80% of the time by 14–15 years of age. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. A dominance hierarchy exists among males, whether they are social or solitary. Dominance depends on age, size, and sexual condition. Male elephants can be quite sociable when not competing for mates and form vast and fluid social networks. Older bulls act as the leaders of these groups. The presence of older males appears to subdue the aggression and "deviant" behaviour of younger ones. The largest all-male groups can reach close to 150 individuals. Adult males and females come together to breed. Bulls will accompany family groups if a cow is in oestrous.
Sexual behaviour
Musth
Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at 15 years old, but it is not very intense until they are older than 25. At Amboseli, no bulls under 24 were found to be in musth, while half of those aged 25–35 and all those over 35 were. In some areas, there may be seasonal influences on the timing of musths. The main characteristic of a bull's musth is a fluid discharged from the temporal gland that runs down the side of his face. Behaviours associated with musth include walking with a high and swinging head, nonsynchronous ear flapping, picking at the ground with the tusks, marking, rumbling, and urinating in the sheath. The length of a musth period varies between males of different ages and conditions, lasting from days to months.
Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring; they rarely escalate into full fights.
There is at least one documented case of infanticide among Asian elephants at Dong Yai Wildlife Sanctuary, with the researchers describing it as most likely normal behaviour among aggressive musth elephants.
Mating
Elephants are polygynous breeders, and most copulations occur during rainfall. An oestrous cow uses pheromones in her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with the flehmen response, which requires him to collect a chemical sample with his trunk and taste it with the vomeronasal organ at the roof of the mouth. The oestrous cycle of a cow lasts 14–16 weeks, with the follicular phase lasting 4–6 weeks and the luteal phase lasting 8–10 weeks. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. The first (or anovulatory) surge appears to change the female's scent, signaling to males that she is in heat, but ovulation does not occur until the second (or ovulatory) surge. Cows over 45–50 years of age are less fertile.
Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females seek them out, particularly older ones. Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths. For young females, the approach of an older bull can be intimidating, so her relatives stay nearby for comfort. During copulation, the male rests his trunk on the female. The penis is mobile enough to move without the pelvis. Before mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or an ejaculatory pause.
Homosexual behaviour has been observed in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting, and "championships" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they engage in mutual masturbation with their trunks.
Birth and development
Gestation in elephants typically lasts between one and a half and two years, and the female will not give birth again for at least four years. The relatively long pregnancy is supported by several corpora lutea and gives the foetus more time to develop, particularly the brain and trunk. Births tend to take place during the wet season. Typically, only a single young is born, but twins sometimes occur. Calves are born roughly tall and with a weight of around . They are precocial and quickly stand and walk to follow their mother and family herd. A newborn calf will attract the attention of all the herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother limits access to her young. Alloparenting – where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically aged two to twelve years.
For the first few days, the newborn is unsteady on its feet and needs its mother's help. It relies on touch, smell, and hearing, as its eyesight is less developed. With little coordination in its trunk, it can only flop it around which may cause it to trip. When it reaches its second week, the calf can walk with more balance and has more control over its trunk. After its first month, the trunk can grab and hold objects but still lacks sucking abilities, and the calf must bend down to drink. It continues to stay near its mother as it is still reliant on her. For its first three months, a calf relies entirely on its mother's milk, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, there is progress in lip and leg movements. By nine months, mouth, trunk, and foot coordination are mastered. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year. After a year, a calf is fully capable of grooming, drinking, and feeding itself. It still needs its mother's milk and protection until it is at least two years old. Suckling after two years may improve growth, health, and fertility.
Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former are sexually mature by the age of nine years while the latter become mature around 14–15 years. Adulthood starts at about 18 years of age in both sexes. Elephants have long lifespans, reaching 60–70 years of age. Lin Wang, a captive male Asian elephant, lived for 86 years.
Communication
Elephants communicate in various ways. Individuals greet one another by touching each other on the mouth, temporal glands, and genitals. This allows them to pick up chemical cues. Older elephants use trunk-slaps, kicks, and shoves to control younger ones. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. A calf will press against its mother's front legs to signal it wants to rest and will touch her breast or leg when it wants to suckle.
Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as tossing around dust and vegetation. They are usually bluffing when performing these actions. Excited elephants also raise their heads and spread their ears but additionally may raise their trunks. Submissive elephants will lower their heads and trunks, as well as flatten their ears against their necks, while those that are ready to fight will bend their ears in a V shape.
Elephants produce several vocalisations—some of which pass through the trunk—for both short and long range communication. This includes trumpeting, bellowing, roaring, growling, barking, snorting, and rumbling. Elephants can produce infrasonic rumbles. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB and last 10–15 seconds. For African elephants, calls range from 15 to 35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, possibly over 10 km (6 mi). Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. An individual foot stomping or mock charging can create seismic signals that can be heard at travel distances of up to 32 km (20 mi). Seismic waveforms produced by rumbles travel 16 km (10 mi).
Intelligence and cognition
Elephants are among the most intelligent animals. They exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins. One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later. Elephants are among the species known to use tools. An Asian elephant has been observed fine-tuning branches for use as flyswatters. Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps which give them long-lasting memories of their environment on a wide scale. Individuals may be able to remember where their family members are located.
Scientists debate the extent to which elephants feel emotion. They are attracted to the bones of their own kind, regardless of whether they are related. As with chimpanzees and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern"; however, the Oxford Companion to Animal Behaviour (1987) said that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion".
Conservation
Status
African bush elephants were listed as Endangered by the International Union for Conservation of Nature (IUCN) in 2021, and African forest elephants were listed as Critically Endangered in the same year. In 1979, Africa had an estimated population of at least 1.3 million elephants, possibly as high as 3.0 million. A decade later, the population was estimated to be 609,000; with 277,000 in Central Africa, 110,000 in Eastern Africa, 204,000 in Southern Africa, and 19,000 in Western Africa. The population of rainforest elephants was lower than anticipated, at around 214,000 individuals. Between 1977 and 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers hastened, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa varied, with unconfirmed losses in Zambia, Mozambique and Angola while populations grew in Botswana and Zimbabwe and were stable in South Africa. As of 2016, the IUCN estimated the total population in Africa at around 415,000 individuals for both species combined.
African elephants receive at least some legal protection in every country where they are found. Successful conservation efforts in certain areas have led to high population densities while failures have led to declines of 70% or more over the course of ten years. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings stopped in the late 1980s and early 1990s. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies.
In 2020, the IUCN listed the Asian elephant as endangered due to the population declining by half over "the last three generations". Asian elephants once ranged from Western Asia to East Asia and south to Sumatra and Java. The species is now extinct in these areas, and the current range of Asian elephants is highly fragmented. The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. Around 60% of the population is in India. Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in the Western Ghats may have stabilised.
Threats
The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence. Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use was comparable to that of gold. The ivory trade contributed to the fall of the African elephant population in the late 20th century. This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan. Around the same time, Kenya destroyed all its ivory stocks. Ivory was banned internationally by CITES in 1990. Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not as badly affected. Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local populations were healthy, but only if their supplies were from culled individuals or those that died of natural causes.
The ban allowed the elephant to recover in parts of Africa. In February 2012, 650 elephants in Bouba Njida National Park, Cameroon, were slaughtered by Chadian raiders. This has been called "one of the worst concentrated killings" since the ivory ban. Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India. China was the biggest market for poached ivory but announced they would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015, China and the United States said "they would enact a nearly complete ban on the import and export of ivory" in response to the trade's role in driving elephants towards extinction.
Other threats to elephants include habitat destruction and fragmentation. The Asian elephant lives in areas with some of the highest human populations and may be confined to small islands of forest among human-dominated landscapes. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the protection of wildlife corridors which give populations greater interconnectivity and space. Chili pepper products as well as guarding with defense tools have been found to be effective in preventing crop-raiding by elephants. Less effective tactics include beehive and electric fences.
Human relations
Working animal
Elephants have been working animals since at least the Indus Valley civilization over 4,000 years ago and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old, the age range when they are both more trainable and can work for more years. They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used. Individuals of the Asian species have often been trained as working animals. Asian elephants are used to carry and pull both objects and people in and out of areas as well as lead people in religious celebrations. They are valued over mechanised tools as they can perform the same tasks but in more difficult terrain, with strength, memory, and delicacy. Elephants can learn over 30 commands. Musth bulls are difficult and dangerous to work with and so are chained up until their condition passes.
In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under The Prevention of Cruelty to Animals Act of 1960. In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants resulting in health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live.
The practice of working elephants has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre.
Warfare
Historically, elephants were considered formidable instruments of war. They were described in Sanskrit texts as far back as 1500 BC. From South Asia, the use of elephants in warfare spread west to Persia and east to Southeast Asia. The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BC) while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to the 20th century. War elephants were also employed in the Mediterranean and North Africa throughout the classical period since the reign of Ptolemy II in Egypt. The Carthaginian general Hannibal famously took African elephants across the Alps during his war with the Romans and reached the Po Valley in 218 BC with all of them alive, though they died of disease and combat a year later.
An elephant's head and sides were equipped with armour, the trunk may have had a sword tied to it and tusks were sometimes covered with sharpened iron or brass. Trained elephants would attack both humans and horses with their tusks. They might have grasped an enemy soldier with the trunk and tossed him to their mahout, or pinned the soldier to the ground and speared him. Some shortcomings of war elephants included their great visibility, which made them easy to target, and limited maneuverability compared to horses. Alexander the Great achieved victory over armies with war elephants by having his soldiers injure the trunks and legs of the animals which caused them to panic and become uncontrollable.
Zoos and circuses
Elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus. These animals do not reproduce well in captivity due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, imports of the species almost stopped by the end of the 1980s. Subsequently, the US received many captive African elephants from Zimbabwe, which had an overabundance of the animals.
Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they allow easy access to the animals and provide funds and knowledge for preserving their natural habitats, as well as safekeeping for the species. Opponents claim that animals in zoos are under physical and mental stress. Elephants have been recorded displaying stereotypical behaviours in the form of wobbling the body or head and pacing the same route both forwards and backwards. This has been observed in 54% of individuals in UK zoos. Elephants in European zoos appear to have shorter lifespans than their wild counterparts at only 17 years, although other studies suggest that zoo elephants live just as long.
The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals. In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins, and on their legs with metal-tipped prods, called bull hooks or ankuses. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was rebuked for using an electric prod on an elephant. Despite this, he denied that any of these practices hurt the animals. Some trainers have tried to train elephants without the use of physical punishment. Ralph Helfer is known to have relied on positive reinforcement when training his animals. The Barnum & Bailey Circus retired its touring elephants in May 2016.
Attacks
Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans. In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive. In parts of India, male elephants have entered villages at night, destroying homes and killing people. From 2000 to 2004, 300 people died in Jharkhand, and in Assam, 239 people were reportedly killed between 2001 and 2006.
Throughout the country, 1,500 people were killed by elephants between 2019 and 2022, which led to 300 elephants being killed in retaliation. Local people have reported that some elephants were drunk during the attacks, though officials have disputed this. Purportedly drunk elephants attacked an Indian village in December 2002, killing six people, which led to the retaliatory slaughter of about 200 elephants by locals.
Cultural significance
Elephants have a universal presence in global culture. They have been represented in art since Paleolithic times. Africa, in particular, contains many examples of elephant rock art, especially in the Sahara and southern Africa. In Asia, the animals are depicted as motifs in Hindu and Buddhist shrines and temples. Elephants were often difficult to portray by people with no first-hand experience of them. The ancient Romans, who kept the animals in captivity, depicted elephants more accurately than medieval Europeans who portrayed them more like fantasy creatures, with horse, bovine, and boar-like traits, and trumpet-like trunks. As Europeans gained more access to captive elephants during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci.
Elephants have been the subject of religious beliefs. The Mbuti people of central Africa believe that the souls of their dead ancestors reside in elephants. Similar ideas existed among other African societies, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu, in modern-day Nigeria, placed elephant tusks underneath their dead leader's feet in the grave. The animals' importance is only totemic in Africa but is much more significant in Asia. In Sumatra, elephants have been associated with lightning. Likewise, in Hinduism, they are linked with thunderstorms as Airavata, the father of all elephants, represents both lightning and rainbows. One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma in some traditions. Ganesha is associated with writers and merchants, and it is believed that he can give people success as well as grant them their desires, but could also take these things away. In Buddhism, Buddha is said to have taken the form of a white elephant when he entered his mother's womb to be reincarnated as a human.
In Western popular culture, elephants symbolise the exotic, especially since – as with the giraffe, hippopotamus, and rhinoceros – there are no similar animals familiar to Western audiences. As characters, elephants are most common in children's stories, where they are portrayed positively. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to or finding a family, such as "The Elephant's Child" from Rudyard Kipling's Just So Stories, Disney's Dumbo, and Kathryn and Byron Jackson's The Saggy Baggy Elephant. Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer, and Dr. Seuss's Horton.
Several cultural references emphasise the elephant's size and strangeness. For instance, a "white elephant" is a byword for something that is weird, unwanted, and without value. The expression "elephant in the room" refers to something that is being ignored but ultimately must be addressed. The story of the blind men and an elephant involves blind men touching different parts of an elephant and trying to figure out what it is.
| Biology and health sciences | Proboscidea | null |
9284 | https://en.wikipedia.org/wiki/Equation | Equation | In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign . The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
Description
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (also commonly called an algebraic equation), in which the two sides are polynomials.
The sides of a polynomial equation contain one or more terms. For example, the equation
Ax² + Bx + C − y = 0
has left-hand side Ax² + Bx + C − y, which has four terms, and right-hand side 0, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side.
Properties
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to (a worked example follows the list):
Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
Multiplying or dividing both sides of an equation by a non-zero quantity.
Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
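To make these operations concrete, here is a short worked derivation (an illustrative example added here, not from the original article) that solves a simple linear equation using only the first two operations, so every line is equivalent to the one before it:

```latex
\begin{align*}
2x + 3 &= 7 \\
2x + 3 - 3 &= 7 - 3 && \text{subtract the same quantity from both sides} \\
2x &= 4 \\
2x / 2 &= 4 / 2 && \text{divide both sides by the non-zero quantity } 2 \\
x &= 2
\end{align*}
```

Because each step is an equivalence transformation, the solution set {2} is preserved at every line.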
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function that maps s to s² to both sides of the equation) changes the equation to x² = 1, which not only has the previous solution but also introduces the extraneous solution x = −1. Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
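The gain of an extraneous solution can be checked computationally; the following sketch (illustrative, not part of the original article) uses the SymPy library:

```python
import sympy as sp

x = sp.symbols('x')

# The original equation x = 1 has exactly one solution.
print(sp.solve(sp.Eq(x, 1), x))      # [1]

# Squaring both sides gives x**2 = 1, which keeps the original
# solution but gains the extraneous solution x = -1.
print(sp.solve(sp.Eq(x**2, 1), x))   # [-1, 1]
```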
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
Examples
Analogous illustration
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
Parameters and unknowns
Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
x² + y² = R².
When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax² + bx + c = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
3x + 5y = 2
5x + 8y = 3
has the unique solution x = −1, y = 1.
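Such a solution is easy to verify or compute numerically; the snippet below (an illustrative sketch, not from the original article, and assuming the 2×2 system reconstructed above) uses NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   3x + 5y = 2
#   5x + 8y = 3
A = np.array([[3.0, 5.0],
              [5.0, 8.0]])
b = np.array([2.0, 3.0])

print(np.linalg.solve(A, b))  # [-1.  1.]  i.e. x = -1, y = 1
```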
Identities
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
x² − y² = (x + y)(x − y),
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
sin²(θ) + cos²(θ) = 1
and
sin(2θ) = 2 sin(θ) cos(θ),
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
3 sin(θ) cos(θ) = 1,
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
(3/2) sin(2θ) = 1,
yielding the following solution for θ:
θ = (1/2) arcsin(2/3) ≈ 20.9°.
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
Algebra
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations, where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to determine the existence or absence of a solution and, if solutions exist, to count them.
Polynomial equations
In general, an algebraic equation or polynomial equation is an equation of the form
P = 0, or
P = Q,
where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example,
x⁵ − 3x + 1 = 0
is a univariate algebraic (polynomial) equation with integer coefficients and
y⁴ + xy/2 = x³/3 − xy² + y² − 1/7
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
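The degree-five threshold can be seen in a computer algebra system; the sketch below (illustrative, not from the original article; the quintic x⁵ − x − 1 is a standard example whose Galois group is the full symmetric group S₅) uses SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Degree 2: solvable algebraically, e.g. via the quadratic formula.
print(sp.solve(x**2 - x - 1, x))   # two roots in radicals: 1/2 ± sqrt(5)/2

# Degree 5: x**5 - x - 1 = 0 has no solution in radicals, so SymPy
# can only express the roots implicitly (as CRootOf objects).
print(sp.solve(x**5 - x - 1, x))
```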
A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
Systems of linear equations
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,
3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
x = 1, y = −2, z = −2,
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
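One such computational algorithm is Gaussian elimination, mentioned earlier. The sketch below (an illustrative implementation added here, not from the original article; it assumes the 3×3 example system reconstructed above) reduces the system to triangular form and back-substitutes:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: reduce A to upper-triangular form.
    for k in range(n - 1):
        # Pivot on the largest entry in column k to limit round-off error.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3, 2, -1], [2, -2, 4], [-1, 0.5, -1]])
b = np.array([1, -2, 0])
print(gaussian_elimination(A, b))   # [ 1. -2. -2.]
```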
Geometry
Analytic geometry
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form ax + by + cz + d = 0, where a, b, c and d are real numbers and x, y, z are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a, b, c are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in R² or as the solution set of two linear equations with values in R³.
A conic section is the intersection of a cone with equation x² + y² = z² and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
Cartesian equations
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4.
Parametric equations
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,
x = cos(t), y = sin(t)
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
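The link between the parametric and Cartesian descriptions can be checked numerically; the sketch below (illustrative, not from the original article) samples the parameter and verifies that each point lies on the unit circle x² + y² = 1:

```python
import math

# Sample the parameter t and confirm each parametric point
# (cos t, sin t) satisfies the implicit equation x**2 + y**2 = 1.
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = math.cos(t), math.sin(t)
    assert math.isclose(x**2 + y**2, 1.0, abs_tol=1e-12)
print("all sampled parametric points lie on the unit circle")
```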
Number theory
Diophantine equations
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is ax + by = c, where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
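Algorithmically, a linear Diophantine equation ax + by = c has integer solutions exactly when gcd(a, b) divides c, and the extended Euclidean algorithm produces one; the sketch below is illustrative and not from the original article:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y == c, or None."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None            # no integer solutions exist
    k = c // g
    return (x * k, y * k)

print(solve_linear_diophantine(6, 10, 8))  # (8, -4), since 6*8 + 10*(-4) = 8
```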
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
Algebraic and transcendental numbers
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
Algebraic geometry
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
Differential equations
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
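A minimal sketch of that numerical route (illustrative, not from the original article): the explicit Euler method steps along the local rate of change. Here it is applied to dy/dt = y with y(0) = 1, whose exact solution at t = 1 is e:

```python
import math

def euler(f, t0, y0, t_end, n_steps):
    """Approximate y(t_end) for dy/dt = f(t, y) with y(t0) = y0."""
    h = (t_end - t0) / n_steps      # step size
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)            # follow the tangent line for one step
        t += h
    return y

approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
print(approx, math.e)  # ~2.71692 vs 2.71828...; the error shrinks with more steps
```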
Ordinary differential equations
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
Partial differential equations
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
Types of equations
Equations can be classified according to the types of operations and quantities involved. Important types include:
An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:
linear equation for degree one
quadratic equation for degree two
cubic equation for degree three
quartic equation for degree four
quintic equation for degree five
sextic equation for degree six
septic equation for degree seven
octic equation for degree eight
A Diophantine equation is an equation where the unknowns are required to be integers
A transcendental equation is an equation involving a transcendental function of its unknowns
A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters, appearing in the equations
A functional equation is an equation in which the unknowns are functions rather than simple quantities
Equations involving derivatives, integrals and finite differences:
A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as f′(x) = f(x). Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface
An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
A functional differential equation, or delay differential equation, is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as f′(x) = f(x − 1)
A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some whole integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation (see the sketch after this list)
A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
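As a concrete instance of a difference equation (an illustrative example added here, not from the original article), the Fibonacci recurrence f(n) = f(n−1) + f(n−2) has order 2, and its solution can be computed by iterating the relation:

```python
def fibonacci(n):
    """Iterate the order-2 difference equation f(n) = f(n-1) + f(n-2)
    with initial values f(0) = 0 and f(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```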
| Mathematics | Algebra | null |
9300 | https://en.wikipedia.org/wiki/Ediacaran | Ediacaran | The Ediacaran ( ) is a geological period of the Neoproterozoic Era that spans 96 million years from the end of the Cryogenian Period at 635 Mya to the beginning of the Cambrian Period at 538.8 Mya. It is the last period of the Proterozoic Eon as well as the last of the so-called "Precambrian supereon", before the beginning of the subsequent Cambrian Period marks the start of the Phanerozoic Eon, where recognizable fossil evidence of life becomes common.
The Ediacaran Period is named after the Ediacara Hills of South Australia, where trace fossils of a diverse community of previously unrecognized lifeforms (later named the Ediacaran biota) were first discovered by geologist Reg Sprigg in 1946. Its status as an official geological period was ratified in 2004 by the International Union of Geological Sciences (IUGS), making it the first new geological period declared in 120 years. Although the period takes its name from the Ediacara Hills in the Nilpena Ediacara National Park, the type section is actually located in the bed of the Enorama Creek within the Brachina Gorge in the Ikara-Flinders Ranges National Park, at , approximately southeast of the Ediacara Hills fossil site.
The Ediacaran marks the first widespread appearance of complex multicellular fauna following the end of the Cryogenian global glaciation known as the Snowball Earth. The relatively sudden evolutionary radiation event, known as the Avalon Explosion, is represented by now-extinct, relatively simple soft-bodied animal phyla such as Proarticulata (bilaterians with simple articulation, e.g. Dickinsonia and Spriggina), Petalonamae (sea pen-like animals, e.g. Charnia), Aspidella (radial-shaped animals, e.g. Cyclomedusa) and Trilobozoa (animals with tri-radial symmetry, e.g. Tribrachidium). Most of these organisms appeared during or after the Avalon explosion 575 million years ago and died out during the End-Ediacaran extinction event 539 million years ago. Forerunners of some modern animal phyla also appeared during this period, including cnidarians and early bilaterians, as well as mollusc-like Kimberella. Hard-bodied organisms with mineralized shells also began their fossil record in the last few million years of the Ediacaran.
The supercontinent Pannotia formed and broke apart by the end of the period. The Ediacaran also witnessed several glaciation events, such as the Gaskiers and Baykonurian glaciations. The Shuram excursion also occurred during this period, but its glacial origin is unlikely.
Ediacaran vs. Vendian
The Ediacaran Period overlaps but is shorter than the Vendian Period (650 to 543 million years ago), a name proposed earlier, in 1952, by Russian geologist and paleontologist Boris Sokolov. The Vendian concept was formed stratigraphically top-down, and the lower boundary of the Cambrian became the upper boundary of the Vendian.
Paleontological substantiation of this boundary was worked out separately for the siliciclastic basin (base of the Baltic Stage of the Eastern European Platform) and for the carbonate basin (base of the Tommotian stage of the Siberian Platform).
The lower boundary of the Vendian was suggested to be defined at the base of the Varanger (Laplandian) tillites.
The Vendian in its type area consists of large subdivisions such as Laplandian, Redkino, Kotlin and Rovno regional stages with the globally traceable subdivisions and their boundaries, including its lower one.
The Redkino, Kotlin and Rovno regional stages have been substantiated in the type area of the Vendian on the basis of the abundant organic-walled microfossils, megascopic algae, metazoan body fossils and ichnofossils.
The lower boundary of the Vendian could have a biostratigraphic substantiation as well taking into consideration the worldwide occurrence of the Pertatataka assemblage of giant acanthomorph acritarchs.
Upper and lower boundaries
The Ediacaran Period (c. 635–538.8 Mya) represents the time from the end of global Marinoan glaciation to the first appearance worldwide of somewhat complicated trace fossils (Treptichnus pedum (Seilacher, 1955)).
Although the Ediacaran Period does contain soft-bodied fossils, it is unusual in comparison to later periods because its beginning is not defined by a change in the fossil record. Rather, the beginning is defined at the base of a chemically distinctive carbonate layer that is referred to as a "cap carbonate", because it caps glacial deposits.
This bed is characterized by an unusual depletion of 13C that indicates a sudden climatic change at the end of the Marinoan ice age. The lower global boundary stratotype section (GSSP) of the Ediacaran is at the base of the cap carbonate (Nuccaleena Formation), immediately above the Elatina diamictite in the Enorama Creek section, Brachina Gorge, Flinders Ranges, South Australia.
The GSSP of the upper boundary of the Ediacaran is the lower boundary of the Cambrian on the SE coast of Newfoundland approved by the International Commission on Stratigraphy as a preferred alternative to the base of the Tommotian Stage in Siberia which was selected on the basis of the ichnofossil Treptichnus pedum (Seilacher, 1955). In the history of stratigraphy it was the first case of usage of bioturbations for the System boundary definition.
Nevertheless, the definitions of the lower and upper boundaries of the Ediacaran on the basis of chemostratigraphy and ichnofossils are disputable.
Cap carbonates generally have a restricted geographic distribution (due to specific conditions of their precipitation) and usually siliciclastic sediments laterally replace the cap carbonates in a rather short distance but cap carbonates do not occur above every tillite elsewhere in the world.
The C-isotope chemostratigraphic characteristics obtained for contemporaneous cap carbonates in different parts of the world may be variable in a wide range owing to different degrees of secondary alteration of carbonates, dissimilar criteria used for selection of the least altered samples, and, as far as the C-isotope data are concerned, due to primary lateral variations of δ13Ccarb in the upper layer of the ocean.
Furthermore, Oman presents in its stratigraphic record a large negative carbon isotope excursion, within the Shuram Formation, that is clearly away from any glacial evidence, strongly questioning the systematic association of negative δ13Ccarb excursions with glacial events. Also, the Shuram excursion is prolonged and is estimated to have lasted ~9.0 Myr.
As to the Treptichnus pedum, a reference ichnofossil for the lower boundary of the Cambrian, its usage for the stratigraphic detection of this boundary is always risky, because of the occurrence of very similar trace fossils belonging to the Treptichnids group well below the level of T. pedum in Namibia, Spain and Newfoundland, and possibly, in the western United States. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain.
Subdivisions
The Ediacaran Period is not yet formally subdivided, but a proposed scheme recognises an Upper Ediacaran whose base corresponds with the Gaskiers glaciation, comprising a stage beginning around 575 Ma with the earliest widespread Ediacaran biota fossils and a Terminal Ediacaran Stage starting around 550 Ma; two proposed schemes differ on whether the lower strata should be divided into an Early and Middle Ediacaran or not, because it is not clear whether the Shuram excursion (which would divide the Early and Middle) is a separate event from the Gaskiers, or whether the two events are correlated.
Absolute dating
The dating of the rock type section of the Ediacaran Period in South Australia has proven uncertain due to lack of overlying igneous material. Therefore, the age range of 635 to 538.8 million years is based on correlations to other countries where dating has been possible. The base age of approximately 635 million years is based on U–Pb (uranium–lead) and Re–Os (rhenium–osmium) dating from Africa, China, North America, and Tasmania.
Biota
The fossil record from the Ediacaran Period is sparse, as more easily fossilized hard-shelled animals had yet to evolve. The Ediacaran biota include the oldest definite multicellular organisms (with specialized tissues), the most common types of which resemble segmented worms, fronds, disks, or immobile bags. Auroralumina was a cnidarian.
Most members of the Ediacaran biota bear little resemblance to modern lifeforms, and their relationship even with the immediately following lifeforms of the Cambrian explosion is rather difficult to interpret. More than 100 genera have been described, and well known forms include Arkarua, Charnia, Dickinsonia, Ediacaria, Marywadea, Cephalonega, Pteridinium, and Yorgia. However, despite the enigmatic nature of most Ediacaran organisms, some fossils identifiable as hard-shelled agglutinated foraminifera (which are not classified as animals) are known from latest Ediacaran sediments of western Siberia. Sponges recognisable as such also lived during the Ediacaran.
Four different biotic intervals are known in the Ediacaran, each being characterised by the prominence of a unique ecology and faunal assemblage. The first spanned from 635 to around 575 Ma and was dominated by acritarchs known as large ornamented Ediacaran microfossils. The second spanned from around 575 to 560 Ma and was characterised by the Avalon biota. The third spanned from 560 to 550 Ma; its biota has been dubbed the White Sea biota due to many fossils from this time being found along the coasts of the White Sea. The fourth lasted from 550 to 539 Ma and is known as the interval of the Nama biotic assemblage.
There is evidence for a mass extinction during this period from early animals changing the environment, dating to the same time as the transition between the White Sea and the Nama-type biotas. Alternatively, this mass extinction has also been theorised to have been the result of an anoxic event.
Astronomical factors
The relative proximity of the Moon at this time meant that tides were stronger and more rapid than they are now. The day was 21.9 ± 0.4 hours, and there were 13.1 ± 0.1 synodic months/year and 400 ± 7 solar days/year.
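As a rough consistency check on these figures (an illustrative calculation added here, not from the original article): a shorter day with an unchanged orbital period means more, shorter days must fit into the same year, since the year's total duration in hours is fixed:

```latex
400 \,\text{solar days/yr} \times 21.9 \,\text{h/day} \approx 8760 \,\text{h/yr} \approx 365.25 \,\text{days/yr} \times 24 \,\text{h/day}
```

So the quoted day length and day count are mutually consistent with a modern-length year.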
Documentaries
A few English language documentaries have featured the Ediacaran Period and biota:
The Time Traveller's Guide To Australia (2012, ABC Science; Part 1 of 4).
The Geological History of Canada, as part of The Nature of Things series, CBC-SRC; 2011; Eastern Canada.
The first episode of a BBC documentary titled Life on Earth, with David Attenborough as narrator.
Another documentary narrated by David Attenborough titled First Life featuring Charnia, Dickinsonia, Spriggina, Funisia, and Kimberella animated in CGI.
In Our Time – Ediacara Biota, BBC, 9 July 2009.
| Physical sciences | Geological timescale | Earth science |
9311 | https://en.wikipedia.org/wiki/Endocrinology | Endocrinology | Endocrinology (from endocrine + -ology) is a branch of biology and medicine dealing with the endocrine system, its diseases, and its specific secretions known as hormones. It is also concerned with the integration of developmental events such as proliferation, growth, and differentiation, and with the psychological or behavioral activities of metabolism, growth and development, tissue function, sleep, digestion, respiration, excretion, mood, stress, lactation, movement, reproduction, and sensory perception caused by hormones. Specializations include behavioral endocrinology and comparative endocrinology.
The endocrine system consists of several glands, all in different parts of the body, that secrete hormones directly into the blood rather than into a duct system. Therefore, endocrine glands are regarded as ductless glands. Hormones have many different functions and modes of action; one hormone may have several effects on different target organs, and, conversely, one target organ may be affected by more than one hormone.
The endocrine system
Endocrinology is the study of the endocrine system in the human body. This is a system of glands which secrete hormones. Hormones are chemicals that affect the actions of different organ systems in the body. Examples include thyroid hormone, growth hormone, and insulin. The endocrine system involves a number of feedback mechanisms, so that often one hormone (such as thyroid stimulating hormone) will control the action or release of another secondary hormone (such as thyroid hormone). If there is too much of the secondary hormone, it may provide negative feedback to the primary hormone, maintaining homeostasis.
In the original 1902 definition by Bayliss and Starling (see below), they specified that, to be classified as a hormone, a chemical must be produced by an organ, be released (in small amounts) into the blood, and be transported by the blood to a distant organ to exert its specific function. This definition holds for most "classical" hormones, but there are also paracrine mechanisms (chemical communication between cells within a tissue or organ), autocrine signals (a chemical that acts on the same cell), and intracrine signals (a chemical that acts within the same cell). A neuroendocrine signal is a "classical" hormone that is released into the blood by a neurosecretory neuron (see article on neuroendocrinology).
Hormones
Griffin and Ojeda identify three different classes of hormones based on their chemical composition:
Amines
Amines, such as norepinephrine, epinephrine, and dopamine (catecholamines), are derived from single amino acids, in this case tyrosine. Thyroid hormones such as 3,5,3'-triiodothyronine (T3) and 3,5,3',5'-tetraiodothyronine (thyroxine, T4) make up a subset of this class because they derive from the combination of two iodinated tyrosine amino acid residues.
Peptide and protein
Peptide hormones and protein hormones consist of three (in the case of thyrotropin-releasing hormone) to more than 200 (in the case of follicle-stimulating hormone) amino acid residues and can have a molecular mass as large as 31,000 grams per mole. All hormones secreted by the pituitary gland are peptide hormones, as are leptin from adipocytes, ghrelin from the stomach, and insulin from the pancreas.
Steroid
Steroid hormones are converted from their parent compound, cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. Some forms of vitamin D, such as calcitriol, are steroid-like and bind to homologous receptors, but lack the characteristic fused ring structure of true steroids.
As a profession
Although every organ system secretes and responds to hormones (including the brain, lungs, heart, intestine, skin, and the kidneys), the clinical specialty of endocrinology focuses primarily on the endocrine organs, meaning the organs whose primary function is hormone secretion. These organs include the pituitary, thyroid, adrenals, ovaries, testes, and pancreas.
An endocrinologist is a physician who specializes in treating disorders of the endocrine system, such as diabetes, hyperthyroidism, and many others (see list of diseases).
Work
The medical specialty of endocrinology involves the diagnostic evaluation of a wide variety of symptoms and variations and the long-term management of disorders of deficiency or excess of one or more hormones.
The diagnosis and treatment of endocrine diseases are guided by laboratory tests to a greater extent than for most specialties. Many diseases are investigated through excitation/stimulation or inhibition/suppression testing. This might involve injection with a stimulating agent to test the function of an endocrine organ. Blood is then sampled to assess the changes of the relevant hormones or metabolites. An endocrinologist needs extensive knowledge of clinical chemistry and biochemistry to understand the uses and limitations of the investigations.
A second important aspect of the practice of endocrinology is distinguishing human variation from disease. Atypical patterns of physical development and abnormal test results must be assessed to determine whether or not they indicate disease. Diagnostic imaging of endocrine organs may reveal incidental findings called incidentalomas, which may or may not represent disease.
Endocrinology involves caring for the person as well as the disease. Most endocrine disorders are chronic diseases that need lifelong care. Some of the most common endocrine diseases include diabetes mellitus, hypothyroidism and the metabolic syndrome. Care of diabetes, obesity and other chronic diseases necessitates understanding the patient at the personal and social level as well as the molecular, and the physician–patient relationship can be an important therapeutic process.
Apart from treating patients, many endocrinologists are involved in clinical science and medical research, teaching, and hospital management.
Training
Endocrinologists are specialists of internal medicine or pediatrics. Reproductive endocrinologists deal primarily with problems of fertility and menstrual function—often training first in obstetrics. Most qualify as an internist, pediatrician, or gynecologist for a few years before specializing, depending on the local training system. In the U.S. and Canada, training for board certification in internal medicine, pediatrics, or gynecology after medical school is called residency. Further formal training to subspecialize in adult, pediatric, or reproductive endocrinology is called a fellowship. Typical training for a North American endocrinologist involves 4 years of college, 4 years of medical school, 3 years of residency, and 2 years of fellowship. In the US, adult endocrinologists are board certified by the American Board of Internal Medicine (ABIM) or the American Osteopathic Board of Internal Medicine (AOBIM) in Endocrinology, Diabetes and Metabolism.
Diseases treated by endocrinologists
Diabetes mellitus: a chronic condition that affects how the body regulates blood sugar. There are two main types: type 1 diabetes, an autoimmune disease in which the body attacks the cells that produce insulin, and type 2 diabetes, in which the body either does not produce enough insulin or does not use it effectively.
Thyroid disorders: conditions that affect the thyroid gland, a butterfly-shaped gland located in the front of the neck. The thyroid gland produces hormones that regulate metabolism, heart rate, and body temperature. Common thyroid disorders include hyperthyroidism (overactive thyroid) and hypothyroidism (underactive thyroid).
Adrenal disorders: the adrenal glands, located on top of the kidneys, produce hormones that help regulate blood pressure, blood sugar, and the body's response to stress. Common adrenal disorders include Cushing syndrome (excess cortisol production) and Addison's disease (adrenal insufficiency).
Pituitary disorders: the pituitary gland is a pea-sized gland at the base of the brain that produces hormones controlling many other hormone-producing glands in the body. Common pituitary disorders include acromegaly (excess growth hormone production) and Cushing's disease (excess ACTH production).
Metabolic disorders: conditions that affect how the body converts food into energy. Common metabolic disorders include obesity, high cholesterol, and gout.
Calcium and bone disorders: endocrinologists also treat conditions that affect calcium levels in the blood, such as hyperparathyroidism (excess parathyroid hormone), and bone diseases such as osteoporosis (weakened bone).
Sexual and reproductive disorders: endocrinologists also diagnose and treat hormonal problems that affect sexual development and function, such as polycystic ovary syndrome (PCOS) and erectile dysfunction.
Endocrine cancers: cancers that develop in the endocrine glands; endocrinologists help diagnose and treat them.
Diseases and medicine
Diseases
See main article at Endocrine diseases
Endocrinology also involves the study of the diseases of the endocrine system. These diseases may relate to too little or too much secretion of a hormone, too little or too much action of a hormone, or problems with receiving the hormone.
Societies and organizations
Because endocrinology encompasses so many conditions and diseases, there are many organizations that provide education to patients and the public. The Hormone Foundation is the public education affiliate of The Endocrine Society and provides information on all endocrine-related conditions. Other educational organizations that focus on one or more endocrine-related conditions include the American Diabetes Association, Human Growth Foundation, American Menopause Foundation, Inc., and American Thyroid Association.
In North America the principal professional organizations of endocrinologists include The Endocrine Society, the American Association of Clinical Endocrinologists, the American Diabetes Association, the Lawson Wilkins Pediatric Endocrine Society, and the American Thyroid Association.
In Europe, the European Society of Endocrinology (ESE) and the European Society for Paediatric Endocrinology (ESPE) are the main organisations representing professionals in the fields of adult and paediatric endocrinology, respectively.
In the United Kingdom, the Society for Endocrinology and the British Society for Paediatric Endocrinology and Diabetes are the main professional organisations.
The European Society for Paediatric Endocrinology is the largest international professional association dedicated solely to paediatric endocrinology. There are numerous similar associations around the world.
History
The earliest study of endocrinology began in China. The Chinese were isolating sex and pituitary hormones from human urine and using them for medicinal purposes by 200 BC. They used many complex methods, such as sublimation of steroid hormones. Another method, specified in Chinese texts (the earliest dating to 1110), used saponin (from the beans of Gleditsia sinensis) to extract hormones, but gypsum (containing calcium sulfate) was also known to have been used.
Although most of the relevant tissues and endocrine glands had been identified by early anatomists, a more humoral approach to understanding biological function and disease was favoured by the ancient Greek and Roman thinkers such as Aristotle, Hippocrates, Lucretius, Celsus, and Galen, according to Freeman et al., and these theories held sway until the advent of germ theory, physiology, and organ basis of pathology in the 19th century.
In 1849, Arnold Berthold noted that castrated cockerels did not develop combs and wattles or exhibit overtly male behaviour. He found that replacement of testes back into the abdominal cavity of the same bird or another castrated bird resulted in normal behavioural and morphological development, and he concluded (erroneously) that the testes secreted a substance that "conditioned" the blood that, in turn, acted on the body of the cockerel. In fact, one of two other things could have been true: that the testes modified or activated a constituent of the blood or that the testes removed an inhibitory factor from the blood. It was not proven that the testes released a substance that engenders male characteristics until it was shown that the extract of testes could replace their function in castrated animals. Pure, crystalline testosterone was isolated in 1935.
Graves' disease was named after Irish doctor Robert James Graves, who described a case of goiter with exophthalmos in 1835. The German Karl Adolph von Basedow also independently reported the same constellation of symptoms in 1840, while earlier reports of the disease were also published by the Italians Giuseppe Flajani and Antonio Giuseppe Testa, in 1802 and 1810 respectively, and by the English physician Caleb Hillier Parry (a friend of Edward Jenner) in the late 18th century. Thomas Addison was first to describe Addison's disease in 1849.
In 1902 William Bayliss and Ernest Starling performed an experiment in which they observed that acid instilled into the duodenum caused the pancreas to begin secretion, even after they had removed all nervous connections between the two. The same response could be produced by injecting extract of jejunum mucosa into the jugular vein, showing that some factor in the mucosa was responsible. They named this substance "secretin" and coined the term hormone for chemicals that act in this way.
Joseph von Mering and Oskar Minkowski made the observation in 1889 that removing the pancreas surgically led to an increase in blood sugar, followed by a coma and eventual death—symptoms of diabetes mellitus. In 1922, Banting and Best realized that homogenizing the pancreas and injecting the derived extract reversed this condition.
Neurohormones were first identified by Otto Loewi in 1921. He incubated a frog's heart (with its vagus nerve still attached) in a saline bath and left it in the solution for some time. The solution was then used to bathe a second, non-innervated heart. When the vagus nerve of the first heart was stimulated, negative inotropic (beat amplitude) and chronotropic (beat rate) activity was seen in both hearts; this did not occur in either heart when the vagus nerve was not stimulated. The vagus nerve was therefore adding something to the saline solution. The effect could be blocked using atropine, a known inhibitor of vagal stimulation of the heart. Clearly, something was being secreted by the vagus nerve and affecting the heart. The "vagusstuff" (as Loewi called it) causing these effects was later identified as acetylcholine and norepinephrine. Loewi won the Nobel Prize for his discovery.
Recent work in endocrinology focuses on the molecular mechanisms responsible for triggering the effects of hormones. The first example of such work was done in 1962 by Earl Sutherland. Sutherland investigated whether hormones enter cells to evoke action or stay outside of cells. He studied norepinephrine, which acts on the liver to convert glycogen into glucose via activation of the enzyme phosphorylase. He homogenized the liver into a membrane fraction and a soluble fraction (phosphorylase is soluble), added norepinephrine to the membrane fraction, extracted its soluble products, and added them to the first soluble fraction. Phosphorylase was activated, indicating that norepinephrine's target receptor was on the cell membrane, not located intracellularly. He later identified the mediating compound as cyclic AMP (cAMP), and with this discovery created the concept of second-messenger-mediated pathways. He, like Loewi, won the Nobel Prize for his groundbreaking work in endocrinology.
| Biology and health sciences | Fields of medicine | Health |
9312 | https://en.wikipedia.org/wiki/Endocrine%20system | Endocrine system | The endocrine system is a messenger system in an organism comprising feedback loops of hormones that are released by internal glands directly into the circulatory system and that target and regulate distant organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems.
In humans, the major endocrine glands are the thyroid, parathyroid, pituitary, pineal, and adrenal glands, and the (male) testis and (female) ovaries. The hypothalamus, pancreas, and thymus also function as endocrine glands, among other functions. (The hypothalamus and pituitary glands are organs of the neuroendocrine system. One of the most important functions of the hypothalamus, which is located in the brain adjacent to the pituitary gland, is to link the endocrine system to the nervous system via the pituitary gland.) Other organs, such as the kidneys, also have roles within the endocrine system by secreting certain hormones. The study of the endocrine system and its disorders is known as endocrinology.
The thyroid secretes thyroxine, the pituitary secretes growth hormone, the pineal secretes melatonin, the testis secretes testosterone, and the ovaries secrete estrogen and progesterone.
Glands that signal each other in sequence are often referred to as an axis, such as the hypothalamic–pituitary–adrenal axis. In addition to the specialized endocrine organs mentioned above, many other organs that are part of other body systems have secondary endocrine functions, including bone, kidneys, liver, heart and gonads. For example, the kidney secretes the endocrine hormone erythropoietin. Hormones can be amino acid complexes, steroids, eicosanoids, leukotrienes, or prostaglandins.
The endocrine system is contrasted both to exocrine glands, which secrete their products to the outside of the body, and to paracrine signalling, in which cells communicate over a relatively short distance. Endocrine glands have no ducts, are vascular, and commonly have intracellular vacuoles or granules that store their hormones. In contrast, exocrine glands, such as salivary glands, mammary glands, and submucosal glands within the gastrointestinal tract, tend to be much less vascular and have ducts or a hollow lumen.
Endocrinology is a branch of internal medicine.
Structure
Major endocrine systems
The human endocrine system consists of several systems that operate via feedback loops. Several important feedback systems are mediated via the hypothalamus and pituitary.
TRH – TSH – T3/T4
GnRH – LH/FSH – sex hormones
CRH – ACTH – cortisol
Renin – angiotensin – aldosterone
Leptin vs. ghrelin
Glands
Endocrine glands are glands of the endocrine system that secrete their products, hormones, directly into interstitial spaces where they are absorbed into blood rather than through a duct. The major glands of the endocrine system include the pineal gland, pituitary gland, pancreas, ovaries, testes, thyroid gland, parathyroid gland, hypothalamus and adrenal glands. The hypothalamus and pituitary gland are neuroendocrine organs.
The hypothalamus and the anterior pituitary are two of the three endocrine glands that make up the hypothalamic–pituitary–adrenal (HPA) axis (the third being the adrenal gland), which is known to play a central role in cell signaling between the nervous and endocrine systems.
Hypothalamus: The hypothalamus is a key regulator of the autonomic nervous system. It has three sets of endocrine outputs: the magnocellular system, the parvocellular system, and autonomic intervention. The magnocellular system is involved in the expression of oxytocin or vasopressin; the parvocellular system controls the secretion of hormones from the anterior pituitary.
Anterior Pituitary: The main role of the anterior pituitary gland is to produce and secrete tropic hormones. Some examples of tropic hormones secreted by the anterior pituitary gland include TSH, ACTH, GH, LH, and FSH.
Cells
Many types of cells make up the endocrine system, and these cells typically form larger tissues and organs that function both within and outside of the endocrine system.
Hypothalamus
Anterior pituitary gland
Pineal gland
Posterior pituitary gland
The posterior pituitary gland is a section of the pituitary gland. It does not produce any hormones itself but stores and secretes hormones such as antidiuretic hormone (ADH), which is synthesized by the supraoptic nucleus of the hypothalamus, and oxytocin, which is synthesized by the paraventricular nucleus of the hypothalamus. ADH helps the body retain water, which is important in maintaining the homeostatic balance between blood solutes and water. Oxytocin induces uterine contractions, stimulates lactation, and allows for ejaculation.
Thyroid gland
Follicular cells of the thyroid gland produce and secrete T3 and T4 in response to elevated levels of TRH (produced by the hypothalamus) and subsequent elevated levels of TSH (produced by the anterior pituitary gland). T3 and T4 regulate the metabolic activity and rate of all cells, including cell growth and tissue differentiation.
Parathyroid gland
Epithelial cells of the parathyroid glands are richly supplied with blood from the inferior and superior thyroid arteries and secrete parathyroid hormone (PTH). PTH acts on bone, the kidneys, and the GI tract to increase calcium reabsorption and phosphate excretion. In addition, PTH stimulates the conversion of Vitamin D to its most active variant, 1,25-dihydroxyvitamin D3, which further stimulates calcium absorption in the GI tract.
Thymus gland
Adrenal glands
Adrenal cortex
Adrenal medulla
Pancreas
The pancreas contains nearly 1 to 2 million islets of Langerhans (tissue consisting of cells that secrete hormones) as well as acini, which secrete digestive enzymes.
Alpha cells
The alpha cells of the pancreas secrete glucagon, a hormone that helps maintain homeostatic blood sugar. Glucagon is secreted in response to low blood sugar levels; it stimulates the liver to break down its glycogen stores and release glucose into the bloodstream, raising blood sugar back to normal levels. (Insulin, which lowers blood sugar, is produced by the beta cells described below.)
Beta cells
About 60% of the cells in the islets of Langerhans are beta cells, which secrete insulin. Together with glucagon, insulin maintains blood glucose levels: insulin lowers blood glucose (a hypoglycemic hormone), whereas glucagon raises it.
Delta cells
F cells
Ovaries
Granulosa cells
Testis
Leydig cells
Development
The fetal endocrine system is one of the first systems to develop during prenatal development.
Adrenal glands
The fetal adrenal cortex can be identified within four weeks of gestation. The adrenal cortex originates from the thickening of the intermediate mesoderm. At five to six weeks of gestation, the mesonephros differentiates into a tissue known as the genital ridge. The genital ridge produces the steroidogenic cells for both the gonads and the adrenal cortex. The adrenal medulla is derived from ectodermal cells. Cells that will become adrenal tissue move retroperitoneally to the upper portion of the mesonephros. At seven weeks of gestation, the adrenal cells are joined by sympathetic cells that originate from the neural crest to form the adrenal medulla. At the end of the eighth week, the adrenal glands have been encapsulated and have formed a distinct organ above the developing kidneys. At birth, the adrenal glands weigh approximately eight to nine grams (twice that of the adult adrenal glands) and are 0.5% of the total body weight. At 25 weeks, the adult adrenal cortex zone develops and is responsible for the primary synthesis of steroids during the early postnatal weeks.
Thyroid gland
The thyroid gland develops from two different clusterings of embryonic cells. One part is from the thickening of the pharyngeal floor, which serves as the precursor of the thyroxine (T4) producing follicular cells. The other part is from the caudal extensions of the fourth pharyngobranchial pouches, which results in the parafollicular calcitonin-secreting cells. These two structures are apparent by 16 to 17 days of gestation. Around the 24th day of gestation, the foramen cecum, a thin, flask-like diverticulum of the median anlage, develops. At approximately 24 to 32 days of gestation the median anlage develops into a bilobed structure. By 50 days of gestation, the medial and lateral anlage have fused together. At 12 weeks of gestation, the fetal thyroid is capable of storing iodine for the production of thyroid hormone, under the control of TRH and TSH. At 20 weeks, the fetus is able to implement feedback mechanisms for the production of thyroid hormones. During fetal development, T4 is the major thyroid hormone being produced, while triiodothyronine (T3) and its inactive derivative, reverse T3, are not detected until the third trimester.
Parathyroid glands
A lateral and ventral view of an embryo showing the third (inferior) and fourth (superior) parathyroid glands during the 6th week of embryogenesis
Once the embryo reaches four weeks of gestation, the parathyroid glands begin to develop. The human embryo forms five sets of endoderm-lined pharyngeal pouches. The third and fourth pouches are responsible for developing into the inferior and superior parathyroid glands, respectively. The third pharyngeal pouch encounters the developing thyroid gland, and they migrate down to the lower poles of the thyroid lobes. The fourth pharyngeal pouch later encounters the developing thyroid gland and migrates to the upper poles of the thyroid lobes. At 14 weeks of gestation, the parathyroid glands begin to enlarge from 0.1 mm in diameter to approximately 1–2 mm at birth. The developing parathyroid glands are physiologically functional beginning in the second trimester.
Studies in mice have shown that interfering with the HOX15 gene can cause parathyroid gland aplasia, which suggests the gene plays an important role in the development of the parathyroid gland. The genes TBX1, CRKL, GATA3, GCM2, and SOX3 have also been shown to play a crucial role in the formation of the parathyroid gland. Mutations in the TBX1 and CRKL genes are correlated with DiGeorge syndrome, while mutations in GATA3 have also resulted in a DiGeorge-like syndrome. Malformations in the GCM2 gene have resulted in hypoparathyroidism. Studies of SOX3 gene mutations have demonstrated that it plays a role in parathyroid development. These mutations also lead to varying degrees of hypopituitarism.
Pancreas
The human fetal pancreas begins to develop by the fourth week of gestation. Five weeks later, the pancreatic alpha and beta cells have begun to emerge. Reaching eight to ten weeks into development, the pancreas starts producing insulin, glucagon, somatostatin, and pancreatic polypeptide. During the early stages of fetal development, the number of pancreatic alpha cells outnumbers the number of pancreatic beta cells. The alpha cells reach their peak in the middle stage of gestation. From the middle stage until term, the beta cells continue to increase in number until they reach an approximate 1:1 ratio with the alpha cells. The insulin concentration within the fetal pancreas is 3.6 pmol/g at seven to ten weeks, which rises to 30 pmol/g at 16–25 weeks of gestation. Near term, the insulin concentration increases to 93 pmol/g. The endocrine cells have dispersed throughout the body within 10 weeks. At 31 weeks of development, the islets of Langerhans have differentiated.
While the fetal pancreas has functional beta cells by 14 to 24 weeks of gestation, the amount of insulin that is released into the bloodstream is relatively low. In a study of pregnant women carrying fetuses in the mid-gestation and near term stages of development, the fetuses did not have an increase in plasma insulin levels in response to injections of high levels of glucose. In contrast to insulin, the fetal plasma glucagon levels are relatively high and continue to increase during development. At the mid-stage of gestation, the glucagon concentration is 6 μg/g, compared to 2 μg/g in adult humans. Just like insulin, fetal glucagon plasma levels do not change in response to an infusion of glucose. However, a study of an infusion of alanine into pregnant women was shown to increase the cord blood and maternal glucagon concentrations, demonstrating a fetal response to amino acid exposure.
As such, while the fetal pancreatic alpha and beta islet cells have fully developed and are capable of hormone synthesis during the remaining fetal maturation, the islet cells are relatively immature in their capacity to produce glucagon and insulin. This is thought to be a result of the relatively stable levels of fetal serum glucose concentrations achieved via maternal transfer of glucose through the placenta. On the other hand, the stable fetal serum glucose levels could be attributed to the absence of pancreatic signaling initiated by incretins during feeding. In addition, the fetal pancreatic islet cells produce insufficient cAMP, and rapidly degrade it via phosphodiesterase, which limits their secretion of glucagon and insulin.
During fetal development, the storage of glycogen is controlled by fetal glucocorticoids and placental lactogen. Fetal insulin is responsible for increasing glucose uptake and lipogenesis during the stages leading up to birth. Fetal cells contain a higher number of insulin receptors than adult cells, and fetal insulin receptors are not downregulated in cases of hyperinsulinemia. In comparison, fetal hepatic glucagon receptors are reduced relative to adult cells, and the glycemic effect of glucagon is blunted. This temporary physiological change aids the increased rate of fetal development during the final trimester. Poorly managed maternal diabetes mellitus is linked to fetal macrosomia, increased risk of miscarriage, and defects in fetal development. Maternal hyperglycemia is also linked to increased insulin levels and beta cell hyperplasia in the post-term infant. Children of diabetic mothers are at an increased risk for conditions such as polycythemia, renal vein thrombosis, hypocalcemia, respiratory distress syndrome, jaundice, cardiomyopathy, congenital heart disease, and improper organ development.
Gonads
The reproductive system begins development at four to five weeks of gestation with germ cell migration. The bipotential gonad results from the collection of the medioventral region of the urogenital ridge. At the five-week point, the developing gonads break away from the adrenal primordium. Gonadal differentiation begins 42 days following conception.
Male gonadal development
For males, the testes form by six weeks of gestation, and the Sertoli cells begin developing by the eighth week of gestation. SRY, the sex-determining locus, serves to differentiate the Sertoli cells. The Sertoli cells are the point of origin for anti-Müllerian hormone. Once synthesized, the anti-Müllerian hormone initiates the ipsilateral regression of the Müllerian tract and inhibits the development of female internal features. At 10 weeks of gestation, the Leydig cells begin to produce androgen hormones. The androgen hormone dihydrotestosterone is responsible for the development of the male external genitalia.
The testicles descend during prenatal development in a two-stage process that begins at eight weeks of gestation and continues through the middle of the third trimester. During the transabdominal stage (8 to 15 weeks of gestation), the gubernacular ligament contracts and begins to thicken. The craniosuspensory ligament begins to break down. This stage is regulated by the secretion of insulin-like 3 (INSL3), a relaxin-like factor produced by the testicles, and the INSL3 G-coupled receptor, LGR8. During the transinguinal phase (25 to 35 weeks of gestation), the testicles descend into the scrotum. This stage is regulated by androgens, the genitofemoral nerve, and calcitonin gene-related peptide. During the second and third trimester, testicular development concludes with the diminution of the fetal Leydig cells and the lengthening and coiling of the seminiferous cords.
Female gonadal development
For females, the ovaries become morphologically visible by the 8th week of gestation. The absence of testosterone results in the diminution of the Wolffian structures. The Müllerian structures remain and develop into the fallopian tubes, uterus, and the upper region of the vagina. The urogenital sinus develops into the urethra and lower region of the vagina, the genital tubercle develops into the clitoris, the urogenital folds develop into the labia minora, and the urogenital swellings develop into the labia majora. At 16 weeks of gestation, the ovaries produce FSH and LH/hCG receptors. At 20 weeks of gestation, the theca cell precursors are present and oogonia mitosis is occurring. At 25 weeks of gestation, the ovary is morphologically defined and folliculogenesis can begin.
Studies of gene expression show that a specific complement of genes, such as follistatin and multiple cyclin kinase inhibitors, are involved in ovarian development. An assortment of genes and proteins, such as WNT4, RSPO1, FOXL2, and various estrogen receptors, have been shown to prevent the development of testicles or the lineage of male-type cells.
Pituitary gland
The pituitary gland forms within the rostral neural plate. Rathke's pouch, a cavity of ectodermal cells of the oropharynx, forms between the fourth and fifth week of gestation, and upon full development it gives rise to the anterior pituitary gland. By seven weeks of gestation, the anterior pituitary vascular system begins to develop. During the first 12 weeks of gestation, the anterior pituitary undergoes cellular differentiation. At 20 weeks of gestation, the hypophyseal portal system has developed. Rathke's pouch grows towards the third ventricle and fuses with the diverticulum. This eliminates the lumen, and the structure becomes Rathke's cleft. The posterior pituitary lobe is formed from the diverticulum. Portions of pituitary tissue may remain in the nasopharyngeal midline; in rare cases this results in functioning ectopic hormone-secreting tumors in the nasopharynx.
The functional development of the anterior pituitary involves spatiotemporal regulation of transcription factors expressed in pituitary stem cells and dynamic gradients of local soluble factors. The coordination of the dorsal gradient of pituitary morphogenesis is dependent on neuroectodermal signals from infundibular bone morphogenetic protein 4 (BMP4). This protein is responsible for the development of the initial invagination of Rathke's pouch. Other proteins essential for pituitary cell proliferation are fibroblast growth factor 8 (FGF8), Wnt4, and Wnt5. Ventral developmental patterning and the expression of transcription factors are influenced by the gradients of BMP2 and sonic hedgehog protein (SHH). These factors are essential for coordinating early patterns of cell proliferation.
Six weeks into gestation, the corticotroph cells can be identified. By seven weeks of gestation, the anterior pituitary is capable of secreting ACTH. Within eight weeks of gestation, somatotroph cells begin to develop, with cytoplasmic expression of human growth hormone. Once a fetus reaches 12 weeks of development, the thyrotrophs begin expressing beta subunits for TSH, while gonadotrophs begin to express beta subunits for LH and FSH. Male fetuses predominantly produce LH-expressing gonadotrophs, while female fetuses produce an equal expression of LH- and FSH-expressing gonadotrophs. At 24 weeks of gestation, prolactin-expressing lactotrophs begin to emerge.
Function
Hormones
A hormone is any of a class of signaling molecules produced by cells in glands in multicellular organisms that are transported by the circulatory system to target distant organs to regulate physiology and behaviour. Hormones have diverse chemical structures, mainly of 3 classes: eicosanoids, steroids, and amino acid/protein derivatives (amines, peptides, and proteins). The glands that secrete hormones comprise the endocrine system. The term hormone is sometimes extended to include chemicals produced by cells that affect the same cell (autocrine or intracrine signalling) or nearby cells (paracrine signalling).
Hormones are used to communicate between organs and tissues for physiological regulation and behavioral activities, such as digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress, growth and development, movement, reproduction, and mood.
Hormones affect distant cells by binding to specific receptor proteins in the target cell resulting in a change in cell function. This may lead to cell type-specific responses that include rapid changes to the activity of existing proteins, or slower changes in the expression of target genes. Amino acid–based hormones (amines and peptide or protein hormones) are water-soluble and act on the surface of target cells via signal transduction pathways; steroid hormones, being lipid-soluble, move through the plasma membranes of target cells to act within their nuclei.
Cell signalling
The typical mode of cell signalling in the endocrine system is endocrine signaling, that is, using the circulatory system to reach distant target organs. However, there are also other modes, i.e., paracrine, autocrine, and neuroendocrine signaling. Purely neurocrine signaling between neurons, on the other hand, belongs completely to the nervous system.
Autocrine
Autocrine signaling is a form of signaling in which a cell secretes a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on the same cell, leading to changes in that cell.
Paracrine
Some endocrinologists and clinicians include the paracrine system as part of the endocrine system, but there is no consensus. Paracrines are slower acting, targeting cells in the same tissue or organ. An example is somatostatin, which is released by some pancreatic cells and targets other pancreatic cells.
Juxtacrine
Juxtacrine signaling is a type of intercellular communication that is transmitted via oligosaccharide, lipid, or protein components of a cell membrane, and may affect either the emitting cell or the immediately adjacent cells.
It occurs between adjacent cells that possess broad patches of closely opposed plasma membrane linked by transmembrane channels known as connexons. The gap between the cells is usually only 2 to 4 nm.
Clinical significance
Disease
Diseases of the endocrine system are common, including conditions such as diabetes mellitus, thyroid disease, and obesity.
Endocrine disease is characterized by misregulated hormone release (a productive pituitary adenoma), inappropriate response to signaling (hypothyroidism), lack of a gland (diabetes mellitus type 1, diminished erythropoiesis in chronic kidney failure), or structural enlargement in a critical site such as the thyroid (toxic multinodular goitre). Hypofunction of endocrine glands can occur as a result of loss of reserve, hyposecretion, agenesis, atrophy, or active destruction. Hyperfunction can occur as a result of hypersecretion, loss of suppression, hyperplastic or neoplastic change, or hyperstimulation.
Endocrinopathies are classified as primary, secondary, or tertiary. Primary endocrine disease originates in the hormone-producing gland itself. Secondary endocrine disease is indicative of a problem with the pituitary gland. Tertiary endocrine disease is associated with dysfunction of the hypothalamus and its releasing hormones.
Hormones have been implicated in signaling distant tissues to proliferate; for example, the estrogen receptor has been shown to be involved in certain breast cancers. Endocrine, paracrine, and autocrine signaling have all been implicated in proliferation, one of the required steps of oncogenesis.
Other common diseases that result from endocrine dysfunction include Addison's disease, Cushing's disease and Graves' disease. Cushing's disease and Addison's disease are pathologies involving the dysfunction of the adrenal gland. Dysfunction in the adrenal gland could be due to primary or secondary factors and can result in hypercortisolism or hypocortisolism. Cushing's disease is characterized by the hypersecretion of the adrenocorticotropic hormone (ACTH) due to a pituitary adenoma that ultimately causes endogenous hypercortisolism by stimulating the adrenal glands. Some clinical signs of Cushing's disease include obesity, moon face, and hirsutism. Addison's disease is an endocrine disease that results from hypocortisolism caused by adrenal gland insufficiency. Adrenal insufficiency is significant because it is correlated with decreased ability to maintain blood pressure and blood sugar, a defect that can prove to be fatal.
Graves' disease involves hyperactivity of the thyroid gland, which produces the T3 and T4 hormones. Its effects range from excess sweating, fatigue, heat intolerance, and high blood pressure to swelling of the eyes that causes redness, puffiness, and, in rare cases, reduced or double vision.
Other animals
A neuroendocrine system has been observed in all animals with a nervous system and all vertebrates have a hypothalamus–pituitary axis. All vertebrates have a thyroid, which in amphibians is also crucial for transformation of larvae into adult form. All vertebrates have adrenal gland tissue, with mammals unique in having it organized into layers. All vertebrates have some form of a renin–angiotensin axis, and all tetrapods have aldosterone as a primary mineralocorticoid.
| Biology and health sciences | Animal: General | null |
9320 | https://en.wikipedia.org/wiki/Ericales | Ericales | The Ericales are a large and diverse order of dicotyledons. Species in this order have considerable commercial importance including for tea, persimmon, blueberry, kiwifruit, Brazil nuts, argan, cranberry, sapote, and azalea. The order includes trees, bushes, lianas, and herbaceous plants. Together with ordinary autophytic plants, the Ericales include chlorophyll-deficient mycoheterotrophic plants (e.g., Sarcodes sanguinea) and carnivorous plants (e.g., genus Sarracenia).
Many species have five petals, often grown together. Fusion of the petals as a trait was traditionally used to place the order in the subclass Sympetalae.
Mycorrhizal associations are quite common among the order representatives, and three kinds of mycorrhiza are found exclusively among Ericales (namely, ericoid, arbutoid and monotropoid mycorrhiza). In addition, some families among the order are notable for their exceptional ability to accumulate aluminum.
Ericales are a cosmopolitan order. Areas of distribution of families vary widely: while some are restricted to the tropics, others exist mainly in Arctic or temperate regions. The entire order contains over 8,000 species, of which the Ericaceae account for 2,000–4,000 species (by various estimates).
According to molecular studies, the lineage that led to the Ericales diverged from other plants about 127 million years ago and diversified about 110 million years ago.
Economic importance
The most commercially used plant in the order is tea (Camellia sinensis) from the family Theaceae. The order also includes some edible fruits, including kiwifruit (esp. Actinidia deliciosa), persimmon (genus Diospyros), blueberry, huckleberry, cranberry, Brazil nut, and Mamey sapote. The order also includes shea (Vitellaria paradoxa), which is the major dietary lipid source for millions of sub-Saharan Africans. Many Ericales species are cultivated for their showy flowers: well-known examples are azalea, rhododendron, camellia, heather, polyanthus, cyclamen, phlox, and busy Lizzie.
Classification
These families are recognized in the APG III system as members of the Ericales:
Family Actinidiaceae (kiwifruit family)
Family Balsaminaceae (balsam family)
Family Clethraceae (clethra family)
Family Cyrillaceae (cyrilla family)
Family Diapensiaceae
Family Ebenaceae (ebony and persimmon family)
Family Ericaceae (heath, rhododendron, and blueberry family)
Family Fouquieriaceae (ocotillo family)
Family Lecythidaceae (Brazil nut family)
Family Marcgraviaceae
Family Mitrastemonaceae
Family Pentaphylacaceae
Family Polemoniaceae (phlox family)
Family Primulaceae (primrose and snowbell family)
Family Roridulaceae
Family Sapotaceae (sapodilla family)
Family Sarraceniaceae (American pitcher plant family)
Family Sladeniaceae
Family Styracaceae (silverbell family)
Family Symplocaceae (sapphireberry family)
Family Tetrameristaceae
Family Theaceae (tea and camellia family)
Likely phylogenetic relationships between the families of the Ericales (cladogram not reproduced here).
Previously included families
These families are not recognized in the APG III system but have been in common use in the recent past:
Family Myrsinaceae (cyclamen and scarlet pimpernel family) → Primulaceae
Family Pellicieraceae → Tetrameristaceae
Family Maesaceae → Primulaceae
Family Ternstroemiaceae → Pentaphylacaceae
Family Theophrastaceae → Primulaceae
These make up an early diverging group of asterids. Under the Cronquist system, the Ericales included a smaller group of plants, which were placed among the Dilleniidae:
Family Ericaceae
Family Cyrillaceae
Family Clethraceae
Family Grubbiaceae
Family Empetraceae
Family Epacridaceae
Family Pyrolaceae
Family Monotropaceae
| Biology and health sciences | Ericales | Plants |
9417 | https://en.wikipedia.org/wiki/Euclidean%20geometry | Euclidean geometry | Euclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry, Elements. Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates) and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated earlier, Euclid was the first to organize these propositions into a logical system in which each result is proved from axioms and previously proved theorems.
The Elements begins with plane geometry, still taught in secondary school (high school) as the first axiomatic system and the first examples of mathematical proofs. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language.
For more than two thousand years, the adjective "Euclidean" was unnecessary because Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that theorems proved from them were deemed absolutely true, and thus no other sorts of geometry were possible. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances (relative to the strength of the gravitational field).
Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines, to propositions about those objects. This is in contrast to analytic geometry, introduced almost 2,000 years later by René Descartes, which uses coordinates to express geometric properties by means of algebraic formulas.
The Elements
The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost.
There are 13 books in the Elements:
Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, for example, "In any triangle, two angles taken together in any manner are less than two right angles." (Book I proposition 17) and the Pythagorean theorem "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." (Book I, proposition 47)
Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of surface regions. Notions such as prime numbers and rational and irrational numbers are introduced. It is proved that there are infinitely many prime numbers.
Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. The Platonic solids are constructed.
Axioms
Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be obviously true in the physical world, so that all the theorems would be equally true. However, Euclid's reasoning from assumptions to conclusions remains valid independently from the physical reality.
Near the beginning of the first book of the Elements, Euclid gives five postulates (axioms) for plane geometry, stated in terms of constructions (as translated by Thomas Heath):
Let the following be postulated:
To draw a straight line from any point to any point.
To produce (extend) a finite straight line continuously in a straight line.
To describe a circle with any centre and distance (radius).
That all right angles are equal to one another.
[The parallel postulate]: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.
Although Euclid explicitly only asserts the existence of the constructed objects, in his reasoning he also implicitly assumes them to be unique.
The Elements also include the following five "common notions":
Things that are equal to the same thing are also equal to one another (the transitive property of a Euclidean relation).
If equals are added to equals, then the wholes are equal (Addition property of equality).
If equals are subtracted from equals, then the differences are equal (subtraction property of equality).
Things that coincide with one another are equal to one another (reflexive property).
The whole is greater than the part.
Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more extensive and complete sets of axioms.
Parallel postulate
To the ancients, the parallel postulate seemed less obvious than the others. They aspired to create a system of absolutely certain propositions, and to them, it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible since one can construct consistent systems of geometry (obeying the other axioms) in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: his first 28 propositions are those that can be proved without it.
Many alternative axioms can be formulated which are logically equivalent to the parallel postulate (in the context of the other axioms). For example, Playfair's axiom states:
In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.
The "at most" clause is all that is needed since it can be proved from the remaining axioms that at least one parallel line exists.
Methods of proof
Euclidean geometry is constructive. Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are models of the objects defined within the formal system, rather than instances of those objects. For example, a Euclidean straight line has no width, but any real drawn line will have. Though nearly all modern mathematicians consider nonconstructive proofs just as sound as constructive ones, they are often considered less elegant, intuitive, or practically useful. Euclid's constructive proofs often supplanted fallacious nonconstructive ones, e.g. some Pythagorean proofs that assumed all numbers are rational, usually requiring a statement such as "Find the greatest common measure of ..."
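The procedure behind "Find the greatest common measure of ..." (Elements, Book VII, Propositions 1–2) survives today as the Euclidean algorithm. A minimal rendering in Python:

```python
def greatest_common_measure(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the larger quantity by the
    remainder until one quantity measures (divides) the other exactly."""
    while b:
        a, b = b, a % b
    return a

print(greatest_common_measure(1071, 462))  # -> 21
```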
Euclid often used proof by contradiction.
Notation and terminology
Naming of points and figures
Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C.
Complementary and supplementary angles
Angles whose sum is a right angle are called complementary. Complementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the right angle. The number of rays in between the two original rays is infinite.
Angles whose sum is a straight angle are supplementary. Supplementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the straight angle (180 degree angle). The number of rays in between the two original rays is infinite.
Modern versions of Euclid's notation
In modern terminology, angles would normally be measured in degrees or radians.
Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length", although he occasionally referred to "infinite lines". A "line" for Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary.
Some important or well known results
Pons asinorum
The pons asinorum (bridge of asses) states that in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another. Its name may be attributed to its frequent role as the first real test in the Elements of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross.
Congruence of triangles
Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent.
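The SSS criterion lends itself to a simple numerical check: two triangles given by coordinates are congruent if their sorted side lengths agree. A minimal sketch in Python (the tolerance is an arbitrary choice for floating-point comparison):

```python
import math

def side_lengths(tri):
    """Sorted side lengths of a triangle given as three (x, y) points."""
    a, b, c = tri
    return sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])

def congruent_sss(t1, t2, tol=1e-9):
    """SSS test: triangles are congruent when all three side lengths match."""
    return all(math.isclose(x, y, abs_tol=tol)
               for x, y in zip(side_lengths(t1), side_lengths(t2)))

t1 = [(0, 0), (3, 0), (0, 4)]
t2 = [(1, 1), (1, 4), (5, 1)]   # the same 3-4-5 triangle, relocated
print(congruent_sss(t1, t2))    # True
```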
Triangle angle sum
The sum of the angles of a triangle is equal to a straight angle (180 degrees). This causes an equilateral triangle to have three interior angles of 60 degrees. Also, it causes every triangle to have at least two acute angles and up to one obtuse or right angle.
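This can be verified numerically with the law of cosines, a trigonometric tool rather than Euclid's own method. The sketch below computes all three angles independently and confirms that they total 180 degrees:

```python
import math

def angles_from_sides(a: float, b: float, c: float):
    """Interior angles (in degrees) of a triangle with side lengths
    a, b, c, each computed independently via the law of cosines."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    C = math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))
    return A, B, C

print(sum(angles_from_sides(3, 4, 5)))  # 180.0 (up to rounding error)
print(sum(angles_from_sides(2, 6, 7)))  # 180.0 for any valid triangle
```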
Pythagorean theorem
The celebrated Pythagorean theorem (book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
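As a quick numeric illustration (not Euclid's proof, which compares the areas of the squares directly), the relation can be checked for a 5-12-13 right triangle:

```python
import math

legs = (5.0, 12.0)
hypotenuse = math.hypot(*legs)   # sqrt(5**2 + 12**2)
print(hypotenuse)                # 13.0
# The square on the hypotenuse equals the sum of the squares on the legs:
print(math.isclose(hypotenuse**2, legs[0]**2 + legs[1]**2))  # True
```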
Thales' theorem
Thales' theorem, named after Thales of Miletus, states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid Book I, Prop. 32, after the manner of Euclid Book III, Prop. 31.
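The theorem is easy to verify numerically: for any point B on a circle with diameter AC, the vectors from B to A and from B to C are perpendicular, so their dot product is zero. A short check on the unit circle (the sampled angles are arbitrary):

```python
import math

# Diameter from A = (-1, 0) to C = (1, 0) on the unit circle.
A, C = (-1.0, 0.0), (1.0, 0.0)
for t in (0.3, 1.1, 2.0, 2.8):            # arbitrary sample angles
    B = (math.cos(t), math.sin(t))        # point B on the circle
    BA = (A[0] - B[0], A[1] - B[1])
    BC = (C[0] - B[0], C[1] - B[1])
    dot = BA[0] * BC[0] + BA[1] * BC[1]   # zero for a right angle
    print(f"t={t}: dot product = {dot:.1e}")
```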
Scaling of area and volume
In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, A ∝ L², and the volume of a solid to the cube, V ∝ L³. Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. For instance, it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder.
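The scaling relations can be demonstrated with a trivial computation; the box dimensions and scale factor below are arbitrary:

```python
# Multiplying every linear dimension of a figure by k multiplies its
# area by k**2 and its volume by k**3.
k = 2.5
w, h, d = 3.0, 4.0, 5.0        # width, height, depth of a box

area, volume = w * h, w * h * d
scaled_area = (k * w) * (k * h)
scaled_volume = (k * w) * (k * h) * (k * d)

print(scaled_area / area)      # k**2 = 6.25
print(scaled_volume / volume)  # k**3 = 15.625
```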
System of measurement and arithmetic
Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, for example, a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain nonzero length as the unit, and other distances are expressed in relation to it. Addition of distances is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction.
Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, for example in the proof of book IX, proposition 20.
Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal respectively, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2x6 rectangle and a 3x4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are equal and corresponding sides are in proportion to each other.
In engineering
Design and Analysis
Stress Analysis: Euclidean geometry is pivotal in determining stress distribution in mechanical components, which is essential for ensuring structural integrity and durability.
Gear Design: the design of gears, a crucial element in many mechanical systems, relies heavily on Euclidean geometry to ensure proper tooth shape and engagement for efficient power transmission.
Heat Exchanger Design: in thermal engineering, Euclidean geometry is used to design heat exchangers, where the geometric configuration greatly influences thermal efficiency. See shell-and-tube heat exchangers and plate heat exchangers for more details.
Lens Design: in optical engineering, Euclidean geometry is critical in the design of lenses, where precise geometric shapes determine the focusing properties. Geometric optics analyzes the focusing of light by lenses and mirrors.
Dynamics
Vibration Analysis: Euclidean geometry is essential in analyzing and understanding the vibrations in mechanical systems, aiding in the design of systems that can withstand or utilize these vibrations effectively.
Wing Design: the application of Euclidean geometry in aerodynamics is evident in the design of aircraft wings, airfoils, and hydrofoils, where geometric shape directly impacts lift and drag characteristics.
Satellite Orbits: Euclidean geometry helps in calculating and predicting the orbits of satellites, essential for successful space missions and satellite operations. See also astrodynamics, celestial mechanics, and elliptic orbits.
CAD Systems
3D Modeling: In CAD (computer-aided design) systems, Euclidean geometry is fundamental for creating accurate 3D models of mechanical parts. These models are crucial for visualizing and testing designs before manufacturing.
Design and Manufacturing: Much of CAM (computer-aided manufacturing) relies on Euclidean geometry. The design geometry in CAD/CAM typically consists of shapes bounded by planes, cylinders, cones, tori, and other similar Euclidean forms. Today, CAD/CAM is essential in the design of a wide range of products, from cars and airplanes to ships and smartphones.
Evolution of Drafting Practices: Historically, advanced Euclidean geometry, including theorems like Pascal's theorem and Brianchon's theorem, was integral to drafting practices. However, with the advent of modern CAD systems, such in-depth knowledge of these theorems is less necessary in contemporary design and manufacturing processes.
Circuit Design
PCB Layouts: printed circuit board (PCB) design utilizes Euclidean geometry for the efficient placement and routing of components, which is critical for minimizing signal interference, optimizing circuit performance, and conserving board space.
Electromagnetic and Fluid Flow Fields
Antenna Design: Euclidean geometry underpins antenna design, where the spatial arrangement and dimensions of elements directly affect antenna and array performance in transmitting and receiving electromagnetic waves.
Field Theory: In the study of inviscid flow fields and electromagnetic fields, Euclidean geometry aids in visualizing and solving potential flow problems, which is essential for understanding fluid velocity fields and electromagnetic field interactions in three-dimensional space. Such fields are characterized as irrotational solenoidal fields or conservative vector fields.
Controls
Control System Analysis: The application of Euclidean geometry in control theory helps in the analysis and design of control systems, particularly in understanding and optimizing system stability and response.
Calculation Tools: Euclidean geometry is integral to the use of Jacobian matrices for transformations and control systems in both mechanical and electrical engineering, providing insights into system behavior and properties. The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian also appears in the study of random matrices and in statistical moments and diagnostics.
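As a concrete illustration of the Jacobian's role, the following minimal Python sketch (not part of the original article; the finite-difference scheme and the polar-to-Cartesian example are illustrative assumptions) approximates a Jacobian numerically:

    import math

    def numerical_jacobian(f, x, h=1e-6):
        """Approximate the Jacobian of f: R^n -> R^m at x by finite differences."""
        fx = f(x)
        cols = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h               # perturb the i-th input coordinate
            fxp = f(xp)
            cols.append([(fxp[j] - fx[j]) / h for j in range(len(fx))])
        # Transpose so that result[j][i] = d f_j / d x_i.
        return [list(row) for row in zip(*cols)]

    # Illustrative map: polar coordinates (r, theta) to Cartesian (x, y).
    polar_to_cartesian = lambda v: [v[0] * math.cos(v[1]), v[0] * math.sin(v[1])]
    for row in numerical_jacobian(polar_to_cartesian, [2.0, math.pi / 4]):
        print(row)  # analytically [[cos t, -r sin t], [sin t, r cos t]]

Linearizing a transformation this way is exactly how the Jacobian enters control analysis and non-linear least squares.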
Other general applications
Because of Euclidean geometry's fundamental status in mathematics, it is impractical to give more than a representative sampling of applications here.
As suggested by the etymology of the word, one of the earliest reasons for interest in and also one of the most common current uses of geometry is surveying. In addition it has been used in classical mechanics and the cognitive and computational approaches to visual perception of objects. Certain practical results from Euclidean geometry (such as the right-angle property of the 3-4-5 triangle) were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, both of which can be measured directly by a surveyor. Historically, distances were often measured by chains, such as Gunter's chain, and angles using graduated circles and, later, the theodolite.
An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction.
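For a sense of the quantities involved, the short Python sketch below (my illustration, using the standard formula for the volume of an n-ball) computes the density of the simple cubic packing of unit-diameter spheres in n dimensions, a baseline that efficient packings must beat:

    import math

    def ball_volume(n, r):
        """Volume of the n-dimensional Euclidean ball of radius r."""
        return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

    # Density of the simple cubic packing Z^n: one sphere of radius 1/2
    # per unit cube, i.e. the volume of an n-ball of radius 1/2.
    for n in (1, 2, 3, 8):
        print(n, round(ball_volume(n, 0.5), 4))
    # n=2 gives pi/4 ~ 0.7854; n=3 gives pi/6 ~ 0.5236, which the
    # face-centred cubic packing (~0.7405) famously improves on.
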
Geometry is used extensively in architecture.
Geometry can be used to design origami. Some classical construction problems of geometry are impossible using compass and straightedge, but can be solved using origami.
Later history
Archimedes and Apollonius
Archimedes, a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians. Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He derived formulas for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers.
Apollonius of Perga is mainly known for his investigation of conic sections.
17th century: Descartes
René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry which focused on turning geometry into algebra.
In this approach, a point on a plane is represented by its Cartesian (x, y) coordinates, a line is represented by its equation, and so on.
In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems.
The equation
|PQ| = √((px − qx)² + (py − qy)²)
defining the distance between two points P = (px, py) and Q = (qx, qy) is then known as the Euclidean metric, and other metrics define non-Euclidean geometries.
In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., y = 2x + 1 (a line), or x² + y² = 7 (a circle).
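A minimal Python sketch (an illustration added here, not part of the original text) shows both ideas: the Euclidean metric as a computation, and a first-order equation (a line) intersected with a second-order one (a circle):

    import math

    def euclidean_distance(p, q):
        """Euclidean metric between two points of the plane."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Intersect the line y = 2x + 1 (first-order) with the circle
    # x^2 + y^2 = 7 (second-order): substituting gives 5x^2 + 4x - 6 = 0.
    a, b, c = 5.0, 4.0, -6.0
    disc = math.sqrt(b * b - 4 * a * c)
    for sign in (1.0, -1.0):
        x = (-b + sign * disc) / (2 * a)
        point = (x, 2 * x + 1)
        # Both intersection points lie at distance sqrt(7) from the origin.
        print(point, euclidean_distance(point, (0.0, 0.0)))  # ~2.6458
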
Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity. The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced.
18th century
Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763, at least 28 different proofs had been published, but all were found incorrect.
Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involves equations whose order is an integral power of two, while doubling a cube requires the solution of a third-order equation.
Euler discussed a generalization of Euclidean geometry called affine geometry, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (so right triangles become meaningless) and of equality of length of line segments in general (so circles become meaningless), while retaining the notions of parallelism as an equivalence relation between lines and of equality of length of parallel line segments (so line segments continue to have a midpoint).
19th century
In the early 19th century, Carnot and Möbius systematically developed the use of signed angles and line segments as a way of simplifying and unifying results.
Higher dimensions
In the 1840s William Rowan Hamilton developed the quaternions, and John T. Graves and Arthur Cayley the octonions. These are normed algebras which extend the complex numbers. Later it was understood that the quaternions are also a Euclidean geometric system with four real Cartesian coordinates. Cayley used quaternions to study rotations in 4-dimensional Euclidean space.
At mid-century Ludwig Schläfli developed the general concept of Euclidean space, extending Euclidean geometry to higher dimensions. He defined polyschemes, later called polytopes, which are the higher-dimensional analogues of polygons and polyhedra. He developed their theory and discovered all the regular polytopes, i.e. the n-dimensional analogues of regular polygons and Platonic solids. He found there are six regular convex polytopes in dimension four, and three in all higher dimensions.
Schläfli performed this work in relative obscurity and it was published in full only posthumously in 1901. It had little influence until it was rediscovered and fully documented in 1948 by H.S.M. Coxeter.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra, unifying Hamilton's quaternions with Hermann Grassmann's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions. The Clifford torus on the surface of the 3-sphere is the simplest and most symmetric flat embedding of the Cartesian product of two circles (in the same sense that the surface of a cylinder is "flat").
Non-Euclidean geometry
The century's most influential development in geometry occurred when, around 1830, János Bolyai and Nikolai Ivanovich Lobachevsky separately published work on non-Euclidean geometry, in which the parallel postulate is not valid. Since non-Euclidean geometry is provably relatively consistent with Euclidean geometry, the parallel postulate cannot be proved from the other postulates.
In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the Elements is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.
20th century and relativity
Einstein's theory of special relativity involves a four-dimensional space-time, the Minkowski space, which is non-Euclidean. This shows that non-Euclidean geometries, which had been introduced a few years earlier for showing that the parallel postulate cannot be proved, are also useful for describing the physical world.
However, the three-dimensional "space part" of the Minkowski space remains the space of Euclidean geometry. This is not the case with general relativity, for which the geometry of the space part of space-time is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the Sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting these deviations in rays of light from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system.
As a description of the structure of space
Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called Euclidean motions, which include translations, reflections and rotations of figures. Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries; postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature).
As discussed above, Albert Einstein's theory of relativity significantly modifies this view.
The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1–4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry).
Treatment of infinity
Infinite objects
Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, existence of a circle with any radius, as implying that space is infinite.
The notion of infinitesimal quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, with paradoxes such as Zeno's paradox occurring that had not been resolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals.
Later ancient commentators, such as Proclus (410–485 CE), treated many questions about infinity as issues demanding proof and, e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it.
At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work.
Infinite processes
Ancient geometers may have considered the parallel postulate – that two parallel lines do not ever intersect – less certain than the others because it makes a statement about infinitely remote regions of space, and so cannot be physically verified.
The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes.
Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 without commenting on the possibility of letting the number of terms become infinite.
Logical basis
Classical logic
Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true.
Modern standards of rigor
Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference.
That is, mathematics is context-independent knowledge within a hierarchical framework. As Bertrand Russell put it, mathematics may be defined as "the subject in which we never know what we are talking about, nor whether what we are saying is true".
Such foundational approaches range between foundationalism and formalism.
Axiomatic formulations
Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time. It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear as it was discovered that the parallel postulate was not necessarily valid and its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean.
Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate.
Birkhoff's axioms: Birkhoff proposed four postulates for Euclidean geometry that can be confirmed experimentally with scale and protractor. This system relies heavily on the properties of the real numbers. The notions of angle and distance become primitive concepts.
Tarski's axioms: Alfred Tarski (1902–1983) and his students defined elementary Euclidean geometry as the geometry that can be expressed in first-order logic and does not depend on set theory for its logical basis, in contrast to Hilbert's axioms, which involve point sets. Tarski proved that his axiomatic formulation of elementary Euclidean geometry is consistent and complete in a certain sense: there is an algorithm that can show every proposition to be either true or false. (This does not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) This is equivalent to the decidability of real closed fields, of which elementary Euclidean geometry is a model.
Eocene
The Eocene is a geological epoch that lasted from about 56 to 33.9 million years ago (Ma). It is the second epoch of the Paleogene Period in the modern Cenozoic Era. The name Eocene comes from the Ancient Greek ἠώς (Ēṓs, "Dawn") and καινός (kainós, "new") and refers to the "dawn" of modern ('new') fauna that appeared during the epoch.
The Eocene spans the time from the end of the Paleocene Epoch to the beginning of the Oligocene Epoch. The start of the Eocene is marked by a brief period in which the concentration of the carbon isotope ¹³C in the atmosphere was exceptionally low in comparison with the more common isotope ¹²C. The average temperature of Earth at the beginning of the Eocene was about 27 degrees Celsius. The end is set at a major extinction event called the Grande Coupure (the "Great Break" in continuity) or the Eocene–Oligocene extinction event, which may be related to the impact of one or more large bolides in Siberia and in what is now Chesapeake Bay. As with other geologic periods, the strata that define the start and end of the epoch are well identified, though their exact dates are slightly uncertain.
Etymology
The term "Eocene" is derived from Ancient Greek (Ēṓs) meaning "Dawn", and kainos meaning "new" or "recent", as the epoch saw the dawn of recent, or modern, life.
Scottish geologist Charles Lyell (ignoring the Quaternary) divided the Tertiary Epoch into the Eocene, Miocene, Pliocene, and New Pliocene (Holocene) Periods in 1833. British geologist John Phillips proposed the Cenozoic in 1840 in place of the Tertiary, and Austrian paleontologist Moritz Hörnes introduced the Paleogene for the Eocene and Neogene for the Miocene and Pliocene in 1853. After decades of inconsistent usage, the newly formed International Commission on Stratigraphy (ICS), in 1969, standardized stratigraphy based on the prevailing opinions in Europe: the Cenozoic Era subdivided into the Tertiary and Quaternary sub-eras, and the Tertiary subdivided into the Paleogene and Neogene periods. In 1978, the Paleogene was officially defined as the Paleocene, Eocene, and Oligocene epochs; and the Neogene as the Miocene and Pliocene epochs. In 1989, Tertiary and Quaternary were removed from the time scale due to the arbitrary nature of their boundary, but Quaternary was reinstated in 2009.
Geology
Boundaries
The Eocene is a dynamic epoch that represents global climatic transitions between two climatic extremes, transitioning from hothouse to icehouse conditions. The beginning of the Eocene is marked by the Paleocene–Eocene Thermal Maximum, a short period of intense warming and ocean acidification brought about by the release of carbon en masse into the atmosphere and ocean systems, which led to a mass extinction of 30–50% of benthic foraminifera (single-celled species which are used as bioindicators of the health of a marine ecosystem), one of the largest in the Cenozoic. This event happened around 55.8 Ma, and was one of the most significant periods of global change during the Cenozoic.
The middle Eocene was characterized by the shift towards a cooler climate at the end of the early Eocene climatic optimum (EECO), around 47.8 Ma, which was briefly interrupted by another warming event called the middle Eocene climatic optimum (MECO). Lasting for about 400,000 years, the MECO was responsible for a globally uniform 4 to 6 °C warming of both the surface and deep oceans, as inferred from foraminiferal stable oxygen isotope records. The resumption of a long-term gradual cooling trend resulted in a glacial maximum at the late Eocene/early Oligocene boundary.
The end of the Eocene was also marked by the Eocene–Oligocene extinction event, also known as the Grande Coupure.
Stratigraphy
The Eocene is conventionally divided into early (56–47.8 Ma), middle (47.8–38 Ma), and late (38–33.9 Ma) subdivisions. The corresponding rocks are referred to as lower, middle, and upper Eocene. The Ypresian Stage constitutes the lower, the Priabonian Stage the upper; and the Lutetian and Bartonian stages are united as the middle Eocene.
The Western North American floras of the Eocene were divided into four floral "stages" by Jack Wolfe (1968) based on work with the Puget Group fossils of King County, Washington. The four stages, Franklinian, Fultonian, Ravenian, and Kummerian, covered the Early Eocene through early Oligocene, and three of the four were given informal early/late substages. Wolfe tentatively deemed the Franklinian as Early Eocene, the Fultonian as Middle Eocene, the Ravenian as Late Eocene, and the Kummerian as Early Oligocene. The beginning of the Kummerian was refined by Gregory Retallack et al. (2004) as 40 Ma, with a refined end at the Eocene–Oligocene boundary, where the younger Angoonian floral stage starts.
Palaeogeography and tectonics
During the Eocene, the continents continued to drift toward their present positions. At the beginning of the period, Australia and Antarctica remained connected, and warm equatorial currents may have mixed with colder Antarctic waters, distributing the heat around the planet and keeping global temperatures high. When Australia split from the southern continent around 45 Ma, the warm equatorial currents were routed away from Antarctica. An isolated cold water channel developed between the two continents. However, modeling results call into question the thermal isolation model for late Eocene cooling, and decreasing carbon dioxide levels in the atmosphere may have been more important. Once the Antarctic region began to cool down, the ocean surrounding Antarctica began to freeze, sending cold water and icefloes north and reinforcing the cooling.
The northern supercontinent of Laurasia began to fragment, as Europe, Greenland and North America drifted apart.
In western North America, the Laramide Orogeny came to an end in the Eocene, and compression was replaced with crustal extension that ultimately gave rise to the Basin and Range Province. The Kishenehn Basin, around 1.5 km in elevation during the Lutetian, was uplifted to an altitude of 2.5 km by the Priabonian. Huge lakes formed in the high flat basins among uplifts, resulting in the deposition of the Green River Formation lagerstätte.
At about 35 Ma, an asteroid impact on the eastern coast of North America formed the Chesapeake Bay impact crater.
The Tethys Ocean finally closed with the collision of Africa and Eurasia, while the uplift of the Alps isolated its final remnant, the Mediterranean, and created another shallow sea with island archipelagos to the north. Planktonic foraminifera in the northwestern Peri-Tethys are very similar to those of the Tethys in the middle Lutetian but become completely disparate in the Bartonian, indicating biogeographic separation. Though the North Atlantic was opening, a land connection appears to have remained between North America and Europe since the faunas of the two regions are very similar.
Eurasia was separated into three different landmasses around 50 Ma: Western Europe, Balkanatolia, and Asia. About 40 Ma, Balkanatolia and Asia were connected, while Europe was connected around 34 Ma. The Fushun Basin contained large, suboxic lakes known as the paleo-Jijuntun Lakes.
India collided with Asia, folding to initiate formation of the Himalayas. The incipient subcontinent collided with the Kohistan–Ladakh Arc around 50.2 Ma and with Karakoram around 40.4 Ma, with the final collision between Asia and India occurring ~40 Ma.
Climate
The Eocene Epoch contained a wide variety of climate conditions, including the warmest climate in the Cenozoic Era and arguably the warmest interval since the Permian–Triassic mass extinction and Early Triassic, and ended in an icehouse climate. The evolution of the Eocene climate began with warming after the end of the Paleocene–Eocene Thermal Maximum (PETM) at 56 Ma to a maximum during the Eocene Optimum at around 49 Ma. During this period of time, little to no ice was present on Earth and there was a smaller difference in temperature from the equator to the poles. Because of this, the maximum sea level was 150 meters higher than current levels. Following the maximum was a descent into an icehouse climate from the Eocene Optimum to the Eocene–Oligocene transition at 34 Ma. During this decrease, ice began to reappear at the poles, and the Eocene–Oligocene transition is the period of time when the Antarctic ice sheet began to rapidly expand.
Early Eocene
Greenhouse gases, in particular carbon dioxide and methane, played a significant role during the Eocene in controlling the surface temperature. The end of the PETM was met with very large sequestration of carbon dioxide into the forms of methane clathrate, coal, and crude oil at the bottom of the Arctic Ocean, that reduced the atmospheric carbon dioxide. This event was similar in magnitude to the massive release of greenhouse gasses at the beginning of the PETM, and it is hypothesized that the sequestration was mainly due to organic carbon burial and weathering of silicates. For the early Eocene there is much discussion on how much carbon dioxide was in the atmosphere. This is due to numerous proxies representing different atmospheric carbon dioxide content. For example, diverse geochemical and paleontological proxies indicate that at the maximum of global warmth the atmospheric carbon dioxide values were at 700–900 ppm, while model simulations suggest a concentration of 1,680 ppm fits best with deep sea, sea surface, and near-surface air temperatures of the time. Other proxies such as pedogenic (soil building) carbonate and marine boron isotopes indicate large changes of carbon dioxide of over 2,000 ppm over periods of time of less than 1 million years. This large influx of carbon dioxide could be attributed to volcanic out-gassing due to North Atlantic rifting or oxidation of methane stored in large reservoirs deposited from the PETM event in the sea floor or wetland environments. For contrast, today the carbon dioxide levels are at 400 ppm or 0.04%.
During the early Eocene, methane was another greenhouse gas that had a drastic effect on the climate. Methane has 30 times more of a warming effect than carbon dioxide on a 100-year scale (i.e., methane has a global warming potential of 29.8±11). Most of the methane released to the atmosphere during this period of time would have been from wetlands, swamps, and forests. The atmospheric methane concentration today is 0.000179% or 1.79 ppmv. As a result of the warmer climate and the sea level rise associated with the early Eocene, more wetlands, more forests, and more coal deposits would have been available for methane release. If we compare the early Eocene production of methane to current levels of atmospheric methane, the early Eocene would have produced triple the amount of methane. The warm temperatures during the early Eocene could have increased methane production rates, and methane that is released into the atmosphere would in turn warm the troposphere, cool the stratosphere, and produce water vapor and carbon dioxide through oxidation. Biogenic production of methane produces carbon dioxide and water vapor along with the methane, as well as yielding infrared radiation. The breakdown of methane in an atmosphere containing oxygen produces carbon monoxide, water vapor and infrared radiation. The carbon monoxide is not stable, so it eventually becomes carbon dioxide and in doing so releases yet more infrared radiation. Water vapor traps more infrared than does carbon dioxide. At about the beginning of the Eocene Epoch (55.8–33.9 Ma) the amount of oxygen in the Earth's atmosphere more or less doubled.
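As a back-of-the-envelope illustration of the global warming potential figure quoted above (this sketch and its helper name are my own, not the article's), converting a methane quantity to a carbon dioxide equivalent is a single multiplication:

    # Illustrative GWP arithmetic, using the central value quoted above:
    # over a 100-year horizon, one tonne of methane warms roughly as much
    # as ~29.8 tonnes of carbon dioxide.
    GWP_CH4_100YR = 29.8  # quoted as 29.8 +/- 11 in the text

    def co2_equivalent(methane_tonnes, gwp=GWP_CH4_100YR):
        return methane_tonnes * gwp

    print(co2_equivalent(1.0))    # ~29.8 tonnes CO2-equivalent
    print(co2_equivalent(100.0))  # ~2980 tonnes CO2-equivalent
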
During the warming in the early Eocene between 55 and 52 Ma, there were a series of short-term changes of carbon isotope composition in the ocean. These isotope changes occurred due to the release of carbon from the ocean into the atmosphere that led to a temperature increase at the surface of the ocean. Recent analysis of and research into these hyperthermals in the early Eocene has led to hypotheses that the hyperthermals are based on orbital parameters, in particular eccentricity and obliquity. Analysis of the hyperthermals in the early Eocene, notably the Palaeocene–Eocene Thermal Maximum (PETM), the Eocene Thermal Maximum 2 (ETM2), and the Eocene Thermal Maximum 3 (ETM3), found that orbital control may have had a role in triggering the ETM2 and ETM3. An enhancement of the biological pump proved effective at sequestering excess carbon during the recovery phases of these hyperthermals. These hyperthermals led to increased perturbations in planktonic and benthic foraminifera, with a higher rate of fluvial sedimentation as a consequence of the warmer temperatures. Unlike the PETM, the lesser hyperthermals of the Early Eocene had negligible consequences for terrestrial mammals. These Early Eocene hyperthermals produced a sustained period of extremely hot climate known as the Early Eocene Climatic Optimum (EECO). During the early and middle EECO, the superabundance of the euryhaline dinocyst Homotryblium in New Zealand indicates elevated ocean salinity in the region.
Equable climate problem
One of the unique features of the Eocene's climate, as mentioned before, was the equable and homogeneous climate that existed in the early parts of the Eocene. A multitude of proxies support the presence of a warmer equable climate during this period of time. A few of these proxies include the presence of fossils native to warm climates, such as crocodiles, located in the higher latitudes, the presence in the high latitudes of frost-intolerant flora such as palm trees which cannot survive during sustained freezes, and fossils of snakes found in the tropics that would require much higher average temperatures to sustain them. TEX86 BAYSPAR measurements indicate extremely high sea surface temperatures at low latitudes, although clumped isotope analyses point to a cooler maximum low-latitude sea surface temperature during the EECO. Relative to present-day values, bottom water temperatures were higher according to isotope proxies. With these bottom water temperatures, temperatures in areas where deep water forms near the poles are unable to be much cooler than the bottom water temperatures.
An issue arises, however, when trying to model the Eocene and reproduce the results that are found with the proxy data. Using all different ranges of greenhouse gasses that occurred during the early Eocene, models were unable to produce the warming that was found at the poles and the reduced seasonality that occurs with winters at the poles being substantially warmer. The models, while accurately predicting the tropics, tend to produce significantly cooler temperatures at the poles than the proxies indicate. This error has been classified as the "equable climate problem". To solve this problem, the solution would involve finding a process to warm the poles without warming the tropics. Some hypotheses and tests which attempt to find the process are listed below.
Large lakes
Due to the nature of water as opposed to land, less temperature variability would be present if a large body of water is also present. In an attempt to mitigate the cooling polar temperatures, large lakes were proposed as moderators of seasonal climate changes. To replicate this case, a lake was inserted into North America and a climate model was run using varying carbon dioxide levels. The model runs concluded that while the lake reduced the seasonality of the region more than an increase in carbon dioxide alone, the addition of a large lake was unable to reduce the seasonality to the levels shown by the floral and faunal data.
Ocean heat transport
The transport of heat from the tropics to the poles, much like how ocean heat transport functions in modern times, was considered a possibility for the increased temperature and reduced seasonality for the poles. With the increased sea surface temperatures and the increased temperature of the deep ocean water during the early Eocene, one common hypothesis was that due to these increases there would be a greater transport of heat from the tropics to the poles. Simulating these differences, the models produced lower heat transport due to the lower temperature gradients and were unsuccessful in producing an equable climate from only ocean heat transport.
Orbital parameters
While typically seen as a control on ice growth and seasonality, the orbital parameters were theorized as a possible control on continental temperatures and seasonality. Simulating the Eocene by using an ice-free planet, eccentricity, obliquity, and precession were modified in different model runs to determine all the possible different scenarios that could occur and their effects on temperature. One particular case led to warmer winters and cooler summers, by up to 30%, in the North American continent, and it reduced the seasonal variation of temperature by up to 75%. While orbital parameters did not produce the warming at the poles, the parameters did show a great effect on seasonality and needed to be considered.
Polar stratospheric clouds
Another method considered for producing the warm polar temperatures were polar stratospheric clouds. Polar stratospheric clouds are clouds that occur in the lower stratosphere at very low temperatures. Polar stratospheric clouds have a great impact on radiative forcing. Due to their minimal albedo properties and their optical thickness, polar stratospheric clouds act similar to a greenhouse gas and trap outgoing longwave radiation. Different types of polar stratospheric clouds occur in the atmosphere: polar stratospheric clouds that are created due to interactions with nitric or sulfuric acid and water (Type I) or polar stratospheric clouds that are created with only water ice (Type II).
Methane is an important factor in the creation of the primary Type II polar stratospheric clouds that were created in the early Eocene. Since water vapor is the only supporting substance used in Type II polar stratospheric clouds, the presence of water vapor in the lower stratosphere is necessary, even though under most conditions it is rare there. When methane is oxidized, a significant amount of water vapor is released. Another requirement for polar stratospheric clouds is cold temperatures to ensure condensation and cloud production. Polar stratospheric cloud production, since it requires cold temperatures, is usually limited to nighttime and winter conditions. With this combination of wetter and colder conditions in the lower stratosphere, polar stratospheric clouds could have formed over wide areas in polar regions.
To test the polar stratospheric clouds effects on the Eocene climate, models were run comparing the effects of polar stratospheric clouds at the poles to an increase in atmospheric carbon dioxide. The polar stratospheric clouds had a warming effect on the poles, increasing temperatures by up to 20 °C in the winter months. A multitude of feedbacks also occurred in the models due to the polar stratospheric clouds' presence. Any ice growth was slowed immensely and would lead to any present ice melting. Only the poles were affected with the change in temperature and the tropics were unaffected, which with an increase in atmospheric carbon dioxide would also cause the tropics to increase in temperature. Due to the warming of the troposphere from the increased greenhouse effect of the polar stratospheric clouds, the stratosphere would cool and would potentially increase the amount of polar stratospheric clouds.
While the polar stratospheric clouds could explain the reduction of the equator to pole temperature gradient and the increased temperatures at the poles during the early Eocene, there are a few drawbacks to maintaining polar stratospheric clouds for an extended period of time. Separate model runs were used to determine the sustainability of the polar stratospheric clouds. It was determined that in order to maintain the lower stratospheric water vapor, methane would need to be continually released and sustained. In addition, the amounts of ice and condensation nuclei would need to be high in order for the polar stratospheric cloud to sustain itself and eventually expand.
Middle Eocene
The Eocene is not only known for containing the warmest period during the Cenozoic; it also marked the decline into an icehouse climate and the rapid expansion of the Antarctic ice sheet. The transition from a warming climate into a cooling climate began at around 49 Ma. Isotopes of carbon and oxygen indicate a shift to a global cooling climate. The cause of the cooling has been attributed to a significant decrease of >2,000 ppm in atmospheric carbon dioxide concentrations. One proposed cause of the reduction in carbon dioxide during the warming to cooling transition was the azolla event. With the equable climate during the early Eocene, warm temperatures in the Arctic allowed for the growth of azolla, a floating aquatic fern, on the Arctic Ocean. These azolla grew rapidly in the enhanced carbon dioxide levels of the early Eocene, which were high relative to current levels. The isolation of the Arctic Ocean, evidenced by euxinia that occurred at this time, led to stagnant waters, and as the azolla sank to the sea floor, they became part of the sediments on the seabed and effectively sequestered the carbon by locking it out of the atmosphere for good. The ability of the azolla to sequester carbon is exceptional, and the enhanced burial of azolla could have had a significant effect on the world atmospheric carbon content and may have been the event to begin the transition into an icehouse climate. The azolla event could have led to a drawdown of atmospheric carbon dioxide of up to 470 ppm. Assuming the carbon dioxide concentrations were at 900 ppmv prior to the azolla event, they would have dropped to 430 ppmv afterwards, or 30 ppmv more than they are today. This cooling trend at the end of the EECO has also been proposed to have been caused by increased siliceous plankton productivity and marine carbon burial, which also helped draw carbon dioxide out of the atmosphere. Cooling after this event, part of a trend known as the Middle-Late Eocene Cooling (MLEC), continued due to a continual decrease in atmospheric carbon dioxide from organic productivity and weathering from mountain building. Many regions of the world became more arid and cold over the course of the stage, such as the Fushun Basin. In East Asia, lake level changes were in sync with global sea level changes over the course of the MLEC.
Global cooling continued until there was a major reversal from cooling to warming in the Bartonian. This warming event, signifying a sudden and temporary reversal of the cooling conditions, is known as the Middle Eocene Climatic Optimum (MECO). At around 41.5 Ma, stable isotopic analysis of samples from Southern Ocean drilling sites indicated a warming event for 600,000 years. A similar shift in carbon isotopes is known from the Northern Hemisphere in the Scaglia Limestones of Italy. Oxygen isotope analysis showed a large negative change in the proportion of heavier oxygen isotopes to lighter oxygen isotopes, which indicates an increase in global temperatures. The warming is considered to be primarily due to carbon dioxide increases, because carbon isotope signatures rule out major methane release during this short-term warming. A sharp increase in atmospheric carbon dioxide was observed with a maximum of 4,000 ppm: the highest amount of atmospheric carbon dioxide detected during the Eocene. Other studies suggest a more modest rise in carbon dioxide levels. The increase in atmospheric carbon dioxide has also been hypothesised to have been driven by increased seafloor spreading rates and metamorphic decarbonation reactions between Australia and Antarctica and increased amounts of volcanism in the region. One possible cause of atmospheric carbon dioxide increase could have been a sudden increase due to metamorphic release due to continental drift and collision of India with Asia and the resulting formation of the Himalayas; however, data on the exact timing of metamorphic release of atmospheric carbon dioxide is not well resolved in the data. Recent studies have mentioned, however, that the removal of the ocean between Asia and India could have released significant amounts of carbon dioxide. Another hypothesis still implicates a diminished negative feedback of silicate weathering as a result of continental rocks having become less weatherable during the warm Early and Middle Eocene, allowing volcanically released carbon dioxide to persist in the atmosphere for longer. Yet another explanation hypothesises that MECO warming was caused by the simultaneous occurrence of minima in both the 400 kyr and 2.4 Myr eccentricity cycles. During the MECO, sea surface temperatures in the Tethys Ocean jumped to 32–36 °C, and Tethyan seawater became more dysoxic. A decline in carbonate accumulation at ocean depths of greater than three kilometres took place synchronously with the peak of the MECO, signifying ocean acidification took place in the deep ocean. On top of that, MECO warming caused an increase in the respiration rates of pelagic heterotrophs, leading to a decreased proportion of primary productivity making its way down to the seafloor and causing a corresponding decline in populations of benthic foraminifera. An abrupt decrease in lakewater salinity in western North America occurred during this warming interval. This warming is short lived, as benthic oxygen isotope records indicate a return to cooling at ~40 Ma.
Late Eocene
At the end of the MECO, the MLEC resumed. Cooling and the carbon dioxide drawdown continued through the late Eocene and into the Eocene–Oligocene transition around 34 Ma. The post-MECO cooling brought with it a major aridification trend in Asia, enhanced by retreating seas. A monsoonal climate remained predominant in East Asia. The cooling during the initial stages of the opening of the Drake Passage ~38.5 Ma was not global, as evidenced by an absence of cooling in the North Atlantic. During the cooling period, benthic oxygen isotopes show the possibility of ice creation and ice increase during this later cooling. The end of the Eocene and beginning of the Oligocene is marked by the massive expansion of the Antarctic ice sheet, which was a major step into the icehouse climate. Multiple proxies, such as oxygen isotopes and alkenones, indicate that at the Eocene–Oligocene transition, the atmospheric carbon dioxide concentration had decreased to around 750–800 ppm, approximately twice that of present levels. Along with the decrease of atmospheric carbon dioxide reducing the global temperature, orbital factors in ice creation can be seen with 100,000-year and 400,000-year fluctuations in benthic oxygen isotope records. Another major contribution to the expansion of the ice sheet was the creation of the Antarctic Circumpolar Current. The creation of the Antarctic Circumpolar Current would isolate the cold water around the Antarctic, which would reduce heat transport to the Antarctic along with creating ocean gyres that result in the upwelling of colder bottom waters. The issue with considering this a factor in the Eocene–Oligocene transition is that the timing of the creation of the circulation is uncertain. For Drake Passage, sediments indicate the opening occurred ~41 Ma while tectonics indicate that this occurred ~32 Ma. Solar activity did not change significantly during the greenhouse-icehouse transition across the Eocene–Oligocene boundary.
Flora
During the early-middle Eocene, forests covered most of the Earth including the poles. Tropical forests extended across much of modern Africa, South America, Central America, India, South-east Asia and China. Paratropical forests grew over North America, Europe and Russia, with broad-leafed evergreen and broad-leafed deciduous forests at higher latitudes.
Polar forests were quite extensive. Fossils and even preserved remains of trees such as swamp cypress and dawn redwood from the Eocene have been found on Ellesmere Island in the Arctic. Even at that time, Ellesmere Island was only a few degrees in latitude further south than it is today. Fossils of subtropical and even tropical trees and plants from the Eocene also have been found in Greenland and Alaska. Tropical rainforests grew as far north as northern North America and Europe.
Palm trees were growing as far north as Alaska and northern Europe during the early Eocene, although they became less abundant as the climate cooled. Dawn redwoods were far more extensive as well.
The earliest definitive Eucalyptus fossils were dated from 51.9 Ma, and were found in the Laguna del Hunco deposit in Chubut province in Argentina.
Cooling began mid-period, and by the end of the Eocene continental interiors had begun to dry, with forests thinning considerably in some areas. The newly evolved grasses were still confined to river banks and lake shores, and had not yet expanded into plains and savannas.
The cooling also brought seasonal changes. Deciduous trees, better able to cope with large temperature changes, began to overtake evergreen tropical species. By the end of the period, deciduous forests covered large parts of the northern continents, including North America, Eurasia and the Arctic, and rainforests held on only in equatorial South America, Africa, India and Australia.
Antarctica began the Eocene fringed with a warm temperate to sub-tropical rainforest. Pollen found in Prydz Bay from the Eocene suggests taiga forest existed there. It became much colder as the period progressed; the heat-loving tropical flora was wiped out, and by the beginning of the Oligocene, the continent hosted deciduous forests and vast stretches of tundra.
Fauna
During the Eocene, plants and marine faunas became quite modern. Many modern bird orders first appeared in the Eocene. The Eocene oceans were warm and teeming with fish and other sea life.
Vertebrates
Mammals
The oldest known fossils of most of the modern mammal orders appear within a brief period during the early Eocene. At the beginning of the Eocene, several new mammal groups arrived in North America. These modern mammals, like artiodactyls, perissodactyls, and primates, had features like long, thin legs, feet, and hands capable of grasping, as well as differentiated teeth adapted for chewing. Dwarf forms reigned. All the members of the new mammal orders were small, under 10 kg; based on comparisons of tooth size, Eocene mammals were only 60% of the size of the primitive Palaeocene mammals that preceded them. They were also smaller than the mammals that followed them. It is assumed that the hot Eocene temperatures favored smaller animals that were better able to manage the heat.
Rodents were widespread. East Asian rodent faunas declined in diversity when they shifted from ctenodactyloid-dominant to cricetid–dipodid-dominant after the MECO.
Both groups of modern ungulates (hoofed animals) became prevalent because of a major radiation between Europe and North America, along with carnivorous ungulates like Mesonyx. Early forms of many other modern mammalian orders appeared, including horses (most notably the Eohippus), bats, proboscidians (elephants), primates, and rodents. Older primitive forms of mammals declined in variety and importance. Important Eocene land fauna fossil remains have been found in western North America, Europe, Patagonia, Egypt, and southeast Asia. Marine fauna are best known from South Asia and the southeast United States.
After the Paleocene–Eocene Thermal Maximum, members of the Equoidea arose in North America and Europe, giving rise to some of the earliest equids such as Sifrhippus and basal European equoids such as the palaeothere Hyracotherium. Some of the later equoids were especially species-rich; Palaeotherium, ranging from small to very large in size, is known from as many as 16 species.
Established large-sized mammals of the Eocene include Uintatherium, Arsinoitherium, and the brontotheres; the former two, unlike the latter, were not ungulates but belonged to groups that became extinct shortly after their establishment.
Large terrestrial mammalian predators had already existed since the Paleocene, but new forms now arose like Hyaenodon and Daphoenus (the earliest lineage of a once-successful predatory family known as bear dogs). Entelodonts meanwhile established themselves as some of the largest omnivores. The first nimravids, including Dinictis, established themselves as amongst the first feliforms to appear. Their groups became highly successful and continued to live past the Eocene.
Basilosaurus is a well-known Eocene whale, but whales as a group had become very diverse during the Eocene, which is when the major transitions from being terrestrial to fully aquatic in cetaceans occurred. The first sirenians were evolving at this time, and would eventually evolve into the extant manatees and dugongs.
Birds
Eocene birds include some enigmatic groups with resemblances to modern forms, some of which continued from the Paleocene. Bird taxa of the Eocene include carnivorous psittaciforms, such as Messelasturidae, Halcyornithidae, large flightless forms such as Gastornis and Eleutherornis, the long-legged falcon Masillaraptor, ancient galliforms such as Gallinuloides, putative rail relatives of the family Songziidae, various pseudotooth birds such as Gigantornis, the ibis relative Rhynchaeites, primitive swifts of the genus Aegialornis, and primitive penguins such as Archaeospheniscus and Inkayacu.
Many Eocene birds in Central Europe evolved tuberculate vertebrae as an adaptation against predation, with flightless birds facing low predation pressure during this time as a result.
Fishes
Fishes, both Chondrichthyes such as sharks and rays, and Osteichthyes (bony fishes), are abundant in the London Clay.
Reptiles
Reptile fossils from this time, such as fossils of pythons and turtles, are abundant.
Molluscs
Arthropods
Several rich fossil insect faunas are known from the Eocene, notably the Baltic amber found mainly along the south coast of the Baltic Sea, amber from the Paris Basin, France, the Fur Formation, Denmark, and the Bembridge Marls from the Isle of Wight, England. Insects found in Eocene deposits mostly belong to genera that exist today, though their range has often shifted since the Eocene. For instance the bibionid genus Plecia is common in fossil faunas from presently temperate areas, but only lives in the tropics and subtropics today. Platypleurin cicadas diversified during the Eocene. Ostracods flourished in the oceans.
Other phyla
Microbes
Calcareous nannoplankton were a prominent feature of Eocene marine ecosystems.
Ethology
Ethology is a branch of zoology that studies the behaviour of non-human animals. It has its scientific roots in the work of Charles Darwin and of American and German ornithologists of the late 19th and early 20th century, including Charles O. Whitman, Oskar Heinroth, and Wallace Craig. The modern discipline of ethology is generally considered to have begun during the 1930s with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch, the three winners of the 1973 Nobel Prize in Physiology or Medicine. Ethology combines laboratory and field science, with a strong relation to neuroanatomy, ecology, and evolutionary biology.
Etymology
The modern term ethology derives from the Greek ἦθος (ethos), meaning "character", and -λογία (-logia), meaning "the study of". The term was first popularized by the American entomologist William Morton Wheeler in 1902.
History
The beginnings of ethology
Ethologists have been concerned particularly with the evolution of behaviour and its understanding in terms of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose 1872 book The Expression of the Emotions in Man and Animals influenced many ethologists. He pursued his interest in behaviour by encouraging his protégé George Romanes, who investigated animal learning and intelligence using an anthropomorphic method, anecdotal cognitivism, that did not gain scientific support.
Other early ethologists, such as Eugène Marais, Charles O. Whitman, Oskar Heinroth, Wallace Craig and Julian Huxley, instead concentrated on behaviours that can be called instinctive in that they occur in all members of a species under specified circumstances. Their starting point for studying the behaviour of a new species was to construct an ethogram, a description of the main types of behaviour with their frequencies of occurrence. This provided an objective, cumulative database of behaviour.
Growth of the field
Due to the work of Konrad Lorenz and Niko Tinbergen, ethology developed strongly in continental Europe during the years prior to World War II. After the war, Tinbergen moved to the University of Oxford, and ethology became stronger in the UK, with the additional influence of William Thorpe, Robert Hinde, and Patrick Bateson at the University of Cambridge.
Lorenz, Tinbergen, and von Frisch were jointly awarded the Nobel Prize in Physiology or Medicine in 1973 for their work of developing ethology.
Ethology is now a well-recognized scientific discipline, with its own journals such as Animal Behaviour, Applied Animal Behaviour Science, Animal Cognition, Behaviour, Behavioral Ecology and Ethology. In 1972, the International Society for Human Ethology was founded along with its journal, Human Ethology.
Social ethology
In 1972, the English ethologist John H. Crook distinguished comparative ethology from social ethology, and argued that much of the ethology that had existed so far was really comparative ethology—examining animals as individuals—whereas, in the future, ethologists would need to concentrate on the behaviour of social groups of animals and the social structure within them.
E. O. Wilson's book Sociobiology: The New Synthesis appeared in 1975, and since that time, the study of behaviour has been much more concerned with social aspects. It has been driven by the Darwinism associated with Wilson, Robert Trivers, and W. D. Hamilton. The related development of behavioural ecology has helped transform ethology. Furthermore, a substantial rapprochement with comparative psychology has occurred, so the modern scientific study of behaviour offers a spectrum of approaches. In 2020, Tobias Starzak and Albert Newen from the Institute of Philosophy II at the Ruhr University Bochum postulated that animals may have beliefs.
Determinants of behaviour
Behaviour is determined by three major factors, namely inborn instincts, learning, and environmental factors. The latter include abiotic and biotic factors. Abiotic factors such as temperature or light conditions have dramatic effects on animals, especially if they are ectothermic or nocturnal. Biotic factors include members of the same species (e.g. sexual behavior), predators (fight or flight), or parasites and diseases.
Instinct
Webster's Dictionary defines instinct as "A largely inheritable and unalterable tendency of an organism to make a complex and specific response to environmental stimuli without involving reason". This covers fixed action patterns like beak movements of bird chicks, and the waggle dance of honeybees.
Fixed action patterns
An important development, associated with the name of Konrad Lorenz though probably due more to his teacher, Oskar Heinroth, was the identification of fixed action patterns. Lorenz popularized these as instinctive responses that would occur reliably in the presence of identifiable stimuli called sign stimuli or "releasing stimuli". Fixed action patterns are now considered to be instinctive behavioural sequences that are relatively invariant within the species and that almost inevitably run to completion.
One example of a releaser is the beak movements of many bird species performed by newly hatched chicks, which stimulates the mother to regurgitate food for her offspring. Other examples are the classic studies by Tinbergen on the egg-retrieval behaviour and the effects of a "supernormal stimulus" on the behaviour of graylag geese.
One investigation of this kind was the study of the waggle dance ("dance language") in bee communication by Karl von Frisch.
Learning
Habituation
Habituation is a simple form of learning and occurs in many animal taxa. It is the process whereby an animal ceases responding to a stimulus. Often, the response is an innate behavior. Essentially, the animal learns not to respond to irrelevant stimuli. For example, prairie dogs (Cynomys ludovicianus) give alarm calls when predators approach, causing all individuals in the group to quickly scramble down burrows. When prairie dog towns are located near trails used by humans, giving alarm calls every time a person walks by is expensive in terms of time and energy. Habituation to humans is therefore an important behavior in this context.
Associative learning
Associative learning in animal behaviour is any learning process in which a new response becomes associated with a particular stimulus. The first studies of associative learning were made by the Russian physiologist Ivan Pavlov, who observed that dogs trained to associate food with the ringing of a bell would salivate on hearing the bell.
Imprinting
Imprinting enables the young to discriminate the members of their own species, vital for reproductive success. This important type of learning only takes place in a very limited period of time. Konrad Lorenz observed that the young of birds such as geese and chickens followed their mothers spontaneously from almost the first day after they were hatched, and he discovered that this response could be elicited by an arbitrary stimulus if the eggs were incubated artificially and the stimulus were presented during a critical period that continued for a few days after hatching.
Cultural learning
Observational learning
Imitation
Imitation is an advanced behavior whereby an animal observes and exactly replicates the behavior of another.
The National Institutes of Health reported that capuchin monkeys preferred the company of researchers who imitated them to that of researchers who did not. The monkeys not only spent more time with their imitators but also preferred to engage in a simple task with them even when provided with the option of performing the same task with a non-imitator. Imitation has been observed in recent research on chimpanzees; not only did these chimps copy the actions of another individual, but when given a choice, they preferred to imitate the actions of the higher-ranking elder chimpanzee rather than those of the lower-ranking young chimpanzee.
Stimulus and local enhancement
Animals can learn using observational learning but without the process of imitation. One way is stimulus enhancement, in which individuals become interested in an object as the result of observing others interacting with the object. Increased interest in an object can result in object manipulation, which allows for new object-related behaviours by trial-and-error learning. Haggerty (1909) devised an experiment in which a monkey climbed up the side of a cage, placed its arm into a wooden chute, and pulled a rope in the chute to release food. Another monkey was provided an opportunity to obtain the food after watching a monkey go through this process on four occasions. The monkey performed a different method and finally succeeded after trial-and-error. In local enhancement, a demonstrator attracts an observer's attention to a particular location. Local enhancement has been observed to transmit foraging information among birds, rats and pigs. The stingless bee (Trigona corvina) uses local enhancement to locate other members of its colony and food resources.
Social transmission
A well-documented example of social transmission of a behaviour occurred in a group of macaques on Hachijojima Island, Japan. The macaques lived in the inland forest until the 1960s, when a group of researchers started giving them potatoes on the beach: soon, they started venturing onto the beach, picking the potatoes from the sand, and cleaning and eating them. About one year later, an individual was observed bringing a potato to the sea, putting it into the water with one hand, and cleaning it with the other. This behaviour was soon expressed by the individuals living in contact with her; when they gave birth, this behaviour was also expressed by their young—a form of social transmission.
Teaching
Teaching is a highly specialized aspect of learning in which the "teacher" (demonstrator) adjusts their behaviour to increase the probability of the "pupil" (observer) achieving the desired end-result of the behaviour. For example, orcas are known to intentionally beach themselves to catch pinniped prey. Mother orcas teach their young to catch pinnipeds by pushing them onto the shore and encouraging them to attack the prey. Because the mother orca is altering her behaviour to help her offspring learn to catch prey, this is evidence of teaching. Teaching is not limited to mammals. Many insects, for example, have been observed demonstrating various forms of teaching to obtain food. Ants, for example, will guide each other to food sources through a process called "tandem running," in which an ant will guide a companion ant to a source of food. It has been suggested that the pupil ant is able to learn this route to obtain food in the future or teach the route to other ants. This behaviour of teaching is also exemplified by crows, specifically New Caledonian crows. The adults (whether individual or in families) teach their young adolescent offspring how to construct and utilize tools. For example, Pandanus branches are used to extract insects and other larvae from holes within trees.
Mating and the fight for supremacy
Individual reproduction is the most important phase in the proliferation of individuals or genes within a species: for this reason, elaborate mating rituals exist, which can be very complex even though they are often regarded as fixed action patterns. The stickleback's complex mating ritual, studied by Tinbergen, is regarded as a notable example.
Often in social life, animals fight for the right to reproduce, as well as for social supremacy. A common example of fighting for social and sexual supremacy is the so-called pecking order among poultry. When a group of poultry cohabits for a certain length of time, it establishes a pecking order. In these groups, one chicken dominates the others and can peck without being pecked. A second chicken can peck all the others except the first, and so on. Chickens higher in the pecking order may at times be distinguished by their healthier appearance when compared to lower-ranking chickens. While the pecking order is being established, frequent and violent fights can occur, but once it is established, it is broken only when other individuals enter the group, in which case the pecking order re-establishes itself from scratch.
Social behaviour
Several animal species, including humans, tend to live in groups. Group size is a major aspect of their social environment. Social life is probably a complex and effective survival strategy. It may be regarded as a sort of symbiosis among individuals of the same species: a society is composed of a group of individuals belonging to the same species living within well-defined rules on food management, role assignments and reciprocal dependence.
When biologists interested in evolution theory first started examining social behaviour, some apparently unanswerable questions arose, such as how the birth of sterile castes, as in bees, could be explained through an evolving mechanism that emphasizes the reproductive success of as many individuals as possible, or why, amongst animals living in small groups like squirrels, an individual would risk its own life to save the rest of the group. These behaviours may be examples of altruism. Not all behaviours are altruistic, however. For example, vengeful behaviour was at one point claimed to have been observed exclusively in Homo sapiens, but other species have since been reported to be vengeful, including chimpanzees, along with anecdotal reports of vengeful camels.
Altruistic behaviour has been explained by the gene-centred view of evolution.
Benefits and costs of group living
One advantage of group living is decreased predation. If the number of predator attacks stays the same despite increasing prey group size, each individual has a reduced risk of attack through the dilution effect. Further, according to the selfish herd theory, the fitness benefits associated with group living vary depending on the location of an individual within the group. The theory suggests that conspecifics positioned at the centre of a group experience a reduced likelihood of predation, while those at the periphery become more vulnerable to attack. In groups, prey can also actively reduce their predation risk through more effective defence tactics, or through earlier detection of predators through increased vigilance.
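The dilution effect is easy to illustrate numerically. The following is a minimal Python sketch, assuming a predator makes a fixed number of successful attacks per season and picks targets at random; the function name and parameter values are illustrative assumptions, not figures from the literature.

```python
# Minimal dilution-effect sketch: a fixed number of attacks per season,
# spread at random over the group, gives each individual a risk
# inversely proportional to group size.

def per_individual_risk(attacks_per_season: float, group_size: int) -> float:
    """Expected number of times any one individual is attacked."""
    return attacks_per_season / group_size

for n in (1, 5, 25, 100):
    print(f"group size {n:>3}: expected attacks = {per_individual_risk(2.0, n):.3f}")
```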
Another advantage of group living is an increased ability to forage for food. Group members may exchange information about food sources, facilitating the process of resource location. Honeybees are a notable example of this, using the waggle dance to communicate the location of flowers to the rest of their hive. Predators also receive benefits from hunting in groups, through using better strategies and being able to take down larger prey.
Some disadvantages accompany living in groups. Living in close proximity to other animals can facilitate the transmission of parasites and disease, and groups that are too large may also experience greater competition for resources and mates.
Group size
Theoretically, social animals should have optimal group sizes that maximize the benefits and minimize the costs of group living. However, in nature, most groups are stable at slightly larger than optimal sizes. Because it generally benefits an individual to join an optimally-sized group, despite slightly decreasing the advantage for all members, groups may continue to increase in size until it is more advantageous to remain alone than to join an overly full group.
Tinbergen's four questions for ethologists
Tinbergen argued that ethology needed to include four kinds of explanation in any instance of behaviour:
Function – How does the behaviour affect the animal's chances of survival and reproduction? Why does the animal respond that way instead of some other way?
Causation – What are the stimuli that elicit the response, and how has it been modified by recent learning?
Development – How does the behaviour change with age, and what early experiences are necessary for the animal to display the behaviour?
Evolutionary history – How does the behaviour compare with similar behaviour in related species, and how might it have begun through the process of phylogeny?
These explanations are complementary rather than mutually exclusive—all instances of behaviour require an explanation at each of these four levels. For example, the function of eating is to acquire nutrients (which ultimately aids survival and reproduction), but the immediate cause of eating is hunger (causation). Hunger and eating are evolutionarily ancient and are found in many species (evolutionary history), and develop early within an organism's lifespan (development). It is easy to confuse such questions—for example, to argue that people eat because they are hungry and not to acquire nutrients—without realizing that the reason people experience hunger is that it causes them to acquire nutrients.
| Biology and health sciences | Ethology | null |
9426 | https://en.wikipedia.org/wiki/Electromagnetic%20radiation | Electromagnetic radiation | In physics, electromagnetic radiation (EMR) is the set of waves of an electromagnetic (EM) field, which propagate through space and carry momentum and electromagnetic radiant energy.
Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. There, depending on the frequency of oscillation, different wavelengths of electromagnetic spectrum are produced. In homogeneous, isotropic media, the oscillations of the two fields are on average perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave.
Electromagnetic radiation is commonly referred to as "light", EM, EMR, or electromagnetic waves.
The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength, the electromagnetic spectrum includes: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays.
Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum, and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field, while the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.
In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and proportional to frequency according to Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is the Planck constant. Thus, higher frequency photons have more energy. For example, a gamma ray photon has many orders of magnitude more energy than an extremely low frequency radio wave photon.
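Planck's equation makes this scale difference concrete. Below is a short Python sketch of E = hf; the gamma-ray and extremely-low-frequency values chosen are representative orders of magnitude, an assumption for illustration rather than figures from the text.

```python
# Photon energy from Planck's equation E = h * f.
PLANCK_H = 6.626e-34  # Planck constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy in joules of a single photon of the given frequency."""
    return PLANCK_H * frequency_hz

gamma = photon_energy(1e20)  # representative gamma-ray frequency, ~10^20 Hz
elf = photon_energy(50.0)    # representative extremely-low-frequency radio wave
print(f"gamma-ray photon: {gamma:.3e} J")
print(f"ELF photon:       {elf:.3e} J")
print(f"energy ratio:     {gamma / elf:.1e}")
```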
The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of lower energy ultraviolet or lower frequencies (i.e., near ultraviolet, visible light, infrared, microwaves, and radio waves) is non-ionizing because its photons do not individually have enough energy to ionize atoms or molecules or to break chemical bonds. The effect of non-ionizing radiation on chemical systems and living tissue is primarily simply heating, through the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are ionizing – individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. Ionizing radiation can cause chemical reactions and damage living cells beyond simply heating, and can be a health hazard and dangerous.
Physics
Theory
Maxwell's equations
James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves.
Near and far fields
Maxwell's equations established that some charges and currents (sources) produce local electromagnetic fields near them that do not radiate. Currents directly produce magnetic fields, but these are of a magnetic-dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric-dipole type electrical field, but this also declines with distance. These fields make up the near field. Neither of these behaviours is responsible for EM radiation. Instead, they only efficiently transfer energy to a receiver very close to the source, such as inside a transformer. The near field has strong effects on its source, with any energy withdrawn by a receiver causing an increased load (decreased electrical reactance) on the source. The near field does not propagate freely into space, carrying energy away without a distance limit, but rather oscillates, returning its energy to the transmitter if it is not absorbed by a receiver.
By contrast, the far field is composed of radiation that is free of the transmitter, in the sense that the transmitter requires the same power to send changes in the field out regardless of whether anything absorbs the signal, e.g. a radio station does not need to increase its power when more receivers use the signal. This far part of the electromagnetic field is electromagnetic radiation. The far fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, are completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation from an isotropic source decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field, the near field, which varies in intensity according to an inverse cube power law, and thus does not transport a conserved amount of energy over distances but instead fades with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil).
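The inverse-square law stated above is straightforward to check numerically. The sketch below assumes an idealized isotropic source; the 100 W source power is an arbitrary illustrative choice.

```python
import math

def power_density(source_power_w: float, distance_m: float) -> float:
    """Far-field power density (W/m^2) of an isotropic radiator:
    the total power spread over a sphere of area 4*pi*r^2."""
    return source_power_w / (4.0 * math.pi * distance_m ** 2)

for r in (1.0, 2.0, 10.0):
    print(f"r = {r:>4} m: {power_density(100.0, r):.4f} W/m^2")
# Doubling the distance cuts the power density by a factor of four.
```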
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the near field, and do not comprise electromagnetic radiation.
Properties
Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves.
The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.
EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair.
Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.
A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.
Electromagnetic waves can be polarized, reflected, refracted, or diffracted, and can interfere with each other.
Wave model
In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This follows from the source-free divergence equations $\nabla \cdot \mathbf{E} = 0$ and $\nabla \cdot \mathbf{B} = 0$: these equations require that any electromagnetic wave must be a transverse wave, where the electric field and the magnetic field are both perpendicular to the direction of wave propagation.
The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). In the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a time-change in one type of field is proportional to the curl of the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below).
An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.
A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atom nuclei. Frequency is inversely proportional to wavelength, according to the equation:

$$v = f \lambda$$

where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
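As a quick numerical illustration of v = fλ in a vacuum, the sketch below converts a few frequencies to wavelengths; the example frequencies are arbitrary representative choices.

```python
C_VACUUM = 2.998e8  # speed of light in vacuum, m/s

def vacuum_wavelength(frequency_hz: float) -> float:
    """Wavelength in metres for a given frequency: lambda = c / f."""
    return C_VACUUM / frequency_hz

for name, f in [("FM radio", 1.0e8), ("green light", 5.45e14), ("hard X-ray", 3.0e18)]:
    print(f"{name:>11}: f = {f:.2e} Hz -> lambda = {vacuum_wavelength(f):.3e} m")
```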
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization.
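The Fourier decomposition just described can be demonstrated numerically. This sketch, assuming NumPy is available, builds a waveform from two sinusoidal components and recovers their frequencies with a discrete Fourier transform; the component frequencies and amplitudes are arbitrary illustrative values.

```python
import numpy as np

# Sample one second of a waveform built from two monochromatic components.
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
wave = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Fourier analysis resolves the waveform into its frequency spectrum.
spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(len(wave), d=1.0 / fs)
peaks = sorted(freqs[np.argsort(np.abs(spectrum))[-2:]])
print(f"dominant components near {peaks} Hz")  # ~[50.0, 120.0]
```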
Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.
The energy in electromagnetic waves is sometimes called radiant energy.
Particle model and quantum theory
An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years, and it later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by

$$E = hf = \frac{hc}{\lambda}$$

where h is the Planck constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.
Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:

$$p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}$$
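A short sketch tying the energy and momentum relations together; the 500 nm wavelength is an arbitrary green-light example.

```python
PLANCK_H = 6.626e-34  # Planck constant, J*s
C_VACUUM = 2.998e8    # speed of light in vacuum, m/s

wavelength = 500e-9   # arbitrary example: green light, 500 nm
energy = PLANCK_H * C_VACUUM / wavelength  # E = h*c / lambda
momentum = PLANCK_H / wavelength           # p = h / lambda
print(f"E = {energy:.3e} J, p = {momentum:.3e} kg*m/s")
print(f"consistency check, E / c = {energy / C_VACUUM:.3e} kg*m/s (equals p)")
```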
The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.
Wave–particle duality
The modern theory that explains the nature of light includes the notion of wave–particle duality.
Wave and particle effects of electromagnetic radiation
Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the light beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.
These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.
Propagation speed
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current.
As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is the Planck constant, 6.626 × 10−34 J·s, and f is the frequency of the wave.
In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.
History of discovery
Electromagnetic radiation of wavelengths other than those of visible light were discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.
In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.
Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties.
The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made), unless they result from bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.
Electromagnetic spectrum
EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal waves (monochromatic radiation), which in turn can each be classified into these regions of the EMR spectrum.
For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the Zero point wave field of the electromagnetic vacuum.
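To make the idea of a power spectral density concrete, the sketch below, assuming SciPy and NumPy are available, estimates the PSD of a synthetic broadband random signal with Welch's method; the signal and its parameters are purely illustrative.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0                      # sampling rate, Hz
noise = rng.normal(size=10_000)  # broadband random signal (white noise)

# Welch's method averages periodograms of overlapping segments;
# phase information is discarded, leaving power per unit frequency.
freqs, psd = welch(noise, fs=fs, nperseg=1024)
print(f"mean PSD level: {psd.mean():.2e} units^2/Hz (roughly flat for white noise)")
```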
The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.
Radio and microwave
When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge.
Electromagnetic radiation with wavelengths ranging from one meter down to one millimeter is called microwaves, with frequencies between 300 MHz (0.3 GHz) and 300 GHz.
At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.
Infrared
Like radio and microwave, infrared (IR) is reflected by metals (as is most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below).
Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).
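The subdivision scheme above amounts to a simple interval lookup; here is a small sketch of it in Python, with the function and table names being illustrative assumptions.

```python
# Classify a wavelength (in micrometres) into the infrared subregions
# listed above; boundaries follow the commonly used scheme in the text.
IR_BANDS = [
    (0.75, 1.4, "near-infrared"),
    (1.4, 3.0, "short-wavelength infrared"),
    (3.0, 8.0, "mid-wavelength infrared"),
    (8.0, 15.0, "long-wavelength infrared"),
    (15.0, 1000.0, "far infrared"),
]

def ir_band(wavelength_um: float) -> str:
    for low, high, name in IR_BANDS:
        if low <= wavelength_um < high:
            return name
    return "outside the infrared range"

print(ir_band(10.0))  # long-wavelength infrared
```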
Visible light
Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light.
Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels.
Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons.
Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation.
Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception. When a photon is absorbed, the retinal permanently changes structure from cis to trans, and requires a protein to convert it back, i.e. reset it to be able to function as a light detector again.
Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.
Ultraviolet
As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects.
At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volts (eV), corresponding to wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV". Ionizing UV is strongly filtered by the Earth's atmosphere.
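The 124 nm figure follows directly from the Planck relation, λ = hc/E; a quick conversion sketch with rounded constants:

```python
PLANCK_H = 6.626e-34  # Planck constant, J*s
C_VACUUM = 2.998e8    # speed of light in vacuum, m/s
EV_TO_J = 1.602e-19   # joules per electron volt

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength (nm) for a given photon energy: lambda = h*c / E."""
    return PLANCK_H * C_VACUUM / (energy_ev * EV_TO_J) * 1e9

print(f"10 eV -> {wavelength_nm(10.0):.0f} nm")  # ~124 nm
print(f"33 eV -> {wavelength_nm(33.0):.1f} nm")  # ~37.6 nm
```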
X-rays and gamma rays
Electromagnetic radiation composed of photons that carry minimum-ionization energy or more (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.
Atmosphere and magnetosphere
Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.
Visible light is well transmitted in air, a property known as an atmospheric window, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor and CO2.
Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).
Thermal and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.
Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms eventually most of the energy becomes thermal energy all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the thermal energy, and thus the temperature, of) a material, when it is absorbed.
The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.
Biological effects
Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to near ultraviolet) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low frequency fields that are too weak to cause significant heating could not possibly have any biological effect.
Some research suggests that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter does not strictly qualify as EM radiation) and modulated RF and microwave fields can have biological effects, though the significance of this is unclear.
The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B – possibly carcinogenic. This group contains possible carcinogens such as lead, DDT, and styrene.
At higher frequencies (some of visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.
Thus, at UV frequencies and higher, electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
Use as a weapon
The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin. A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers a heat ray based on electromagnetic energy at levels capable of injuring human tissue. An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working on his death ray weapon based on a microwave magnetron from the 1920s (a normal microwave oven creates a tissue-damaging cooking effect inside the oven at around 2 kV/m).
Derivation from electromagnetic theory
Electromagnetic waves are predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. There are nontrivial solutions of the homogeneous Maxwell's equations (without charges or currents), describing waves of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:

$$\nabla \cdot \mathbf{E} = 0 \quad (1)$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \quad (2)$$
$$\nabla \cdot \mathbf{B} = 0 \quad (3)$$
$$\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \quad (4)$$

where
$\mathbf{E}$ and $\mathbf{B}$ are the electric field (measured in V/m or N/C) and the magnetic field (measured in T or Wb/m2), respectively;
$\nabla \cdot \mathbf{X}$ and $\nabla \times \mathbf{X}$ yield the divergence and the curl of a vector field $\mathbf{X}$;
$\frac{\partial \mathbf{B}}{\partial t}$ and $\frac{\partial \mathbf{E}}{\partial t}$ are partial derivatives (rate of change in time, with location fixed) of the magnetic and electric field;
$\mu_0$ is the permeability of a vacuum (4π × 10−7 H/m), and $\varepsilon_0$ is the permittivity of a vacuum (8.85 × 10−12 F/m).

Besides the trivial solution
$$\mathbf{E} = \mathbf{B} = \mathbf{0},$$
useful solutions can be derived with the following vector identity, valid for all vectors $\mathbf{A}$ in some vector field:
$$\nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^2 \mathbf{A}.$$

Taking the curl of the second Maxwell equation (2) yields:
$$\nabla \times \left( \nabla \times \mathbf{E} \right) = \nabla \times \left( -\frac{\partial \mathbf{B}}{\partial t} \right) \quad (5)$$

Evaluating the left hand side of (5) with the above identity and simplifying using (1) yields:
$$\nabla \times \left( \nabla \times \mathbf{E} \right) = \nabla \left( \nabla \cdot \mathbf{E} \right) - \nabla^2 \mathbf{E} = -\nabla^2 \mathbf{E} \quad (6)$$

Evaluating the right hand side of (5) by exchanging the sequence of derivatives and inserting the fourth Maxwell equation (4) yields:
$$\nabla \times \left( -\frac{\partial \mathbf{B}}{\partial t} \right) = -\frac{\partial}{\partial t} \left( \nabla \times \mathbf{B} \right) = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} \quad (7)$$

Combining (6) and (7) gives a vector-valued differential equation for the electric field, solving the homogeneous Maxwell equations:
$$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}$$

Taking the curl of the fourth Maxwell equation (4) results in a similar differential equation for a magnetic field solving the homogeneous Maxwell equations:
$$\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2}$$

Both differential equations have the form of the general wave equation for waves propagating with speed $c_0$, where $f$ is a function of time and location which gives the amplitude of the wave at some time at a certain location:
$$\nabla^2 f = \frac{1}{c_0^2} \frac{\partial^2 f}{\partial t^2}$$

This is also written as:
$$\Box f = 0,$$
where $\Box$ denotes the so-called d'Alembert operator, which in Cartesian coordinates is given as:
$$\Box = \nabla^2 - \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} - \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2}$$

Comparing the terms for the speed of propagation yields, in the case of the electric and magnetic fields:
$$c_0 = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}.$$

This is the speed of light in vacuum. Thus Maxwell's equations connect the vacuum permittivity $\varepsilon_0$, the vacuum permeability $\mu_0$, and the speed of light, $c_0$, via the above equation. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics; however, Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light.
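This closing relation can be checked numerically; the short Python sketch below recovers the vacuum speed of light from the two constants quoted above.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
EPSILON_0 = 8.854e-12      # vacuum permittivity, F/m

c0 = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(f"c0 = {c0:.4e} m/s")  # ~2.998e8 m/s, the speed of light
```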
These are only two equations versus the original four, so more information pertains to these waves hidden within Maxwell's equations. A generic vector wave for the electric field has the form

$$\mathbf{E} = \mathbf{E}_0 f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right)$$

Here, $\mathbf{E}_0$ is a constant vector, $f$ is any second differentiable function, $\hat{\mathbf{k}}$ is a unit vector in the direction of propagation, and $\mathbf{x}$ is a position vector. This is a generic solution to the wave equation. In other words,

$$\nabla^2 \mathbf{E} = \frac{1}{c_0^2} \frac{\partial^2 \mathbf{E}}{\partial t^2},$$

for a generic wave traveling in the $\hat{\mathbf{k}}$ direction.

From the first of Maxwell's equations, we get

$$\nabla \cdot \mathbf{E} = \hat{\mathbf{k}} \cdot \mathbf{E}_0 f'\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = 0$$

Thus,

$$\mathbf{E} \cdot \hat{\mathbf{k}} = 0,$$

which implies that the electric field is orthogonal to the direction the wave propagates. The second of Maxwell's equations yields the magnetic field, namely,

$$\mathbf{B} = \frac{1}{c_0} \hat{\mathbf{k}} \times \mathbf{E}.$$

Thus,

$$\mathbf{B} \cdot \hat{\mathbf{k}} = 0, \qquad \mathbf{B} \cdot \mathbf{E} = 0.$$

The remaining equations will be satisfied by this choice of $\mathbf{E}$ and $\mathbf{B}$.
The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, $E_0 = c_0 B_0$, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as $\mathbf{E} \times \mathbf{B}$. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are in-phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first-order in time, resulting in the same phase shift for both fields in each mathematical operation.
From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.
More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.
| Physical sciences | Physics | null |
9450 | https://en.wikipedia.org/wiki/Electrical%20telegraph | Electrical telegraph | Electrical telegraphy is a point-to-point text messaging system, primarily used from the 1840s until the late 20th century. It was the first electrical telecommunications system and the most widely used of a number of early messaging systems called telegraphs, that were devised to send text messages more quickly than physically carrying them. Electrical telegraphy can be considered the first example of electrical engineering.
Text telegraphy consisted of two or more geographically separated stations, called telegraph offices. The offices were connected by wires, usually supported overhead on utility poles. Many electrical telegraph systems were invented that operated in different ways, but the ones that became widespread fit into two broad categories. First are the needle telegraphs, in which electric current sent down the telegraph line produces electromagnetic force to move a needle-shaped pointer into position over a printed list. Early needle telegraph models used multiple needles, thus requiring multiple wires to be installed between stations. The first commercial needle telegraph system and the most widely used of its type was the Cooke and Wheatstone telegraph, invented in 1837. The second category comprises armature systems, in which the current activates a telegraph sounder that makes a click; communication on this type of system relies on sending clicks in coded rhythmic patterns. The archetype of this category was the Morse system and the code associated with it, both invented by Samuel Morse in 1838. In 1865, the Morse system became the standard for international communication, using a modified form of Morse's code that had been developed for German railways.
Electrical telegraphs were used by the emerging railway companies to provide signals for train control systems, minimizing the chances of trains colliding with each other. This was built around the signalling block system in which signal boxes along the line communicate with neighbouring boxes by telegraphic sounding of single-stroke bells and three-position needle telegraph instruments.
In the 1840s, the electrical telegraph superseded optical telegraph systems such as semaphores, becoming the standard way to send urgent messages. By the latter half of the century, most developed nations had commercial telegraph networks with local telegraph offices in most cities and towns, allowing the public to send messages (called telegrams) addressed to any person in the country, for a fee.
Beginning in 1850, submarine telegraph cables allowed for the first rapid communication between people on different continents. The telegraph's nearly instant transmission of messages across and between continents had widespread social and economic impacts. The electric telegraph led to Guglielmo Marconi's invention of wireless telegraphy, the first means of radiowave telecommunication, which he began developing in 1894.
In the early 20th century, manual operation of telegraph machines was slowly replaced by teleprinter networks. Increasing use of the telephone pushed telegraphy into only a few specialist uses; its use by the general public dwindled to greetings for special occasions. The rise of the Internet and email in the 1990s largely made dedicated telegraphy networks obsolete.
History
Precursors
Prior to the electric telegraph, visual systems were used to communicate over distances of land, including beacons, smoke signals, flag semaphore, and optical telegraphs.
An auditory predecessor was West African talking drums. In the 19th century, Yoruba drummers used talking drums to mimic human tonal language to communicate complex messages – usually regarding news of birth, ceremonies, and military conflict – over 4–5 mile distances.
Early work
From early studies of electricity, electrical phenomena were known to travel with great speed, and many experimenters worked on the application of electricity to communications at a distance. All the known effects of electricity – such as sparks, electrostatic attraction, chemical changes, electric shocks, and later electromagnetism – were applied to the problems of detecting controlled transmissions of electricity at various distances.
In 1753, an anonymous writer in the Scots Magazine suggested an electrostatic telegraph. Using one wire for each letter of the alphabet, a message could be transmitted by connecting the wire terminals in turn to an electrostatic machine, and observing the deflection of pith balls at the far end. The writer has never been positively identified, but the letter was signed C.M. and posted from Renfrew, leading to a Charles Marshall of Renfrew being suggested. Telegraphs employing electrostatic attraction were the basis of early experiments in electrical telegraphy in Europe, but were abandoned as being impractical and were never developed into a useful communication system.
In 1774, Georges-Louis Le Sage built an early electric telegraph. The telegraph had a separate wire for each of the 26 letters of the alphabet, and its range extended only between two rooms of his home.
In 1800, Alessandro Volta invented the voltaic pile, providing a continuous current of electricity for experimentation. This became a source of a low-voltage current that could be used to produce more distinct effects, and which was far less limited than the momentary discharge of an electrostatic machine, which, with Leyden jars, had been the only previously known human-made sources of electricity.
Another very early experiment in electrical telegraphy was an "electrochemical telegraph" created by the German physician, anatomist and inventor Samuel Thomas von Sömmering in 1809, based on an earlier 1804 design by Spanish polymath and scientist Francisco Salva Campillo. Both their designs employed multiple wires (up to 35) to represent almost all Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. An electric current was sequentially applied by the sender through the various wires representing each letter of a message; at the recipient's end, the currents electrolysed the acid in the tubes in sequence, releasing streams of hydrogen bubbles next to each associated letter or numeral. The telegraph receiver's operator would watch the bubbles and could then record the transmitted message. This is in contrast to later telegraphs that used a single wire (with ground return).
Hans Christian Ørsted discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle. In the same year Johann Schweigger invented the galvanometer, with a coil of wire around a compass, that could be used as a sensitive indicator for an electric current. Also that year, André-Marie Ampère suggested that telegraphy could be achieved by placing small magnets under the ends of a set of wires, one pair of wires for each letter of the alphabet. He was apparently unaware of Schweigger's invention at the time, which would have made his system much more sensitive. In 1825, Peter Barlow tried Ampère's idea but could only get it to work over a short distance and declared it impractical. In 1830 William Ritchie improved on Ampère's design by placing the magnetic needles inside a coil of wire connected to each pair of conductors. He successfully demonstrated it, showing the feasibility of the electromagnetic telegraph, but only within a lecture hall.
In 1825, William Sturgeon invented the electromagnet, with a single winding of uninsulated wire on a piece of varnished iron, which increased the magnetic force produced by electric current. Joseph Henry improved it in 1828 by placing several windings of insulated wire around the bar, creating a much more powerful electromagnet which could operate a telegraph through the high resistance of long telegraph wires. During his tenure at The Albany Academy from 1826 to 1832, Henry first demonstrated the theory of the 'magnetic telegraph' by ringing a bell through a length of wire strung around the room in 1831.
In 1835, Joseph Henry and Edward Davy independently invented the mercury dipping electrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil. In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals. Davy demonstrated his telegraph system in Regent's Park in 1837 and was granted a patent on 4 July 1838. Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused with potassium iodide and calcium hypochlorite.
First working systems
The first working telegraph was built by the English inventor Francis Ronalds in 1816 and used static electricity. At the family home on Hammersmith Mall, he set up a complete subterranean system in a long trench as well as a long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet, and electrical impulses sent along the wire were used to transmit messages. When he offered his invention to the Admiralty in July 1816, it was rejected as "wholly unnecessary". His account of the scheme and the possibilities of rapid global communication in Descriptions of an Electrical Telegraph and of some other Electrical Apparatus was the first published work on electric telegraphy and even described the risk of signal retardation due to induction. Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later.
The Schilling telegraph, invented by Baron Schilling von Canstatt in 1832, was an early needle telegraph. It had a transmitting device that consisted of a keyboard with 16 black-and-white keys. These served for switching the electric current. The receiving instrument consisted of six galvanometers with magnetic needles, suspended from silk threads. The two stations of Schilling's telegraph were connected by eight wires; six were connected with the galvanometers, one served for the return current and one for a signal bell. When at the starting station the operator pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations which corresponded to the letters or numbers. Pavel Schilling subsequently improved its apparatus by reducing the number of connecting wires from eight to two.
On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures from Nicholas I of Russia. Schilling's telegraph was tested on an experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg, and was approved for a telegraph between the imperial palace at Peterhof and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Nicholas I to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base.
In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen, installed a wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line.
At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss's laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany.
Gauss was convinced that this communication would be of help to his kingdom's towns. Later in the same year, instead of a voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. In 1838, Steinheil installed a telegraph along the Nuremberg–Fürth railway line, built in 1835 as the first German railroad, which was the first earth-return telegraph put into service.
By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters.
Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet.
On 24 May 1844, Morse sent to Vail the historic first message "WHAT HATH GOD WROUGHT" from the Capitol in Washington to the old Mt. Clare Depot in Baltimore.
Commercial telegraphy
Cooke and Wheatstone system
The first commercial electrical telegraph was the Cooke and Wheatstone system. A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway over the line from Paddington station to West Drayton in 1838. This was a five-needle, six-wire system, and had the major advantage of displaying the letter being sent, so operators did not need to learn a code. The insulation failed on the underground cables between Paddington and West Drayton, and when the line was extended to Slough in 1843, the system was converted to a one-needle, two-wire configuration with uninsulated wires on poles. The cost of installing wires was ultimately more economically significant than the cost of training operators. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were in use at the end of the nineteenth century; some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financier John Lewis Ricardo and Cooke.
Wheatstone ABC telegraph
Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would advance the pointers at both ends by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century.
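The stepping behaviour described above can be summarised in a toy simulation. This is a sketch of the principle only, with an invented dial layout, not a reconstruction of the actual instrument:

# Toy sketch of the Wheatstone ABC stepping principle: each half-cycle of
# magneto current advances both pointers one position until the depressed
# key is reached. The dial layout below is invented for illustration.
DIAL = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ.,:?")  # 26 letters plus four punctuation marks

def send_letter(pos, key):
    # Both pointers advance in lockstep, one step per half-cycle; when the
    # pointer reaches the depressed key, the magneto is disconnected.
    while DIAL[pos] != key:
        pos = (pos + 1) % len(DIAL)
    return pos

pos = 0                      # both dials start from the same reference position
received = []
for ch in "HELLO":
    pos = send_letter(pos, ch)
    received.append(DIAL[pos])   # the indicator pointer rests on the sent letter
print("".join(received))         # -> HELLO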
Morse system
The Morse system uses a single wire between offices. At the sending station, an operator taps on a switch called a telegraph key, spelling out text messages in Morse code. Originally, the armature was intended to make marks on paper tape, but operators learned to interpret the clicks and it was more efficient to write down the message directly.
In 1851, a conference in Vienna of the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, and was based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code, which was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, so international messages required retransmission in both directions.
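As an illustration of coded rhythmic signalling, the sketch below encodes the historic first message quoted above in International Morse code (the 1844 original was sent in American Morse; only the letters needed here are tabulated, the full alphabet is analogous):

# Encode a message in International Morse code (subset of letters only).
MORSE = {
    "A": ".-",  "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-",   "U": "..-", "W": ".--",
}

def encode(text):
    # letters separated by spaces, words separated by " / "
    return " / ".join(
        " ".join(MORSE[c] for c in word) for word in text.upper().split()
    )

print(encode("WHAT HATH GOD WROUGHT"))
# -> .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -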
In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844. The overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express.
Foy–Breguet system
France was slow to adopt the electrical telegraph, because of the extensive optical telegraph system built during the Napoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs, which had no exposed hardware between stations. The Foy-Breguet telegraph was eventually adopted. This was a two-needle system using two signal wires, but it displayed its signals in a way quite different from other needle telegraphs. The needles made symbols similar to the Chappe optical system symbols, making them more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system.
Expansion
Besides expanding rapidly along the railways, telegraphs soon spread into the field of mass communication, with instruments being installed in post offices. The era of mass personal communication had begun. Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, national systems were in operation in major countries.
The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became the Western Union Telegraph Company. Although many countries had telegraph networks, there was no worldwide interconnection. Messages by post were still the primary means of communication to countries outside Europe.
Telegraphy was introduced in Central Asia during the 1870s.
Telegraphic improvements
A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development of telegraphese.
The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute.
In 1846, Alexander Bain patented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court.
For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand.
Royal Earl House developed and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver, and followed this up with a steam-powered version in 1852. Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour.
David Edward Hughes invented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world.
The next improvement was the Baudot code of 1874. French engineer Émile Baudot patented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute.
By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (in Morse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute.
Teleprinters
An early successful teleprinter was invented by Frederick G. Creed. In Glasgow he created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper contents.
With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts", "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures. In 1901, Baudot's code was modified by Donald Murray.
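The shift mechanism can be illustrated with a toy five-bit code. The code assignments below are invented for illustration and do not reproduce the actual ITA tables:

# Toy sketch of a five-bit code with "letters" and "figures" shifts, in the
# spirit of Baudot/Murray codes. All bit assignments here are illustrative.
LTRS, FIGS = 0b11111, 0b11011                           # shift-control codes

LETTERS = {0b00011: "A", 0b11001: "B", 0b01110: "C"}    # invented assignments
FIGURES = {0b00011: "1", 0b11001: "2", 0b01110: "3"}

def decode(codes):
    table, out = LETTERS, []
    for c in codes:
        if c == LTRS:
            table = LETTERS          # an explicit code switches to letters shift
        elif c == FIGS:
            table = FIGURES          # ... or to figures shift
        else:
            out.append(table[c])     # same 5 bits, meaning depends on the shift
    return "".join(out)

# The same five-bit code decodes differently under each shift:
print(decode([0b00011, FIGS, 0b00011, LTRS, 0b00011]))   # -> A1A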
In the 1930s, teleprinters were produced by Teletype in the US, Creed in Britain and Siemens in Germany.
By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialling to connect teletypewriters. The resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-style pulse dialling for circuit switching, and then sent data using ITA2. This "type A" Telex routing functionally automated message routing.
The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government.
At the rate of 45.45 (±0.5%) baud – considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication.
Automatic teleprinter exchange service was introduced into Canada by CPR Telegraphs and CN Telegraph in July 1957 and in 1958, Western Union started to build a Telex network in the United States.
The harmonic telegraph
The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included the duplex and the quadruplex which allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, including Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell.
One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form of frequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators.
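The principle can be sketched numerically: two on-off keyed tones share one line and are separated again by frequency at the receiver. The frequencies, keying rhythms, and crude tone detector below are all illustrative, not historical values:

# Numerical sketch of frequency-division multiplexing in the spirit of the
# harmonic telegraph: two keyed tones share one "wire" and are separated by
# projecting onto each carrier frequency at the receiving end.
import numpy as np

fs = 8000                                   # sample rate, Hz
t = np.arange(fs) / fs                      # one second of samples
f1, f2 = 400.0, 700.0                       # two "resonator" frequencies (illustrative)

key1 = (t % 0.5) < 0.25                     # on-off keying rhythm for channel 1
key2 = (t % 0.2) < 0.10                     # rhythm for channel 2

line = key1 * np.sin(2 * np.pi * f1 * t) + key2 * np.sin(2 * np.pi * f2 * t)

def detect(signal, f, block=400):           # 50 ms blocks at fs = 8000
    # Crude "resonator": project each block of the line signal onto a tone at f.
    ref = np.sin(2 * np.pi * f * np.arange(block) / fs)
    blocks = signal[: len(signal) // block * block].reshape(-1, block)
    return (np.abs(blocks @ ref) / block) > 0.1   # True where that key was down

print(detect(line, f1).astype(int))         # recovers channel 1's rhythm
print(detect(line, f2).astype(int))         # recovers channel 2's rhythm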
With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to the invention of the telephone. (While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.)
Oceanic telegraph cables
Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way of submarine communications cables was first proposed. One of the primary technical challenges was to sufficiently insulate the submarine cable to prevent the electric current from leaking out into the water. In 1842, the Scottish surgeon William Montgomerie introduced gutta-percha, the adhesive juice of the Palaquium gutta tree, to Europe. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. Gutta-percha was used as insulation on a wire laid across the Rhine between Deutz and Cologne. In 1849, C. V. Walker, electrician to the South Eastern Railway, submerged a wire coated with gutta-percha off the coast at Folkestone, which was tested successfully.
John Watkins Brett, an engineer from Bristol, sought and obtained permission from Louis-Philippe in 1847 to establish telegraphic communication between France and England. The first undersea cable was laid in 1850, connecting the two countries and was followed by connections to Ireland and the Low Countries.
The Atlantic Telegraph Company was formed in London in 1856 to undertake the construction of a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. John Pender, one of the men on the Great Eastern, later founded several telecommunications companies, primarily laying cables between Britain and Southeast Asia. Earlier transatlantic submarine cable installations had been attempted in 1857, 1858 and 1865. The 1858 cable operated only intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form the Eastern Telegraph Company in 1872.) The HMS Challenger expedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables.
Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reports from the rest of the world. The telegraph across the Pacific was completed in 1902, finally encircling the world.
From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent.
Cable and Wireless Company
Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder, although the name was only adopted in 1934. It was formed from successive mergers including:
The Falmouth, Malta, Gibraltar Telegraph Company
The British Indian Submarine Telegraph Company
The Marseilles, Algiers and Malta Telegraph Company
The Eastern Telegraph Company
The Eastern Extension Australasia and China Telegraph Company
The Eastern and Associated Telegraph Companies
Telegraphy and longitude
The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other.
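The arithmetic is simple enough to state as a worked example; the sketch below contains nothing beyond the 360°/24 h conversion just described:

# Longitude from a difference in local solar time: the Earth turns 360
# degrees in 24 hours, so one hour of time is 15 degrees of longitude and
# one second of time is 15 seconds of arc.
DEG_PER_HOUR = 360 / 24                     # = 15 degrees per hour

def longitude_diff_deg(time_diff_hours):
    """Longitude difference implied by a difference in local solar time."""
    return time_diff_hours * DEG_PER_HOUR

print(longitude_diff_deg(1.0))              # 15.0 degrees for one hour
print(longitude_diff_deg(1 / 3600) * 3600)  # one second of time = 15 seconds of arc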
The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Samuel Morse in 1837, and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore. The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America and the world, and as technical developments improved accuracy and productivity.
The "telegraphic longitude net" soon became worldwide. Transatlantic links between Europe and North America were established in 1866 and 1870. The US Navy extended observations into the West Indies and Central and South America with an additional transatlantic link from South America to Lisbon between 1874 and 1890. British, Russian and US observations created a chain from Europe through Suez, Aden, Madras, Singapore, China and Japan, to Vladivostok, thence to Saint Petersburg and back to Western Europe.
Australia's telegraph network was linked to Singapore's via Java in 1871, and the net circled the globe in 1902 with the connection of the Australia and New Zealand networks to Canada's via the All Red Line. The two determinations of longitude, one transmitted from east to west and the other from west to east, agreed within one second of arc (1/15 of a second of time, less than 30 metres).
Telegraphy in war
The ability to send telegrams brought obvious advantages to those conducting war. Secret messages were encoded, so interception alone would not be sufficient for the opposing side to gain an advantage. There were also geographical constraints on intercepting the telegraph cables that improved security; however, once radio telegraphy was developed, interception became far more widespread.
Crimean War
The Crimean War was one of the first conflicts to use telegraphs and was one of the first to be documented extensively. In 1854, the government in London created a military Telegraph Detachment for the Army commanded by an officer of the Royal Engineers. It was to comprise twenty-five men from the Royal Corps of Sappers & Miners trained by the Electric Telegraph Company to construct and work the first field electric telegraph.
Journalistic recording of the war was provided by William Howard Russell (writing for The Times newspaper) with photographs by Roger Fenton. News from war correspondents kept the public of the nations involved in the war informed of the day-to-day events in a way that had not been possible in any previous war. After the French extended their telegraph lines to the coast of the Black Sea in late 1854, war news began reaching London in two days. When the British laid an underwater cable to the Crimean peninsula in April 1855, news reached London in a few hours. These prompt daily news reports energised British public opinion on the war, which brought down the government and led to Lord Palmerston becoming prime minister.
American Civil War
During the American Civil War the telegraph proved its value as a tactical, operational, and strategic communication medium and an important contributor to Union victory. By contrast, the Confederacy failed to make effective use of the South's much smaller telegraph network. Prior to the war, telegraph systems were primarily used in the commercial sector. Government buildings were not interconnected with telegraph lines, but relied on runners to carry messages back and forth. Before the war the government saw no need to connect lines within city limits; however, it did see the use of connections between cities. As the hub of government, Washington, D.C. had the most connections, but there were only a few lines running north and south out of the city. It was not until the Civil War that the government saw the true potential of the telegraph system. Soon after the shelling of Fort Sumter, the South cut telegraph lines running into D.C., which put the city in a state of panic because of fears of an immediate Southern invasion.
Within six months of the start of the war, the U.S. Military Telegraph Corps (USMT) had already laid substantial mileage of line. By war's end it had laid roughly 13,000 miles in total, 8,000 for military and 5,000 for commercial use, and had handled approximately 6.5 million messages. The telegraph was not only important for communication within the armed forces, but also in the civilian sector, helping political leaders to maintain control over their districts.
Even before the war, the American Telegraph Company censored suspect messages informally to block aid to the secession movement. During the war, Secretary of War Simon Cameron, and later Edwin Stanton, wanted control over the telegraph lines to maintain the flow of information. Early in the war, one of Stanton's first acts as Secretary of War was to move telegraph lines from ending at McClellan's headquarters to terminating at the War Department. Stanton himself said "[telegraphy] is my right arm". Telegraphy assisted Northern victories, including the Battle of Antietam (1862), the Battle of Chickamauga (1863), and Sherman's March to the Sea (1864).
The telegraph system still had its flaws. The USMT, while the main source of telegraphers and cable, was still a civilian agency. Most operators were first hired by the telegraph companies and then contracted out to the War Department. This created tension between generals and their operators. One source of irritation was that USMT operators did not have to follow military authority. Usually they performed without hesitation, but they were not required to, so Albert Myer created a U.S. Army Signal Corps in February 1863. As the new head of the Signal Corps, Myer tried to get all telegraph and flag signaling under his command, and therefore subject to military discipline. After creating the Signal Corps, Myer pushed to further develop new telegraph systems. While the USMT relied primarily on civilian lines and operators, the Signal Corp's new field telegraph could be deployed and dismantled faster than USMT's system.
First World War
During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide. The British government censored telegraph cable companies in an effort to root out espionage and restrict financial transactions with Central Powers nations. British access to transatlantic cables and its codebreaking expertise led to the Zimmermann Telegram incident that contributed to the US joining the war. Despite British acquisition of German colonies and expansion into the Middle East, debt from the war led to Britain's control over telegraph cables to weaken while US control grew.
Second World War
World War II revived the 'cable war' of 1914–1918. In 1939, German-owned cables across the Atlantic were cut once again, and, in 1940, Italian cables to South America and Spain were cut in retaliation for Italian action against two of the five British cables linking Gibraltar and Malta. Electra House, Cable & Wireless's head office and central cable station, was damaged by German bombing in 1941.
Resistance movements in occupied Europe sabotaged communications facilities such as telegraph lines, forcing the Germans to use wireless telegraphy, which could then be intercepted by Britain.
The Germans developed a highly complex teleprinter attachment (German: Schlüssel-Zusatz, "cipher attachment") that was used for enciphering telegrams, using the Lorenz cipher, between German High Command (OKW) and the army groups in the field. These contained situation reports, battle plans, and discussions of strategy and tactics. Britain intercepted these signals, diagnosed how the encrypting machine worked, and decrypted a large amount of teleprinter traffic.
End of the telegraph era
In America, the end of the telegraph era can be associated with the fall of the Western Union Telegraph Company. Western Union was the leading telegraph provider in America and was seen as the chief competitor to the National Bell Telephone Company. Western Union and Bell were both invested in telegraphy and telephone technology. Western Union allowed Bell to gain the advantage in telephone technology because its upper management failed to foresee that the telephone would surpass the then-dominant telegraph system. Western Union soon lost the legal battle over its telephone patent rights. This led to Western Union accepting a lesser position in the telephone competition, which in turn hastened the decline of the telegraph.
While the telegraph was not the focus of the legal battles that occurred around 1878, the companies affected by them were the main powers of telegraphy at the time. Western Union thought that the agreement of 1878 would solidify telegraphy as the long-range communication of choice. However, having misjudged telegraphy's future and signed poor contracts, Western Union found itself declining. AT&T acquired working control of Western Union in 1909 but relinquished it in 1914 under threat of antitrust action. AT&T bought Western Union's electronic mail and Telex businesses in 1990.
Although commercial "telegraph" services are still available in many countries, transmission is usually done via a computer network rather than a dedicated wired connection.
| Technology | Telecommunications | null |
9476 | https://en.wikipedia.org/wiki/Electron | Electron | The electron (e⁻, or β⁻ in nuclear reactions) is a subatomic particle with a negative one elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated.
Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators.
Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside them allows the composition of the two, known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.
In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge "electron" in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment.
Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
History
Discovery of effect of electric force
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the Neo-Latin term electrica, to refer to those substances with a property similar to that of amber which attract small objects after being rubbed. Both electric and electricity are derived from the Latin electrum (also the root of the alloy of the same name), which came from the Greek word for amber, ēlektron (ἤλεκτρον).
Discovery of two kinds of charges
In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repelled by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.
Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron. The word electron is a combination of the words electric and ion. The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.
Discovery of free electrons outside matter
While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode; and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. Furthermore, he also discovered that these rays are deflected by magnets just like lines of current.
In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons. Goldstein also experimented with double cathodes and hypothesized that one ray may repulse another, although he didn't believe that any particles might be involved.
During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter, in which the mean free path of the particles is so long that collisions may be ignored.
In 1883, the not yet well-known German physicist Heinrich Hertz tried to prove that cathode rays are electrically neutral and got what he interpreted as a confident absence of deflection in an electrostatic, as opposed to a magnetic, field. However, as J. J. Thomson explained in 1897, Hertz placed the deflecting electrodes in a highly-conductive area of the tube, resulting in a strong screening effect close to their surface.
The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct.
In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms.
In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. By 1899 he showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. Thomson measured m/e for cathode ray "corpuscles", and made good estimates of the charge e, leading to a value for the mass m about 1,400 times smaller than that of the least massive ion known: hydrogen. In the same year Emil Wiechert and Walter Kaufmann also calculated the e/m ratio but did not take the step of interpreting their results as showing a new particle, while J. J. Thomson would subsequently give estimates for the electron charge and mass as well in 1899.
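The charge-to-mass measurement can be illustrated with a minimal sketch of the crossed-field method: balancing electric and magnetic deflection gives the beam velocity v = E/B, and the curvature under the magnetic field alone then yields e/m. The field and radius values below are illustrative, not Thomson's actual data:

# Crossed-field estimate of e/m for cathode rays (illustrative numbers).
E = 2.0e4        # electric field, V/m (illustrative)
B = 1.0e-3       # magnetic flux density, T (illustrative)
r = 0.114        # radius of curvature of the beam, m (illustrative)

v = E / B                    # balance condition e*E = e*v*B gives the velocity
e_over_m = v / (B * r)       # from m*v**2/r = e*v*B

print(f"v = {v:.3e} m/s, e/m = {e_over_m:.3e} C/kg")
# e/m comes out near 1.76e11 C/kg, vastly larger than for a hydrogen ion,
# implying a particle far lighter than any atom.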
The name "electron" was adopted for these particles by the scientific community, mainly due to the advocation by G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered).
The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.
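The principle of the oil-drop measurement reduces to a force balance, sketched below with illustrative numbers: a droplet held stationary satisfies qE = mg, and the inferred charges cluster at integer multiples of the elementary charge:

# Oil-drop balance condition (illustrative droplet, not Millikan's data).
g = 9.81                         # gravitational acceleration, m/s^2
E_field = 3.0e5                  # applied electric field, V/m (illustrative)
m_drop = 9.8e-15                 # droplet mass, kg (illustrative)

q = m_drop * g / E_field         # charge needed to hold the droplet stationary
e = 1.602e-19                    # modern value of the elementary charge, C
print(f"q = {q:.3e} C = {q/e:.2f} e")   # close to an integer multiple of e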
Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.
Atomic theory
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.
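Bohr's quantized orbits give energies Eₙ = −13.6 eV/n², so spectral lines follow from differences between these levels. A quick check of the visible Balmer lines, a sketch using modern constants:

# Hydrogen spectral lines from the Bohr/Rydberg energy levels.
H_RYDBERG_EV = 13.605693          # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.841984            # h*c in eV*nm

def line_nm(n_upper, n_lower):
    dE = H_RYDBERG_EV * (1 / n_lower**2 - 1 / n_upper**2)   # photon energy, eV
    return HC_EV_NM / dE                                    # wavelength, nm

for n in (3, 4, 5):
    print(n, "->", 2, f"{line_nm(n, 2):.1f} nm")
# prints ~656.1, 486.0, 433.9 nm: the visible Balmer lines H-alpha, H-beta, H-gamma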
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law.
In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.
Quantum mechanics
In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered that the interference effect was produced when a beam of electrons was passed through thin celluloid foils and later metal films; American physicists Clinton Davisson and Lester Germer observed the same effect in the reflection of electrons from a crystal of nickel. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned.
De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation could be used to predict the probability of finding an electron near a position, especially for states in which the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first being Heisenberg's, in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron, the Dirac equation, which is consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of the positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. The difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.
Particle accelerators
With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.
Confinement of individual electrons
Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperatures ranging from −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single-particle formalism, by replacing its mass with the effective-mass tensor.
Characteristics
Classification
In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions because they all have half-odd integer spin; the electron has spin 1/2.
Fundamental properties
The invariant mass of an electron is approximately 9.109×10⁻³¹ kg, or 5.486×10⁻⁴ daltons. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.
Electrons have an electric charge of −1.602×10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by e⁻, and the positron is symbolized by e⁺.
The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to 9.274×10⁻²⁴ J/T. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.
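For concreteness, a minimal Python sketch (standard CODATA constants; illustrative only) reproducing the Bohr magneton and the spin magnitude quoted above:

import math

e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

mu_B = e * hbar / (2 * m_e)       # Bohr magneton, e*hbar/(2*m_e)
S = math.sqrt(0.5 * 1.5) * hbar   # |S| = sqrt(s(s+1))*hbar for s = 1/2
print(f"mu_B = {mu_B:.4e} J/T")   # ~9.2740e-24 J/T
print(f"|S| = {S:.4e} J*s")       # (sqrt(3)/2)*hbar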
The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters.
An upper bound of 10⁻¹⁸ meters on the electron radius can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6×10²⁸ years, at a 90% confidence level.
Quantum properties
As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.
The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.
Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.
In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.
Virtual particles
In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10⁻¹⁶ eV·s. Thus, for a virtual electron, Δt is at most 1.3×10⁻²¹ s.
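The quoted bound follows directly from the uncertainty relation; a minimal Python sketch of the arithmetic (illustrative only):

# Virtual-electron lifetime bound: Delta_t <= hbar / Delta_E, taking
# Delta_E = m_e c^2, the energy "borrowed" to create the electron.
hbar_eVs = 6.582119569e-16   # reduced Planck constant, eV*s
m_e_c2_eV = 0.51099895e6     # electron rest energy, eV

dt = hbar_eVs / m_e_c2_eV
print(f"Delta_t <= {dt:.2e} s")   # ~1.3e-21 s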
While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron.
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.
The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance.
Interaction
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.
Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation.
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength. For an electron, it has a value of 2.43×10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.
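A short Python sketch of the Compton-shift arithmetic (standard constants; the 90° scattering angle is an illustrative choice):

import math

h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

lambda_C = h / (m_e * c)                            # Compton wavelength
shift_90 = lambda_C * (1 - math.cos(math.pi / 2))   # Delta_lambda at 90 deg
print(f"lambda_C = {lambda_C:.4e} m")               # ~2.426e-12 m
print(f"shift at 90 degrees = {shift_90:.4e} m")
# Against visible light (~5e-7 m) a ~2.4e-12 m shift is negligible,
# which is why Thomson scattering applies at long wavelengths.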
The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = e²/(4πε₀ħc), which is approximately equal to 1/137.
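The same constants give the numerical value directly (a minimal Python sketch; illustrative only):

import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6e} ~ 1/{1 / alpha:.3f}")   # ~1/137.036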
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.
In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z0 boson exchange, and this is responsible for neutrino–electron elastic scattering.
Atoms and molecules
An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.
The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital, called paired electrons, cancel each other out.
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.
Conductivity
If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.
At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons.
Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.
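The drift-velocity claim is easy to check; in the Python sketch below the copper carrier density is a textbook value, and the 10 A current and 1 mm² cross-section are assumptions chosen only for illustration:

e = 1.602176634e-19   # elementary charge, C
n_cu = 8.5e28         # conduction electrons per m^3 in copper (textbook value)
I = 10.0              # current, A (assumed)
A = 1.0e-6            # wire cross-section, m^2, i.e. 1 mm^2 (assumed)

v_drift = I / (n_cu * A * e)   # v = I / (n*A*q)
print(f"v_drift = {v_drift * 1e3:.2f} mm/s")   # ~0.73 mm/s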
Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.
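The proportionality constant in the Wiedemann–Franz law, the Lorenz number, follows from fundamental constants alone; a one-line check in Python (illustrative sketch):

import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19   # elementary charge, C

L = (math.pi**2 / 3) * (k_B / e)**2   # kappa/(sigma*T) for a free-electron metal
print(f"Lorenz number L = {L:.3e} W*Ohm/K^2")   # ~2.44e-8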
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The first carries the spin and magnetic moment, the second carries the orbital location and the third carries the electrical charge.
Motion and energy
According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is:
Ke = (γ − 1)mec²
where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.
Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus.
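A quick check of both figures for the 51 GeV example (a minimal Python sketch; at this energy pc ≈ E, so λ = h/p ≈ hc/E):

m_e_c2_eV = 0.51099895e6   # electron rest energy, eV
hc_eV_m = 1.23984198e-6    # h*c in eV*m
E = 51e9                   # beam energy, eV

gamma = E / m_e_c2_eV      # Lorentz factor
lam = hc_eV_m / E          # ultrarelativistic de Broglie wavelength
print(f"gamma ~ {gamma:.2e}, lambda ~ {lam:.2e} m")   # ~1.0e5, ~2.4e-17 m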
Formation
The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron–electron pairs annihilated each other and emitted energetic photons:
γ + γ ↔ e⁺ + e⁻
An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.
For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron–positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
n → p + e⁻ + ν̄e
For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.
Roughly one million years after the Big Bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass–energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.
Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.
π⁻ → μ⁻ + ν̄μ
A muon, in turn, can decay to form an electron or positron.
μ⁻ → e⁻ + ν̄e + νμ
Observation
Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density; these produce energy emissions that can be detected by using radio telescopes.
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. When detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.
The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.
The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.
Plasma applications
Particle beams
Electron beams are used in welding. They allow energy densities up to 10⁷ W·cm⁻² across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.
Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.
Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products. Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant rise in temperature: intensive electron radiation causes a decrease of viscosity by many orders of magnitude and a stepwise decrease of its activation energy.
Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam, a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.
Imaging
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.
The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
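The 0.0037 nm figure can be reproduced from the relativistic de Broglie relation (a minimal Python sketch; the 100 kV value comes from the text, the constants are standard):

import math

h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
c = 2.99792458e8         # speed of light, m/s
V = 100e3                # accelerating voltage, volts

# lambda = h / sqrt(2*m_e*e*V*(1 + e*V/(2*m_e*c^2))); the bracketed
# term supplies the relativistic correction, non-negligible at 100 keV.
lam = h / math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))
print(f"lambda = {lam * 1e9:.4f} nm")   # ~0.0037 nm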
Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.
Other applications
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.
Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.
| Physical sciences | Physics | null |
9477 | https://en.wikipedia.org/wiki/Europium | Europium | Europium is a chemical element; it has symbol Eu and atomic number 63. Europium is a silvery-white metal of the lanthanide series that reacts readily with air to form a dark oxide coating. It is the most chemically reactive, least dense, and softest of the lanthanide elements. It is soft enough to be cut with a knife. Europium was isolated in 1901 and named after the continent of Europe. Europium usually assumes the oxidation state +3, like other members of the lanthanide series, but compounds having oxidation state +2 are also common. All europium compounds with oxidation state +2 are slightly reducing. Europium has no significant biological role and is relatively non-toxic compared to other heavy metals. Most applications of europium exploit the phosphorescence of europium compounds. Europium is one of the rarest of the rare-earth elements on Earth.
Etymology
Its discoverer, Eugène-Anatole Demarçay, named the element after the continent of Europe.
Characteristics
Physical properties
Europium is a ductile metal with a hardness similar to that of lead. It crystallizes in a body-centered cubic lattice. Some properties of europium are strongly influenced by its half-filled electron shell. Europium has the second lowest melting point and the lowest density of all lanthanides.
Chemical properties
Europium is the most reactive rare-earth element. It rapidly oxidizes in air, so that bulk oxidation of a centimeter-sized sample occurs within several days. Its reactivity with water is comparable to that of calcium, and the reaction is
2 Eu + 6 H2O → 2 Eu(OH)3 + 3 H2
Because of the high reactivity, samples of solid europium rarely have the shiny appearance of the fresh metal, even when coated with a protective layer of mineral oil. Europium ignites in air at 150 to 180 °C to form europium(III) oxide:
4 Eu + 3 O2 → 2 Eu2O3
Europium dissolves readily in dilute sulfuric acid to form pale pink solutions of [Eu(H2O)9]3+:
2 Eu + 3 H2SO4 + 18 H2O → 2 [Eu(H2O)9]3+ + 3 SO42− + 3 H2
Eu(II) vs. Eu(III)
Although usually trivalent, europium readily forms divalent compounds. This behavior is unusual for most lanthanides, which almost exclusively form compounds with an oxidation state of +3. The +2 state has an electron configuration 4f7 because the half-filled f-shell provides more stability. In terms of size and coordination number, europium(II) and barium(II) are similar. The sulfates of both barium and europium(II) are also highly insoluble in water. Divalent europium is a mild reducing agent, oxidizing in air to form Eu(III) compounds. In anaerobic, and particularly geothermal conditions, the divalent form is sufficiently stable that it tends to be incorporated into minerals of calcium and the other alkaline earths. This ion-exchange process is the basis of the "negative europium anomaly", the low europium content in many lanthanide minerals such as monazite, relative to the chondritic abundance. Bastnäsite tends to show less of a negative europium anomaly than does monazite, and hence is the major source of europium today. The development of easy methods to separate divalent europium from the other (trivalent) lanthanides made europium accessible even when present in low concentration, as it usually is.
Isotopes
Naturally occurring europium is composed of two isotopes, 151Eu and 153Eu, which occur in almost equal proportions; 153Eu is slightly more abundant (52.2% natural abundance). While 153Eu is stable, 151Eu was found in 2007 to be unstable to alpha decay, with a half-life of 4.62×10¹⁸ years, giving about one alpha decay per two minutes in every kilogram of natural europium. This value is in reasonable agreement with theoretical predictions. Besides the natural radioisotope 151Eu, 35 artificial radioisotopes have been characterized, the most stable being 150Eu with a half-life of 36.9 years, 152Eu with a half-life of 13.516 years, and 154Eu with a half-life of 8.593 years. All the remaining radioactive isotopes have half-lives shorter than 4.7612 years, and the majority of these have half-lives shorter than 12.2 seconds; the known isotopes of europium range from 130Eu to 170Eu. This element also has 17 meta states, with the most stable being 150mEu (t1/2=12.8 hours), 152m1Eu (t1/2=9.3116 hours) and 152m2Eu (t1/2=96 minutes).
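The "one alpha decay per two minutes per kilogram" figure can be sanity-checked from the half-life (a minimal Python sketch; the molar mass and 151Eu abundance are standard values):

import math

N_A = 6.02214076e23            # Avogadro constant, 1/mol
M_EU = 151.964                 # molar mass of natural europium, g/mol
F_151 = 0.478                  # natural abundance of 151Eu
T_HALF_S = 4.62e18 * 3.156e7   # alpha-decay half-life in seconds

n_atoms = 1000.0 / M_EU * N_A * F_151       # 151Eu atoms in 1 kg of Eu
activity = n_atoms * math.log(2) / T_HALF_S # decays per second
print(f"{activity:.1e} decays/s, i.e. one every {1 / activity:.0f} s")
# ~9e-3 decays/s, one roughly every two minutes.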
The primary decay mode for isotopes lighter than 153Eu is electron capture, and the primary mode for heavier isotopes is beta minus decay. The primary decay products before 153Eu are isotopes of samarium (Sm) and the primary products after are isotopes of gadolinium (Gd).
Europium as a nuclear fission product
Europium is produced by nuclear fission;
155Eu (half-life 4.7612 years) has a fission yield of 330 parts per million (ppm) for uranium-235 and thermal neutrons.
The fission product yields of europium isotopes are low near the top of the mass range for fission products.
As with other lanthanides, many isotopes of europium, especially those that have odd mass numbers or are neutron-poor like 152Eu, have high cross sections for neutron capture, often high enough to be neutron poisons.
151Eu is the beta decay product of samarium-151, but since this has a long decay half-life and short mean time to neutron absorption, most 151Sm instead ends up as 152Sm.
152Eu (half-life 13.516 years) and 154Eu (half-life 8.593 years) cannot be beta decay products because 152Sm and 154Sm are non-radioactive, but 154Eu is the only long-lived "shielded" nuclide, other than 134Cs, to have a fission yield of more than 2.5 parts per million fissions. A larger amount of 154Eu is produced by neutron activation of a significant portion of the non-radioactive 153Eu; however, much of this is further converted to 155Eu.
Occurrence
Europium is not found in nature as a free element. Many minerals contain europium, with the most important sources being bastnäsite, monazite, xenotime and loparite-(Ce). No europium-dominant minerals are known yet, despite a single find of a tiny possible Eu–O or Eu–O–C system phase in the Moon's regolith.
Depletion or enrichment of europium in minerals relative to other rare-earth elements is known as the europium anomaly. Europium is commonly included in trace element studies in geochemistry and petrology to understand the processes that form igneous rocks (rocks that cooled from magma or lava). The nature of the europium anomaly found helps reconstruct the relationships within a suite of igneous rocks. The median crustal abundance of europium is 2 ppm; values of the less abundant elements may vary with location by several orders of magnitude.
Divalent europium (Eu2+) in small amounts is the activator of the bright blue fluorescence of some samples of the mineral fluorite (CaF2). The reduction from Eu3+ to Eu2+ is induced by irradiation with energetic particles. The most outstanding examples of this originated around Weardale and adjacent parts of northern England; it was from the fluorite found here that fluorescence took its name in 1852, although it was not until much later that europium was determined to be the cause.
In astrophysics, the signature of europium in stellar spectra can be used to classify stars and inform theories of how or where a particular star was born. For instance, astronomers used the relative levels of europium to iron within the star LAMOST J112456.61+453531.3 to propose that the accretion process for the star occurred late.
Production
Europium is associated with the other rare-earth elements and is, therefore, mined together with them. Separation of the rare-earth elements occurs during later processing. Rare-earth elements are found in the minerals bastnäsite, loparite-(Ce), xenotime, and monazite in mineable quantities. Bastnäsite is a group of related fluorocarbonates, Ln(CO3)(F,OH). Monazite is a group of related orthophosphate minerals, LnPO4 (Ln denotes a mixture of all the lanthanides except promethium), loparite-(Ce) is an oxide, and xenotime is an orthophosphate (Y,Yb,Er,...)PO4. Monazite also contains thorium and yttrium, which complicates handling because thorium and its decay products are radioactive. For the extraction from the ore and the isolation of individual lanthanides, several methods have been developed. The choice of method is based on the concentration and composition of the ore and on the distribution of the individual lanthanides in the resulting concentrate. Roasting the ore, followed by acidic and basic leaching, is used mostly to produce a concentrate of lanthanides. If cerium is the dominant lanthanide, then it is converted from cerium(III) to cerium(IV) and then precipitated. Further separation by solvent extraction or ion exchange chromatography yields a fraction which is enriched in europium. This fraction is reduced with zinc, zinc amalgam, electrolysis or other methods that convert europium(III) to europium(II). Europium(II) reacts in a way similar to that of alkaline earth metals and therefore it can be precipitated as a carbonate or co-precipitated with barium sulfate. Europium metal is available through the electrolysis of a mixture of molten EuCl3 and NaCl (or CaCl2) in a graphite cell, which serves as cathode, using graphite as anode. The other product is chlorine gas.
A few large deposits account for much of the world's production. The Bayan Obo iron ore deposit in Inner Mongolia contains significant amounts of bastnäsite and monazite and is, with an estimated 36 million tonnes of rare-earth element oxides, the largest known deposit. The mining operations at the Bayan Obo deposit made China the largest supplier of rare-earth elements in the 1990s. Only 0.2% of the rare-earth element content is europium. The second major source of rare-earth elements between 1965 and its closure in the late 1990s was the Mountain Pass rare earth mine in California. The bastnäsite mined there is especially rich in the light rare-earth elements (La–Gd, Sc, and Y) and contains only 0.1% europium. Another large source of rare-earth elements is the loparite found on the Kola peninsula. Besides niobium, tantalum and titanium, it contains up to 30% rare-earth elements and is the largest source for these elements in Russia.
Compounds
Europium compounds tend to exist in a trivalent oxidation state under most conditions. Commonly these compounds feature Eu(III) bound by 6–9 oxygenic ligands. The Eu(III) sulfates, nitrates and chlorides are soluble in water or polar organic solvents. Lipophilic europium complexes often feature acetylacetonate-like ligands, such as EuFOD.
Halides
Europium metal reacts with all the halogens:
2 Eu + 3 X2 → 2 EuX3 (X = F, Cl, Br, I)
This route gives white europium(III) fluoride (EuF3), yellow europium(III) chloride (EuCl3), gray europium(III) bromide (EuBr3), and colorless europium(III) iodide (EuI3). Europium also forms the corresponding dihalides: yellow-green europium(II) fluoride (EuF2), colorless europium(II) chloride (EuCl2) (although it has a bright blue fluorescence under UV light), colorless europium(II) bromide (EuBr2), and green europium(II) iodide (EuI2).
Chalcogenides and pnictides
Europium forms stable compounds with all of the chalcogens, but the heavier chalcogens (S, Se, and Te) stabilize the lower oxidation state. Three oxides are known: europium(II) oxide (EuO), europium(III) oxide (Eu2O3), and the mixed-valence oxide Eu3O4, consisting of both Eu(II) and Eu(III). Otherwise, the main chalcogenides are europium(II) sulfide (EuS), europium(II) selenide (EuSe) and europium(II) telluride (EuTe): all three of these are black solids. Europium(II) sulfide is prepared by sulfiding the oxide at temperatures sufficiently high to decompose the Eu2O3:
Eu2O3 + 3 H2S → 2 EuS + 3 H2O + S
The main nitride of europium is europium(III) nitride (EuN).
History
Although europium is present in most of the minerals containing the other rare elements, due to the difficulties in separating the elements it was not until the late 1800s that the element was isolated. William Crookes observed the phosphorescent spectra of the rare elements including those eventually assigned to europium.
Europium was first found in 1892 by Paul Émile Lecoq de Boisbaudran, who obtained basic fractions from samarium-gadolinium concentrates which had spectral lines not accounted for by samarium or gadolinium. However, the discovery of europium is generally credited to French chemist Eugène-Anatole Demarçay, who suspected samples of the recently discovered element samarium were contaminated with an unknown element in 1896 and who was able to isolate it in 1901; he then named it europium.
When the europium-doped yttrium orthovanadate red phosphor was discovered in the early 1960s, and understood to be about to cause a revolution in the color television industry, there was a scramble for the limited supply of europium on hand among the monazite processors, as the typical europium content in monazite is about 0.05%. However, the Molycorp bastnäsite deposit at the Mountain Pass rare earth mine, California, whose lanthanides had an unusually high europium content of 0.1%, was about to come on-line and provide sufficient europium to sustain the industry. Prior to europium, the color-TV red phosphor was very weak, and the other phosphor colors had to be muted, to maintain color balance. With the brilliant red europium phosphor, it was no longer necessary to mute the other colors, and a much brighter color TV picture was the result. Europium has continued to be in use in the TV industry ever since as well as in computer monitors. Californian bastnäsite now faces stiff competition from Bayan Obo, China, with an even "richer" europium content of 0.2%.
Frank Spedding, celebrated for his development of the ion-exchange technology that revolutionized the rare-earth industry in the mid-1950s, once related the story of how he was lecturing on the rare earths in the 1930s, when an elderly gentleman approached him with an offer of a gift of several pounds of europium oxide. This was an unheard-of quantity at the time, and Spedding did not take the man seriously. However, a package duly arrived in the mail, containing several pounds of genuine europium oxide. The elderly gentleman had turned out to be Herbert Newby McCoy, who had developed a famous method of europium purification involving redox chemistry.
Applications
Relative to most other elements, commercial applications for europium are few and rather specialized. Almost invariably, its phosphorescence is exploited, either in the +2 or +3 oxidation state.
It is a dopant in some types of glass in lasers and other optoelectronic devices. Europium oxide (Eu2O3) is widely used as a red phosphor in television sets and fluorescent lamps, and as an activator for yttrium-based phosphors. Color TV screens contain between 0.5 and 1 g of europium oxide. Whereas trivalent europium gives red phosphors, the luminescence of divalent europium depends strongly on the composition of the host structure. UV to deep red luminescence can be achieved. The two classes of europium-based phosphor (red and blue), combined with the yellow/green terbium phosphors give "white" light, the color temperature of which can be varied by altering the proportion or specific composition of the individual phosphors. This phosphor system is typically encountered in helical fluorescent light bulbs. Combining the same three classes is one way to make trichromatic systems in TV and computer screens, but as an additive, it can be particularly effective in improving the intensity of red phosphor. Europium is also used in the manufacture of fluorescent glass, increasing the general efficiency of fluorescent lamps. One of the more common persistent after-glow phosphors besides copper-doped zinc sulfide is europium-doped strontium aluminate. Europium fluorescence is used to interrogate biomolecular interactions in drug-discovery screens. It is also used in the anti-counterfeiting phosphors in euro banknotes.
An application that has almost fallen out of use with the introduction of affordable superconducting magnets is the use of europium complexes, such as Eu(fod)3, as shift reagents in NMR spectroscopy. Chiral shift reagents, such as Eu(hfc)3, are still used to determine enantiomeric purity.
Europium compounds are used to label antibodies for sensitive detection of antigens in body fluids, a form of immunoassay. When these europium-labeled antibodies bind to specific antigens, the resulting complex can be detected with laser excited fluorescence.
Precautions
There are no clear indications that europium is particularly toxic compared to other heavy metals. Europium chloride, nitrate and oxide have been tested for toxicity: europium chloride shows an acute intraperitoneal LD50 toxicity of 550 mg/kg and the acute oral LD50 toxicity is 5000 mg/kg. Europium nitrate shows a slightly higher intraperitoneal LD50 toxicity of 320 mg/kg, while the oral toxicity is above 5000 mg/kg. The metal dust presents a fire and explosion hazard.
| Physical sciences | Chemical elements_2 | null |
9478 | https://en.wikipedia.org/wiki/Erbium | Erbium | Erbium is a chemical element; it has symbol Er and atomic number 68. A silvery-white solid metal when artificially isolated, natural erbium is always found in chemical combination with other elements. It is a lanthanide, a rare-earth element, originally found in the gadolinite mine in Ytterby, Sweden, which is the source of the element's name.
Erbium's principal uses involve its pink-colored Er3+ ions, which have optical fluorescent properties particularly useful in certain laser applications. Erbium-doped glasses or crystals can be used as optical amplification media, where Er3+ ions are optically pumped at around 980 or 1480 nm and then radiate light at around 1550 nm in stimulated emission. This process results in an unusually mechanically simple laser optical amplifier for signals transmitted by fiber optics. The 1550 nm wavelength is especially important for optical communications because standard single-mode optical fibers have minimal loss at this particular wavelength.
In addition to optical fiber amplifier-lasers, a large variety of medical applications (e.g. dermatology, dentistry) rely on the erbium ion's 2940 nm emission (see Er:YAG laser) when lit at another wavelength; this emission is strongly absorbed by water in tissues, making its effect very superficial. Such shallow tissue deposition of laser energy is helpful in laser surgery, and for the efficient production of steam that produces enamel ablation in common types of dental laser.
Characteristics
Physical properties
A trivalent element, pure erbium metal is malleable (or easily shaped), soft yet stable in air, and does not oxidize as quickly as some other rare-earth metals. Its salts are rose-colored, and the element has characteristic sharp absorption spectra bands in visible light, ultraviolet, and near infrared. Otherwise it looks much like the other rare earths. Its sesquioxide is called erbia. Erbium's properties are to a degree dictated by the kind and amount of impurities present. Erbium does not play any known biological role, but is thought to be able to stimulate metabolism.
Erbium is ferromagnetic below 19 K, antiferromagnetic between 19 and 80 K and paramagnetic above 80 K.
Erbium can form propeller-shaped atomic clusters Er3N, where the distance between the erbium atoms is 0.35 nm. Those clusters can be isolated by encapsulating them into fullerene molecules, as confirmed by transmission electron microscopy.
Like most rare-earth elements, erbium is usually found in the +3 oxidation state. However, it is possible for erbium to also be found in the 0, +1 and +2 oxidation states.
Chemical properties
Erbium metal retains its luster in dry air; however, it tarnishes slowly in moist air and burns readily to form erbium(III) oxide:
4 Er + 3 O2 → 2 Er2O3
Erbium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form erbium hydroxide:
2 Er (s) + 6 H2O (l) → 2 Er(OH)3 (aq) + 3 H2 (g)
Erbium metal reacts with all the halogens:
2 Er (s) + 3 F2 (g) → 2 ErF3 (s) [pink]
2 Er (s) + 3 Cl2 (g) → 2 ErCl3 (s) [violet]
2 Er (s) + 3 Br2 (g) → 2 ErBr3 (s) [violet]
2 Er (s) + 3 I2 (g) → 2 ErI3 (s) [violet]
Erbium dissolves readily in dilute sulfuric acid to form solutions containing hydrated Er(III) ions, which exist as rose red [Er(OH2)9]3+ hydration complexes:
2 Er (s) + 3 H2SO4 (aq) → 2 Er3+ (aq) + 3 SO42− (aq) + 3 H2 (g)
Isotopes
Naturally occurring erbium is composed of 6 stable isotopes, 162Er, 164Er, 166Er, 167Er, 168Er, and 170Er, with 166Er being the most abundant (33.503% natural abundance). 32 radioisotopes have been characterized, the most stable being 169Er with a half-life of 9.4 days; a handful of others, including 172Er, 160Er, 165Er and 171Er, have half-lives of between several hours and about two days. All of the remaining radioactive isotopes have half-lives of less than a few hours, and the majority of these have half-lives that are less than 4 minutes. This element also has 26 meta states, with the most stable being 167mEr, with a half-life of about 2.3 seconds.
The known isotopes of erbium range in mass number from 142 (142Er) to 177 (177Er). The primary decay mode before the most abundant stable isotope, 166Er, is electron capture, and the primary mode after is beta decay. The primary decay products before 166Er are element 67 (holmium) isotopes, and the primary products after are element 69 (thulium) isotopes.
165Er has been identified as useful for Auger therapy, as it decays via electron capture and emits no gamma radiation. It can also be used as a radioactive tracer to label antibodies and peptides, though it cannot be detected by any kind of imaging for the study of its biological distribution. The isotope can be produced either via 165Tm, which decays to 165Er, or by bombardment of 165Ho; the latter route is more convenient because 165Ho is a stable primordial isotope, whereas the former requires an initial supply of enriched erbium.
Compounds
Oxides
Erbium(III) oxide (also known as erbia) is the only known oxide of erbium, first isolated by Carl Gustaf Mosander in 1843, and first obtained in pure form in 1905 by Georges Urbain and Charles James. It has a cubic structure resembling the bixbyite motif. The Er3+ centers are octahedral. The formation of erbium oxide is accomplished by burning erbium metal, erbium oxalate or other oxyacid salts of erbium. Erbium oxide is insoluble in water and slightly soluble in heated mineral acids. The pink-colored compound is used as a phosphor activator and to produce infrared-absorbing glass.
Halides
Erbium(III) fluoride is a pinkish powder that can be produced by reacting erbium(III) nitrate and ammonium fluoride. It can be used to make infrared light-transmitting materials and up-converting luminescent materials, and is an intermediate in the production of erbium metal prior to its reduction with calcium. Erbium(III) chloride is a violet compound that can be formed by first heating erbium(III) oxide and ammonium chloride to produce the ammonium salt of the pentachloride ([NH4]2ErCl5) and then heating it in a vacuum at 350–400 °C. It forms monoclinic crystals of the AlCl3 type, with the space group C2/m. Erbium(III) chloride hexahydrate also forms monoclinic crystals, with the space group P2/n. In this compound, erbium is octa-coordinated, forming [Er(H2O)6Cl2]+ ions, with the isolated Cl− ions completing the structure.
Erbium(III) bromide is a violet solid. It is used, like other metal bromide compounds, in water treatment, chemical analysis and for certain crystal growth applications. Erbium(III) iodide is a slightly pink compound that is insoluble in water. It can be prepared by directly reacting erbium with iodine.
Organoerbium compounds
Organoerbium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric.
History
Erbium (for Ytterby, a village in Sweden) was discovered by Carl Gustaf Mosander in 1843. Mosander was working with a sample of what was thought to be the single metal oxide yttria, derived from the mineral gadolinite. He discovered that the sample contained at least two metal oxides in addition to pure yttria, which he named "erbia" and "terbia" after the village of Ytterby where the gadolinite had been found. Mosander was not certain of the purity of the oxides and later tests confirmed his uncertainty. Not only did the "yttria" contain yttrium, erbium, and terbium; in the ensuing years, chemists, geologists and spectroscopists discovered five additional elements: ytterbium, scandium, thulium, holmium, and gadolinium.
Erbia and terbia, however, were confused at this time. Marc Delafontaine, a Swiss spectroscopist, mistakenly switched the names of the two elements in his work separating the oxides erbia and terbia. After 1860, terbia was renamed erbia and after 1877 what had been known as erbia was renamed terbia. Fairly pure Er2O3 was independently isolated in 1905 by Georges Urbain and Charles James. Reasonably pure erbium metal was not produced until 1934 when Wilhelm Klemm and Heinrich Bommer reduced the anhydrous chloride with potassium vapor.
Occurrence
The concentration of erbium in the Earth's crust is about 2.8 mg/kg and in seawater 0.9 ng/L. (The concentration of less abundant elements may vary with location by several orders of magnitude, making relative abundances unreliable.) Like other rare earths, this element is never found as a free element in nature but is found in monazite and bastnäsite ores. It has historically been very difficult and expensive to separate rare earths from each other in their ores, but ion-exchange chromatography methods developed in the late 20th century have greatly reduced the cost of production of all rare-earth metals and their chemical compounds.
The principal commercial sources of erbium are from the minerals xenotime and euxenite, and most recently, the ion adsorption clays of southern China. Consequently, China has now become the principal global supplier of this element. In the high-yttrium versions of these ore concentrates, yttrium is about two-thirds of the total by weight, and erbia is about 4–5%. When the concentrate is dissolved in acid, the erbia liberates enough erbium ion to impart a distinct and characteristic pink color to the solution. This color behavior is similar to what Mosander and the other early workers in the lanthanides saw in their extracts from the gadolinite minerals of Ytterby.
Production
Crushed minerals are attacked by hydrochloric or sulfuric acid, which transforms insoluble rare-earth oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda (sodium hydroxide) to pH 3–4. Thorium precipitates out of solution as the hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of rare-earth metals. The salts are separated by ion exchange. In this process, rare-earth ions are sorbed onto a suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare-earth ions are then selectively washed out by a suitable complexing agent. Erbium metal is obtained from its oxide or salts by heating with calcium at about 1450 °C under an argon atmosphere.
Applications
Lasers and optics
A large variety of medical applications (e.g., dermatology, dentistry) utilize the erbium ion's 2940 nm emission (see Er:YAG laser), which is very strongly absorbed by water. Such shallow tissue deposition of laser energy is necessary for laser surgery, and the efficient production of steam for laser enamel ablation in dentistry. Common applications of erbium lasers in dentistry include ceramic cosmetic dentistry and removal of brackets in orthodontic braces; such laser applications have been noted as more time-efficient than performing the same procedures with rotary dental instruments.
Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which are widely used in optical communications. The same fibers can be used to create fiber lasers. In order to work efficiently, erbium-doped fiber is usually co-doped with glass modifiers/homogenizers, often aluminium or phosphorus. These dopants help prevent clustering of Er ions and transfer the energy more efficiently between excitation light (also known as optical pump) and the signal. Co-doping of optical fiber with Er and Yb is used in high-power Er/Yb fiber lasers. Erbium can also be used in erbium-doped waveguide amplifiers.
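At the level of a back-of-the-envelope sketch, the action of such an amplifier can be pictured as exponential growth of the signal along the doped fiber, as in the toy calculation below. All parameter values here are invented for illustration; a realistic model would also have to account for pump depletion, gain saturation and amplified spontaneous emission.

    # Toy model of signal growth in an erbium-doped fiber amplifier (EDFA).
    # A constant small-signal gain coefficient is assumed; the numbers are
    # illustrative only, not measured values for any real device.
    def edfa_output_dbm(input_dbm, gain_db_per_m, length_m):
        """Output power (dBm) of an idealized EDFA with uniform gain."""
        return input_dbm + gain_db_per_m * length_m

    # A weak -30 dBm signal near 1550 nm traversing 10 m of doped fiber
    # with 3 dB/m of gain emerges at 0 dBm, i.e. amplified a thousandfold.
    print(edfa_output_dbm(-30.0, 3.0, 10.0))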
Other applications
When added to vanadium as an alloy, erbium lowers hardness and improves workability. An erbium-nickel alloy Er3Ni has an unusually high specific heat capacity at liquid-helium temperatures and is used in cryocoolers; a mixture of 65% Er3Co and 35% Er0.9Yb0.1Ni by volume improves the specific heat capacity even more.
Erbium oxide has a pink color, and is sometimes used as a colorant for glass, cubic zirconia and porcelain. The glass is then often used in sunglasses and jewellery, or where infrared absorption is needed.
Erbium is used in nuclear technology in neutron-absorbing control rods, or as a burnable poison in nuclear fuel design.
Biological role and precautions
Erbium does not have a biological role, but erbium salts can stimulate metabolism. Humans consume 1 milligram of erbium a year on average. The highest concentration of erbium in humans is in the bones, but there is also erbium in the human kidneys and liver.
Erbium is slightly toxic if ingested, but erbium compounds are generally not toxic. Ionic erbium behaves similarly to ionic calcium, and can potentially bind to proteins such as calmodulin. When introduced into the body, nitrates of erbium, similar to other rare earth nitrates, increase triglyceride levels in the liver and cause leakage of hepatic (liver-related) enzymes to the blood, though they uniquely (along with gadolinium and dysprosium nitrates) increase RNA polymerase II activity. Ingestion and inhalation are the main routes of exposure to erbium and other rare earths, as they do not diffuse through unbroken skin.
Metallic erbium in dust form presents a fire and explosion hazard.
| Physical sciences | Chemical elements_2 | null |
9479 | https://en.wikipedia.org/wiki/Einsteinium | Einsteinium | Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. It is named after Albert Einstein and is a member of the actinide series and the seventh transuranium element.
Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253 (253Es; half-life 20.47 days), is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of einsteinium produced and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955.
Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of 253Es produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Studying its properties is difficult due to 253Es's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The longest-lived isotope of einsteinium, 252Es (half-life 471.7 days), would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form, as einsteinium-253.
Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion.
History
Einsteinium was first identified in December 1952 by Albert Ghiorso and co-workers at the University of California, Berkeley, in collaboration with the Argonne and Los Alamos National Laboratories, in the fallout from the Ivy Mike nuclear test. The test was done on November 1, 1952, at Enewetak Atoll in the Pacific Ocean and was the first successful test of a thermonuclear weapon. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 244Pu, which could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two beta decays.
^{238}_{92}U ->[\ce{+ 6(n,\gamma)}][-2\ \beta^-]{} ^{244}_{94}Pu
At the time, the multiple neutron absorption was thought to be an extremely rare process, but the identification of 244Pu indicated that still more neutrons could have been captured by the uranium, producing new elements heavier than californium.
Ghiorso and co-workers analyzed filter papers which had been flown through the explosion cloud on airplanes (the same sampling technique that had been used to discover 244Pu). Larger amounts of radioactive material were later isolated from coral debris of the atoll, and these were delivered to the U.S. The separation of suspected new elements was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperatures; fewer than 200 atoms of einsteinium were recovered in the end. Nevertheless, element 99, einsteinium, and in particular 253Es, could be detected via its characteristic high-energy alpha decay at 6.6 MeV. It was produced by the capture of 15 neutrons by uranium-238 nuclei followed by seven beta decays, and had a half-life of 20.5 days. Such multiple neutron absorption was made possible by the high neutron flux density during the detonation, so that newly generated heavy isotopes had plenty of available neutrons to absorb before they could disintegrate into lighter elements. Neutron capture initially raised the mass number without changing the atomic number of the nuclide, and the concomitant beta decays resulted in a gradual increase in the atomic number:
^{238}_{92}U ->[\ce{+15n}][6 \beta^-] ^{253}_{98}Cf ->[\beta^-] ^{253}_{99}Es
Some 238U atoms, however, could absorb two additional neutrons (for a total of 17), resulting in 255Es, as well as in the 255Fm isotope of another new element, fermium. The discovery of the new elements and the associated new data on multiple neutron capture were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions and competition with the Soviet Union in nuclear technologies. However, the rapid capture of so many neutrons would provide needed direct experimental confirmation of the r-process multi-neutron absorption needed to explain the cosmic nucleosynthesis (production) of certain heavy elements (heavier than nickel) in supernovas, before beta decay. Such a process is needed to explain the existence of many stable elements in the universe.
Meanwhile, isotopes of element 99 (as well as of new element 100, fermium) were produced in the Berkeley and Argonne laboratories, in a nuclear reaction between nitrogen-14 and uranium-238, and later by intense neutron irradiation of plutonium or californium:
^{252}_{98}Cf ->[\ce{(n,\gamma)}] ^{253}_{98}Cf ->[\beta^-][17.81 \ce{d}] ^{253}_{99}Es ->[\ce{(n,\gamma)}] ^{254}_{99}Es ->[\beta^-] ^{254}_{100}Fm
These results were published in several articles in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The Berkeley team also reported some results on the chemical properties of einsteinium and fermium. The Ivy Mike results were declassified and published in 1955.
In their discovery of elements 99 and 100, the American teams had competed with a group at the Nobel Institute for Physics, Stockholm, Sweden. In late 1953 – early 1954, the Swedish group succeeded in synthesizing light isotopes of element 100, in particular Fm, by bombarding uranium with oxygen nuclei. These results were also published in 1954. Nevertheless, the priority of the Berkeley team was generally recognized, as its publications preceded the Swedish article, and they were based on the previously undisclosed results of the 1952 thermonuclear explosion; thus the Berkeley team was given the privilege to name the new elements. As the effort which had led to the design of Ivy Mike was codenamed Project PANDA, element 99 had been jokingly nicknamed "Pandemonium" but the official names suggested by the Berkeley group derived from two prominent scientists, Einstein and Fermi: "We suggest for the name for the element with the atomic number 99, einsteinium (symbol E) after Albert Einstein and for the name for the element with atomic number 100, fermium (symbol Fm), after Enrico Fermi." Both Einstein and Fermi died between the time the names were originally proposed and when they were announced. The discovery of these new elements was announced by Albert Ghiorso at the first Geneva Atomic Conference held on 8–20 August 1955. The symbol for einsteinium was first given as "E" and later changed to "Es" by IUPAC.
Characteristics
Physical
Einsteinium is a synthetic, silvery, radioactive metal. In the periodic table, it is located to the right of the actinide californium, to the left of the actinide fermium and below the lanthanide holmium, with which it shares many similarities in physical and chemical properties. Its density of 8.84 g/cm3 is lower than that of californium (15.1 g/cm3) and is nearly the same as that of holmium (8.79 g/cm3), despite einsteinium being much heavier per atom than holmium. Einsteinium's melting point (860 °C) is also relatively low – below californium (900 °C), fermium (1527 °C) and holmium (1461 °C). Einsteinium is a soft metal, with a bulk modulus of only 15 GPa, one of the lowest among non-alkali metals.
Unlike the lighter actinides californium, berkelium, curium and americium, which crystallize in a double hexagonal close-packed structure at ambient conditions, einsteinium is believed to have a face-centered cubic (fcc) symmetry with the space group Fm3m and a lattice constant of about 575 pm. However, there is a report of room-temperature hexagonal einsteinium metal, which converted to the fcc phase upon heating to 300 °C.
The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of 253Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, due to the small size of available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. Thus, surface effects in small samples could reduce the melting point.
The metal is trivalent and has a noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under an atmosphere of reductant gas, for example H2O+HCl for EsOCl, so that the sample is partly regrown during its decomposition.
Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common isotope, 253Es, is available only once or twice a year in sub-milligram amounts – and self-contamination due to the rapid conversion of einsteinium to berkelium and then to californium, at a rate of about 3.3% per day:
^{253}_{99}Es ->[\alpha][20 \ce{d}] ^{249}_{97}Bk ->[\beta^-][314 \ce{d}] ^{249}_{98}Cf
Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time. Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties.
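The pace of this self-contamination follows directly from the half-lives involved. As a minimal illustration, the Python sketch below applies the Bateman equations to the 253Es → 249Bk → 249Cf chain, using the 20-day and 314-day half-lives quoted above and treating the far longer-lived 249Cf as effectively stable on this timescale.

    import math

    T_ES, T_BK = 20.0, 314.0                    # half-lives in days, from the chain above
    L_ES, L_BK = math.log(2) / T_ES, math.log(2) / T_BK

    def composition(t_days):
        """Atom fractions of Es, Bk and Cf after t days, starting from pure 253Es."""
        es = math.exp(-L_ES * t_days)
        bk = L_ES / (L_BK - L_ES) * (math.exp(-L_ES * t_days) - math.exp(-L_BK * t_days))
        cf = 1.0 - es - bk
        return es, bk, cf

    # After one day about 3.4% of the atoms have already left the Es form,
    # matching the roughly 3% per day conversion rate quoted in the text.
    print(composition(1.0))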
Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid helium to room temperature. The effective magnetic moments deduced for Es2O3 and EsF3 are the highest values among the actinides, and the corresponding Curie temperatures are 53 and 37 K.
Chemical
Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution, where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such a +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride.
Isotopes
Eighteen isotopes and four nuclear isomers are known for einsteinium, with mass numbers 240–257. All are radioactive; the most stable one, 252Es, has a half-life of 471.7 days. The next most stable isotopes are 254Es (half-life 275.7 days), 255Es (39.8 days), and 253Es (20.47 days). All the other isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of the isomers, the most stable is 254mEs, with a half-life of 39.3 hours.
Nuclear fission
Einsteinium has a high rate of nuclear fission that results in a low critical mass. This mass is 9.89 kilograms for a bare sphere of 254Es, and can be lowered to 2.9 kg by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kg with a 20-cm-thick reflector made of water. However, even this small critical mass far exceeds the total amount of einsteinium isolated so far, especially of the rare 254Es.
Natural occurrence
Due to the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could have been present on Earth at its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring uranium and thorium in the Earth's crust requires multiple neutron capture, an extremely unlikely event. Therefore, all einsteinium on Earth is produced in laboratories, high-power nuclear reactors, or nuclear testing, and exists only within a few years from the time of the synthesis.
The transuranic elements americium to fermium, including einsteinium, were once created in the natural nuclear fission reactor at Oklo, but any quantities produced then would have long since decayed away.
Einsteinium was theoretically observed in the spectrum of Przybylski's Star. However, the lead author of the studies finding einsteinium and other short-lived actinides in Przybylski's Star, Vera F. Gopka, admitted that "the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are not any atomic data for these lines except for their wavelengths (Sansonetti et al. 2004), enabling one to calculate their profiles with more or less real intensities." The signature spectra of einsteinium's isotopes have since been comprehensively analyzed experimentally (in 2021), though there is no published research confirming whether the theorized einsteinium signatures proposed to be found in the star's spectrum match the lab-determined results.
Synthesis and extraction
Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL), Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium (Z>96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, though the quantities produced at NIIAR are not widely reported. In a "typical processing campaign" at ORNL, tens of grams of curium are irradiated to produce decigram quantities of californium, milligrams of berkelium (Bk) and einsteinium and picograms of fermium.
The first microscopic sample of 253Es, weighing about 10 nanograms, was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later, starting from several kilograms of plutonium, with einsteinium yields (mostly 253Es) of 0.48 milligrams in 1967–1970 and 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities, however, refer to the integral amount in the target right after irradiation. Subsequent separation procedures reduced the amount of isotopically pure einsteinium roughly tenfold.
Laboratory synthesis
Heavy neutron irradiation of plutonium results in four major isotopes of einsteinium: 253Es (α-emitter; half-life 20.47 days, spontaneous fission half-life about 7×10^5 years); 254mEs (β-emitter, half-life 39.3 hours), 254Es (α-emitter, half-life 276 days) and 255Es (β-emitter, half-life 39.8 days). An alternative route involves bombardment of uranium-238 with high-intensity nitrogen or oxygen ion beams.
247Es (half-life 4.55 min) was produced by irradiating 241Am with carbon ions or 238U with nitrogen ions. The latter reaction was first realized in 1967 in Dubna, Russia, and the scientists involved were awarded the Lenin Komsomol Prize.
248Es was produced by irradiating 249Cf with deuterium ions. It mainly decays by electron capture to 248Cf, with a half-life of about 27 minutes, but also releases 6.87-MeV α-particles; the ratio of electron captures to α-particles is about 400.
249Es, 250Es, 251Es and 252Es were obtained by bombarding 249Bk with α-particles. One to four neutrons are released, so four different isotopes can be formed in one reaction.
^{249}_{97}Bk ->[+\alpha] ^{249,250,251,252}_{99}Es
253Es was produced by irradiating a 0.1–0.2 milligram 252Cf target with a thermal neutron flux of (2–5)×10^14 neutrons/(cm2·s) for 500–900 hours:
^{252}_{98}Cf ->[\ce{(n,\gamma)}] ^{253}_{98}Cf ->[\beta^-][17.81 \ce{d}] ^{253}_{99}Es
In 2020, scientists at ORNL created about 200 nanograms of 254Es, allowing some chemical properties of the element to be studied for the first time.
Synthesis in nuclear explosions
The analysis of the debris from the 10-megaton Ivy Mike nuclear test was part of a long-term project. One of its goals was studying the efficiency of production of transuranic elements in high-power nuclear explosions. The motive for these experiments was that synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful man-made neutron sources, providing densities of the order of 10^23 neutrons/cm2 within a microsecond, or about 10^29 neutrons/(cm2·s). In comparison, the flux of HFIR is about 5×10^15 neutrons/(cm2·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the mainland U.S. The laboratory received samples for analysis as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, none were found even after a series of megaton explosions conducted between 1954 and 1956 at the atoll.
The atmospheric results were supplemented by underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions in a confined space might give improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium-neptunium charge, but they were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Product isolation was problematic, as the explosions spread debris by melting and vaporizing the surrounding rocks at depths of 300–600 meters. Drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes.
Of the nine underground tests between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranics. Milligrams of einsteinium, which would normally take a year of irradiation in a high-power reactor to produce, were created within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only a minute fraction of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only a tiny fraction of the total charge. The amount of transuranic elements in this 500-kg batch was only 30 times higher than in a 0.4-kg rock picked up 7 days after the test, which showed the highly non-linear dependence of the transuranics yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after the explosion, so that the explosion would expel radioactive material from the epicenter through the shafts to collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds of kilograms of material, but with an actinide concentration 3 times lower than in samples obtained after drilling. Whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.
Though no new elements (except einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranics were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories.
Separation
The separation procedure for einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy-ion target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the amounts produced in such experiments are relatively low. The yields are much higher for reactor irradiation, but there the product is a mixture of various actinide isotopes, as well as lanthanides produced in nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure involving several repeated steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, 253Es, decays with a half-life of only 20 days to 249Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions.
Trivalent actinides can be separated from lanthanide fission products by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can then be identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant.
The 3+ actinides can also be separated via solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage of being free of organic complexing agents, as compared to the separation using a resin column.
Preparation of the metal
Einsteinium is highly reactive, so strong reducing agents are required to obtain the pure metal from its compounds. This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium:
EsF3 + 3 Li → Es + 3 LiF
However, owing to its low melting point and high rate of self-radiation damage, einsteinium has a higher vapor pressure than lithium fluoride. This makes this reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal:
Es2O3 + 2 La → 2 Es + La2O3
Chemical compounds
Oxides
Einsteinium(III) oxide (Es2O3) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples about 30 nanometers in size. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a particular Es2O3 phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide, with the Es3+ ion surrounded by a 6-coordinated group of O2− ions.
Halides
Einsteinium halides are known for the oxidation states +2 and +3. The most stable state is +3 for all halides from fluoride to iodide.
Einsteinium(III) fluoride (EsF3) can be precipitated from Es(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to expose Es(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature of 300–400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3), where the Es3+ ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prismatic arrangement.
Es(III) chloride (EsCl3) can be prepared by annealing Es(III) oxide in an atmosphere of dry hydrogen chloride vapor at about 500 °C for some 20 minutes. It crystallizes upon cooling at about 425 °C into an orange solid with a hexagonal structure of the UCl3 type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prismatic geometry. Einsteinium(III) bromide (EsBr3) is a pale-yellow solid with a monoclinic structure of the AlCl3 type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6).
The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen:
2 EsX3 + H2 → 2 EsX2 + 2 HX; X = F, Cl, Br, I
Einsteinium(II) chloride (EsCl2), einsteinium(II) bromide (EsBr2), and einsteinium(II) iodide (EsI2) have been produced and characterized by optical absorption, with no structural information available yet.
Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl3 + H2O/HCl to obtain EsOCl.
Organoeinsteinium compounds
Einsteinium's high radioactivity has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) into dogs. Einsteinium(III) was also incorporated into β-diketone chelate complexes, since analogous complexes with lanthanides had previously shown the strongest UV-excited luminescence among metallorganic compounds. When preparing einsteinium complexes, the Es3+ ions were diluted 1000-fold with Gd3+ ions. This allowed reducing the radiation damage so that the compounds did not disintegrate during the 20 minutes required for the measurements. The resulting luminescence from Es3+ was much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound that hindered efficient energy transfer from the chelate matrix to Es3+ ions. A similar conclusion was drawn for americium, berkelium and fermium.
Luminescence of Es3+ ions was, however, observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl) orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). The luminescence has a lifetime of several microseconds and a quantum yield below 0.1%. The relatively high non-radiative decay rates in Es3+, compared with the lanthanides, were associated with the stronger interaction of the f-electrons with the inner Es3+ electrons.
Applications
There is almost no use for any isotope of einsteinium outside basic scientific research aimed at the production of higher transuranium elements and superheavy elements.
In 1955, mendelevium was synthesized by irradiating a target consisting of about 10^9 atoms of 253Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting 253Es(α,n)256Md reaction yielded 17 atoms of the new element with atomic number 101.
The rare isotope 254Es is favored for production of superheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence 254Es was used as a target in the attempted synthesis of ununennium (element 119) in 1985, by bombarding it with calcium-48 ions at the superHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns.
{^{254}_{99}Es} + {^{48}_{20}Ca} -> {^{302}_{119}Uue^\ast} -> no\ atoms
254Es was used as the calibration marker in the chemical analysis spectrometer ("alpha-scattering surface analyzer") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface.
Safety
Most of the available einsteinium toxicity data is from research on animals. Upon ingestion by rats, only ~0.01% of it ends in the bloodstream. From there, about 65% goes to the bones, where it would remain for ~50 years if not for its radioactive decay, not to speak of the 3-year maximum lifespan of rats, 25% to the lungs (biological half-life ~20 years, though this is again rendered irrelevant by the short half-life of einsteinium), 0.035% to the testicles or 0.01% to the ovaries – where einsteinium stays indefinitely. About 10% of the ingested amount is excreted. The distribution of einsteinium over bone surfaces is uniform and is similar to that of plutonium.
| Physical sciences | Actinides | Chemistry |
9499 | https://en.wikipedia.org/wiki/Ethernet | Ethernet | Ethernet ( ) is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.
The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both cheaper and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s under development. The Ethernet standards include several wiring and signaling variants of the OSI physical layer.
Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.
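As a concrete illustration of this layout, the short Python sketch below parses the 14-byte header of an Ethernet II frame into its destination address, source address and EtherType. The frame bytes here are invented for the example, and real receivers additionally verify the trailing CRC-32 frame check sequence, usually in hardware.

    import struct

    def parse_ethernet(frame):
        # Ethernet II header: 6-byte destination MAC, 6-byte source MAC,
        # then a 2-byte EtherType identifying the payload protocol.
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        mac = lambda b: ":".join(f"{x:02x}" for x in b)
        return mac(dst), mac(src), ethertype, frame[14:]

    # A made-up broadcast frame carrying an IPv4 payload (EtherType 0x0800).
    frame = bytes.fromhex("ffffffffffff" + "020000000001" + "0800") + b"payload"
    print(parse_ethernet(frame)[:3])   # ('ff:ff:ff:ff:ff:ff', '02:00:00:00:00:01', 2048)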
Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.
History
Ethernet was developed at Xerox PARC between 1973 and 1974 as a means to allow Alto computers to communicate with each other. It was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation and was originally called the Alto Aloha Network. Metcalfe's idea was essentially to limit the Aloha-like signals inside a cable, instead of broadcasting into the air. The idea was first documented in a memo that Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely passive medium for the propagation of electromagnetic waves."
In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Ron Crane, Yogen Dalal, Robert Garner, Hal Murray, Roy Ogus, Dave Redell and John Shoch facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard. As part of that process Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published on September 30, 1980, as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (Digital Intel Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit EtherType field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983.
Ethernet initially competed with Token Ring and other proprietary protocols. Ethernet was able to adapt to market needs, shifting with 10BASE2 to inexpensive thin coaxial cable and, from 1990, to the now-ubiquitous twisted pair with 10BASE-T. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. In the 1980s, IBM's own PC Network product competed with Ethernet for the PC, and through the 1980s, LAN hardware, in general, was not common on PCs. However, in the mid to late 1980s, PC networking did become popular in offices and schools for printer and fileserver sharing, and among the many diverse competing LAN technologies of that decade, Ethernet was one of the most popular. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that Ethernet ports began to appear on some PCs and most workstations. This process was greatly sped up with the introduction of 10BASE-T and its relatively small modular connector, at which point Ethernet ports appeared even on low-end motherboards.
Since then, Ethernet technology has evolved to meet new bandwidth and market requirements. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is quickly replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year.
Standardization
In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The DIX group with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called Blue Book CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal.
Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24. In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft. Because the DIX proposal was most technically complete and because of the speedy action taken by ECMA which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985.
Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published in 1989.
Evolution
Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The multidrop coaxial cable was replaced with physical point-to-point links connected by Ethernet repeaters or switches.
Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with a globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations.
An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants.
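The demultiplexing step that makes frames self-identifying can be pictured as a dispatch table keyed on the EtherType, as in the illustrative Python fragment below. The handler functions are placeholders; the EtherType values themselves (0x0800 for IPv4, 0x0806 for ARP, 0x86DD for IPv6) are registered assignments.

    # Hand a received payload to the protocol module named by its EtherType.
    HANDLERS = {
        0x0800: lambda p: print("IPv4 packet,", len(p), "bytes"),
        0x0806: lambda p: print("ARP packet,", len(p), "bytes"),
        0x86DD: lambda p: print("IPv6 packet,", len(p), "bytes"),
    }

    def deliver(ethertype, payload):
        handler = HANDLERS.get(ethertype)
        if handler is not None:
            handler(payload)            # known protocol: dispatch to its module
        # unknown EtherType: the frame is silently dropped

    deliver(0x0800, bytes(20))          # prints "IPv4 packet, 20 bytes"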
Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card.
Shared medium
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The method used was similar to those used in radio systems, with the common cable providing the communication channel likened to the Luminiferous aether in 19th-century physics, and it was from this reference that the name Ethernet was derived.
Original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to every attached machine. A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable.
Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly.
Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active.
A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The lost data and re-transmission reduces throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 studied performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better.
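Collision recovery on a shared segment uses truncated binary exponential backoff: after the nth successive collision, a station waits a random number of slot times drawn from 0 to 2^min(n,10) − 1, and gives up after 16 failed attempts. The Python sketch below illustrates the rule; real Ethernet controllers implement it in hardware, and the 512-bit slot time shown applies to classic 10 Mbit/s operation.

    import random

    SLOT_BIT_TIMES = 512                     # slot time for 10 Mbit/s Ethernet

    def backoff_slots(collision_count):
        """Number of slot times to wait after the nth successive collision."""
        if collision_count > 16:
            raise RuntimeError("excessive collisions: frame is dropped")
        exponent = min(collision_count, 10)  # the range stops growing after 10
        return random.randint(0, 2 ** exponent - 1)

    # After a third successive collision a station waits 0..7 slot times.
    print(backoff_slots(3) * SLOT_BIT_TIMES, "bit times")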
In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full duplex mode of operation which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, switch and station can send and receive simultaneously, and therefore modern Ethernets are completely collision-free.
Repeaters and hubs
For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called Fibernet) using optical fiber were published by 1978.
Shared cable Ethernet is always hard to install in offices because its bus topology is in conflict with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted-pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s.
Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network.
Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed.
Bridging and switching
While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible.
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants.
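The learn-then-forward behavior just described can be captured in a few lines. The sketch below is a toy model of a transparent learning bridge, not any particular product's implementation; all names are illustrative.

```python
class LearningBridge:
    """Toy model of a transparent learning bridge (illustrative sketch)."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # source MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port          # learn from the source address
        if dst_mac in self.table:
            out = self.table[dst_mac]
            # Filter if the destination is on the arrival segment.
            return set() if out == in_port else {out}
        return self.ports - {in_port}          # unknown destination: flood

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa", "bb", in_port=1))  # "bb" unknown: floods {2, 3}
print(bridge.handle_frame("bb", "aa", in_port=2))  # "aa" learned: forwards {1}
```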
In 1989, Motorola Codex introduced their 6310 EtherSpan, and Kalpana introduced their EtherSwitch; these were examples of the first commercial Ethernet switches. Early switches such as these used cut-through switching, where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment. This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store and forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified, and only then is the packet forwarded. In modern network equipment, this process is typically done using application-specific integrated circuits, allowing packets to be forwarded at wire speed.
When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet). The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection.
Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding.
The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.
Advanced networking
Simple switched Ethernet networks, while a great improvement over repeater-based Ethernet, suffer from single points of failure, attacks that trick switches or hosts into sending data to a machine even if it is not intended for it, scalability and security issues with regard to switching loops, broadcast radiation, and multicast traffic.
Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices.
Advanced networking features also ensure port security, provide protection features such as MAC lockdown and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure, employ multilayer switching to route between different classes, and use link aggregation to add bandwidth to overloaded links and to provide some redundancy.
In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers.
Varieties
The Ethernet physical layer evolved over a considerable time span and encompasses coaxial, twisted pair and fiber-optic physical media interfaces, with speeds ranging from 1 Mbit/s to hundreds of gigabits per second. The first introduction of twisted-pair CSMA/CD was StarLAN, standardized as 802.3 1BASE5. While 1BASE5 had little market penetration, it defined the physical apparatus (wire, plug/jack, pin-out, and wiring plan) that would be carried over to 10BASE-T through 10GBASE-T.
The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively.
Fiber optic variants of Ethernet (that commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distance (tens of kilometers with some versions). In general, network protocol stack software will work similarly on all varieties.
Frame structure
In IEEE 802.3, a datagram is called a packet or frame. Packet is used to describe the overall transmission unit and includes the preamble, start frame delimiter (SFD) and carrier extension (if present). The frame begins after the start frame delimiter with a frame header featuring source and destination MAC addresses and the EtherType field giving either the protocol type for the payload protocol or the length of the payload. The middle section of the frame consists of payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit. Notably, Ethernet packets have no time-to-live field, leading to possible problems in the presence of a switching loop.
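As a concrete illustration of this layout, the Python sketch below assembles a minimal frame (destination and source MAC addresses, EtherType, payload) and appends a CRC-32 check value. Ethernet's FCS uses the same CRC-32 polynomial as zlib, though wire-level details such as bit ordering and the minimum frame size are glossed over here; the helper name is illustrative.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble destination, source, EtherType and payload, then append an FCS.
    (Preamble and SFD are omitted: they belong to the packet, not the frame.)"""
    body = dst + src + struct.pack("!H", ethertype) + payload
    fcs = zlib.crc32(body)           # same CRC-32 polynomial as the Ethernet FCS
    return body + struct.pack("<I", fcs)

frame = build_frame(b"\x11" * 6, b"\x22" * 6, 0x0800, b"hello")
body, (fcs,) = frame[:-4], struct.unpack("<I", frame[-4:])
print("FCS ok:", zlib.crc32(body) == fcs)   # receiver-side integrity check
```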
Autonegotiation
Autonegotiation is the procedure by which two connected devices choose common transmission parameters, e.g. speed and duplex mode. Autonegotiation was initially an optional feature, first introduced with 100BASE-TX (1995 IEEE 802.3u Fast Ethernet standard), and is backward compatible with 10BASE-T. The specification was improved in the 1998 release of IEEE 802.3. Autonegotiation is mandatory for 1000BASE-T and faster.
Error conditions
Switching loop
A switching loop or bridge loop occurs in computer networks when there is more than one Layer 2 (OSI model) path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms: as broadcasts and multicasts are forwarded by switches out every port, the switch or switches repeatedly rebroadcast the broadcast messages, flooding the network. Since the Layer 2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever.
A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches.
Jabber
A node that is sending longer than the maximum transmission window for an Ethernet packet is considered to be jabbering. Depending on the physical topology, jabber detection and remedy differ somewhat.
A medium attachment unit (MAU) is required to detect and stop an abnormally long transmission from the DTE (longer than 20–150 ms) in order to prevent permanent network disruption.
On an electrically shared medium (10BASE5, 10BASE2, 1BASE5), jabber can only be detected by each end node, stopping reception. No further remedy is possible.
A repeater/repeater hub uses a jabber timer that ends retransmission to the other ports when it expires. The timer runs for 25,000 to 50,000 bit times for 1 Mbit/s, 40,000 to 75,000 bit times for 10 and 100 Mbit/s, and 80,000 to 150,000 bit times for 1 Gbit/s. Jabbering ports are partitioned off the network until a carrier is no longer detected.
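Since a bit time is simply the reciprocal of the bit rate, these windows can be converted to wall-clock durations, as the short Python sketch below does. The table values come from the paragraph above; variable names are illustrative.

```python
# Jabber-timer windows converted from bit times to milliseconds.
# A bit time is 1/bitrate, so a fixed bit-time budget shrinks on faster links.
windows = {          # bit/s -> (min, max) timer in bit times
    1_000_000:     (25_000, 50_000),
    10_000_000:    (40_000, 75_000),
    100_000_000:   (40_000, 75_000),
    1_000_000_000: (80_000, 150_000),
}
for rate, (lo, hi) in windows.items():
    print(f"{rate / 1e6:>7.0f} Mbit/s: {lo / rate * 1e3:.3f}-{hi / rate * 1e3:.3f} ms")
```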
End nodes utilizing a MAC layer will usually detect an oversized Ethernet frame and cease receiving. A bridge/switch will not forward the frame.
A non-uniform frame size configuration in the network using jumbo frames may be detected as jabber by end nodes. Jumbo frames are not part of the official IEEE 802.3 Ethernet standard.
A packet detected as jabber by an upstream repeater and subsequently cut off has an invalid frame check sequence and is dropped.
Runt frames
Runts are packets or frames smaller than the minimum allowed size. They are dropped and not propagated.
| Technology | Networks | null |
9531 | https://en.wikipedia.org/wiki/Electrical%20engineering | Electrical engineering | Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems that use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.
Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science.
Electrical engineers typically hold a degree in electrical engineering, electronic or electrical and electronic engineering. Practicing engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE).
Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software.
History
Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762 Swedish professor Johan Wilcke invented a device later named electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery.
19th century
In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism.
In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction.
In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative though it was greatly influenced by and based upon two discoveries made in Europe in 1800—Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers) where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy.
Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation.
During these years, the study of electricity was largely considered to be a subfield of physics since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first-degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts.
In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at University of Missouri established the electrical engineering department in 1886. Afterwards, universities and institutes of technology gradually started to offer electrical engineering programs to their students all over the world.
During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts—direct current (DC)—to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse backed AC system and a Thomas Edison backed DC power system, with AC being adopted as the overall standard.
Early 20th century
During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of about 2,100 miles (3,400 km).
Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.
In 1920, Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.
In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives.
In 1948, Claude Shannon published "A Mathematical Theory of Communication" which mathematically describes the passage of information with uncertainty (electrical noise).
Solid-state electronics
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices.
The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959.
The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world.
The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking.
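To make the doubling concrete, the Python sketch below projects a transistor count forward under Moore's law. The 1971 seed of roughly 2,300 transistors (the Intel 4004, discussed below) is used purely as an illustrative starting point.

```python
def transistors(start_count: int, start_year: int, year: int,
                doubling_years: float = 2.0) -> float:
    """Moore's-law projection: the count doubles every `doubling_years` years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Seeded with the Intel 4004's roughly 2,300 transistors (1971):
for year in (1971, 1981, 1991, 2001):
    print(year, f"{transistors(2_300, 1971, year):,.0f}")
```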
The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution.
Subfields
One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today, electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes, certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right.
Power and energy
Power and energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems.
Telecommunications
Telecommunications engineering focuses on the transmission of information across a communication channel such as a coax cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.
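A minimal numerical sketch of the two analog techniques named above: a low-frequency message is impressed onto a carrier either by varying the carrier's amplitude (AM) or its instantaneous phase/frequency (FM). The sample rate, frequencies, and modulation indices below are arbitrary illustrative choices.

```python
import numpy as np

fs = 100_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples
fc, fm = 10_000, 500              # carrier and message frequencies, Hz
m, beta = 0.5, 5.0                # AM modulation index, FM modulation index

message = np.cos(2 * np.pi * fm * t)
am = (1 + m * message) * np.cos(2 * np.pi * fc * t)        # amplitude modulation
fm_wave = np.cos(2 * np.pi * fc * t
                 + beta * np.sin(2 * np.pi * fm * t))      # frequency modulation

print(am[:3], fm_wave[:3])
```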
Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static.
Control engineering
Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation.
Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
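The cruise-control example lends itself to a short simulation. The sketch below uses a purely proportional feedback law and a crude one-parameter vehicle model, both invented for illustration; note that proportional control alone settles with a steady-state offset below the setpoint, which is one reason practical controllers add integral action.

```python
def cruise_control_step(speed, setpoint, kp=0.8, drag=0.1, dt=0.1):
    """One step of proportional feedback: throttle proportional to speed error."""
    throttle = kp * (setpoint - speed)   # feedback law (P controller)
    accel = throttle - drag * speed      # crude vehicle dynamics (illustrative)
    return speed + accel * dt

speed = 20.0                             # initial speed, m/s
for _ in range(200):
    speed = cruise_control_step(speed, setpoint=30.0)
print(f"settles near {speed:.2f} m/s")   # below 30: steady-state offset of P control
```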
Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.
Electronics
Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example is the pneumatic signal conditioner.
Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.
Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.
Microelectronics and nanoelectronics
Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level.
Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.
Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.
Signal processing
Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.
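As a small example of digital signal processing, the sketch below low-pass filters a noisy sampled signal with a 25-tap moving-average FIR filter. All parameters (sample rate, tone frequency, noise level, tap count) are illustrative choices for the sketch.

```python
import numpy as np

fs = 1_000                                     # sample rate, Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)              # 5 Hz tone of interest
noisy = clean + 0.5 * np.random.randn(t.size)  # tone buried in broadband noise

taps = np.ones(25) / 25                        # 25-tap moving-average low-pass FIR
filtered = np.convolve(noisy, taps, mode="same")

print("error power before:", float(np.mean((noisy - clean) ** 2)))
print("error power after: ", float(np.mean((filtered - clean) ** 2)))
```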
Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems.
DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing.
Instrumentation
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.
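As a sketch of the thermocouple readout just mentioned: over a narrow range, a type-K thermocouple's output can be approximated as linear at roughly 41 µV/°C. Real instruments use standardized polynomial tables and proper cold-junction compensation; the single-slope helper below is only an illustration.

```python
def thermocouple_temp_c(voltage_mv: float, sensitivity_uv_per_c: float = 41.0,
                        cold_junction_c: float = 25.0) -> float:
    """Single-slope sketch of a type-K readout (~41 uV/C near room temperature).
    Real instruments use standardized polynomial tables instead."""
    return cold_junction_c + voltage_mv * 1000.0 / sensitivity_uv_per_c

print(f"{thermocouple_temp_c(2.05):.0f} C")  # 2.05 mV -> about 75 C
```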
Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control.
Computers
Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering.
Photonics and optics
Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials).
Related disciplines
Mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles.
Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems.
The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.
In aerospace engineering and robotics, an example is the most recent electric propulsion and ion propulsion.
Education
Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study.
At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.
Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree.
Professional practice
In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law.
Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer.
In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force.
Tools and work
From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunications systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery.
Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.
Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunications systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.
A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology engineers have their own test sets, often specific to a particular data format, and the same is true of television broadcasting.
For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.
The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a Naval ship, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers.
Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.
| Technology | Disciplines | null |
9532 | https://en.wikipedia.org/wiki/Electromagnetism | Electromagnetism | In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
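A numerical sketch of the two force laws named above, in Python: the Coulomb helper computes the electrostatic force magnitude between point charges, and the Lorentz helper handles only the special case of mutually perpendicular velocity and magnetic field, with the electric force aligned with the magnetic one. Both function names and the example numbers are illustrative.

```python
K = 8.9875e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19    # elementary charge, C

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / r**2

def lorentz_force(q: float, e_field: float, v: float, b_field: float) -> float:
    """|F| = |q|(E + vB) for v perpendicular to B, with the electric force
    aligned with the magnetic one (a simplified illustrative case)."""
    return abs(q) * (e_field + v * b_field)

# Electron-proton attraction at a Bohr-radius separation (~5.3e-11 m):
print(f"{coulomb_force(E_CHARGE, -E_CHARGE, 5.3e-11):.2e} N")   # ~8.2e-8 N
```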
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
History
Ancient world
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
19th century
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel.
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire (a numerical sketch of this field follows the list).
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
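For the third effect above, the field magnitude a distance r from a long straight wire carrying current I is B = μ0 I / (2πr). A one-function Python sketch, with illustrative numbers:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def wire_field(current: float, distance: float) -> float:
    """B = mu0 * I / (2 * pi * r): field magnitude around a long straight wire."""
    return MU_0 * current / (2 * math.pi * distance)

print(f"{wire_field(10, 0.01):.1e} T")   # 10 A at 1 cm -> 2.0e-4 T
```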
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it known whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: "A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ..." E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
A fundamental force
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces (e.g., friction, contact forces), known as non-fundamental forces, are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of electrons' movement is a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
Classical electrodynamics
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Thomas-François Dalibard of France conducted Benjamin Franklin's proposed experiment on 10 May 1752, using an iron rod instead of a kite to successfully extract electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
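The fixed speed of light follows directly from the vacuum permittivity and permeability that appear in Maxwell's equations. A minimal numerical check, using CODATA values for the two constants:

# Maxwell's equations fix the vacuum speed of light at
# c = 1/sqrt(mu0 * eps0), independent of any observer's motion --
# the tension with Galilean invariance described above.
import math

MU0 = 1.25663706212e-6    # vacuum permeability, N/A^2
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(f"c = {c:.6e} m/s")  # ~2.997925e8 m/s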
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today, few problems in electromagnetism remain unsolved. These include the apparent absence of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
Extension to nonlinear phenomena
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
Quantities and units
Here is a list of common units related to electromagnetism:
ampere (electric current, SI unit)
coulomb (electric charge)
farad (capacitance)
henry (inductance)
ohm (resistance)
siemens (conductance)
tesla (magnetic flux density)
volt (electric potential)
watt (power)
weber (magnetic flux)
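These units are tied together by the defining relations of electromagnetism. The following sketch shows how most of the listed units can be derived from amperes, volts and seconds; the numbers are arbitrary illustrative values for dimensional bookkeeping, not a physical circuit.

# How the listed SI units interrelate, via their defining relations.
I = 2.0          # current, amperes
V = 10.0         # potential, volts
t = 5.0          # time, seconds

Q = I * t        # charge in coulombs:      1 C  = 1 A*s
R = V / I        # resistance in ohms:      1 ohm = 1 V/A
G = 1.0 / R      # conductance in siemens:  1 S  = 1/ohm
P = V * I        # power in watts:          1 W  = 1 V*A
C = Q / V        # capacitance in farads:   1 F  = 1 C/V
Phi = V * t      # magnetic flux in webers: 1 Wb = 1 V*s
L = Phi / I      # inductance in henries:   1 H  = 1 Wb/A
B = Phi / 0.5    # flux density in teslas over a 0.5 m^2 area: 1 T = 1 Wb/m^2

print(Q, R, G, P, C, Phi, L, B)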
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
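As an illustration of why formulas must be adjusted between systems, the sketch below evaluates Coulomb's law for the same pair of charges in SI units (with the 4πε0 factor) and in Gaussian units (without it); the charge and distance values are arbitrary assumptions.

# The same two charges, the same separation, computed in SI and in
# Gaussian (CGS) units. In Gaussian units Coulomb's law has no
# 4*pi*eps0 factor, which is why formulas differ between systems.
import math

EPS0 = 8.8541878128e-12      # F/m (SI only)
C_TO_STATC = 2.99792458e9    # 1 coulomb ~ 2.998e9 statcoulombs

q_si, r_si = 1e-6, 0.01      # 1 microcoulomb charges, 1 cm apart (C, m)

f_si = q_si**2 / (4 * math.pi * EPS0 * r_si**2)   # newtons

q_cgs = q_si * C_TO_STATC    # statcoulombs
r_cgs = r_si * 100           # centimetres
f_cgs = q_cgs**2 / r_cgs**2  # dynes, no constant needed

print(f"SI:       {f_si:.4f} N")
print(f"Gaussian: {f_cgs:.1f} dyn = {f_cgs * 1e-5:.4f} N")

The two results agree once dynes are converted to newtons (1 N = 10^5 dyn); the difference lies purely in where the constants sit.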
Applications
The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices.
| Physical sciences | Physics | null |
9540 | https://en.wikipedia.org/wiki/Electricity%20generation | Electricity generation | Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage, using for example, the pumped-storage method.
Consumable electricity is not freely available in nature, so it must be "produced", transforming other forms of energy to electricity. Production is carried out in power stations, also called "power plants". Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from the intense magnetic fields generated by the fast-moving charged particles produced by the fusion reaction (see magnetohydrodynamics).
Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power.
History
The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss.
Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph.
Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after, electric lights would be used in public buildings, in businesses, and to power public transport, such as trams and trains.
The first power plants used water power or coal. Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources.
In the 1880s the popularity of electricity grew massively with the introduction of the incandescent light bulb. Although there are 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison and Swan's invention became by far the most successful and popular of all. During the early years of the 19th century, massive jumps in electrical science were made, and by the later 19th century the advancement of electrical technology and engineering led to electricity being part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. With this increase in demand, the potential for profit was seen by many entrepreneurs who began investing in electrical systems to eventually create the first public electricity utilities. This process in history is often described as electrification.
The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved, so did the productivity and efficiency of generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation, and on the economics of generation as well. This conversion of heat energy into mechanical work was similar to that of steam engines, but at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation, as they would become vital to the entire power system that we now use today.
Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, the coordination of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and North America in the 1920s in large cities and urban areas. It was not until the 1930s that rural areas saw the large-scale establishment of electrification.
Methods of generation
Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics.
Generators
Electric generators transform kinetic energy into electricity. This is the most widely used form of electricity generation, and is based on Faraday's law. It can be demonstrated experimentally by rotating a magnet within closed loops of conducting material, e.g. copper wire. Almost all commercial electrical generation uses electromagnetic induction, in which mechanical energy forces a generator to rotate.
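A minimal sketch of the induction principle: for a coil of N turns rotating at angular frequency ω in a uniform field B, the flux is NBA·cos(ωt), so Faraday's law gives an EMF of NBAω·sin(ωt). All parameter values below are illustrative assumptions, not figures from the text.

# Rotating-coil generator: flux through the coil is N*B*A*cos(omega*t),
# so by Faraday's law the EMF is N*B*A*omega*sin(omega*t).
import math

N = 100      # turns of wire (assumed)
B = 0.5      # magnetic flux density, tesla (assumed)
A = 0.02     # coil area, m^2 (assumed)
f = 50.0     # rotation frequency, Hz (grid frequency in much of the world)

omega = 2 * math.pi * f
emf_peak = N * B * A * omega           # volts
print(f"peak EMF: {emf_peak:.1f} V")   # ~314 V

t = 0.002    # seconds
print(f"EMF at t={t} s: {emf_peak * math.sin(omega * t):.1f} V")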
Electrochemistry
Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems. Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge.
Photovoltaic effect
The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems.
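A rough sizing sketch under assumed values: panel output is approximately irradiance × area × efficiency. The 1000 W/m² figure is the standard test-condition irradiance, the 30% efficiency reflects the multijunction cells mentioned above, and the panel area is an assumption.

# Rough solar-panel output: power = irradiance * area * efficiency.
irradiance = 1000.0   # W/m^2, standard test-condition value
area = 1.6            # m^2, typical residential panel (assumed)
efficiency = 0.30     # multijunction cell, per the text

power_dc = irradiance * area * efficiency
print(f"DC output: {power_dc:.0f} W")   # 480 W, before inverter losses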
Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year led by increases in Germany, Japan, United States, China, and India.
Economics
The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in a wide spread of residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand.
All power grids have varying loads on them. The daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal.
Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high.
Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. It may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle.
Generating equipment
Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover, such as an engine or the turbines described above, drives a rotating magnetic field past stationary coils of wire, thereby turning mechanical energy into electricity. The only commercial-scale forms of electricity production that do not employ a generator are photovoltaic solar and fuel cells.
Turbines
Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines.
The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine, invented by Sir Charles Parsons in 1884, currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include:
Steam
Water is boiled by coal burned in a thermal power plant. About 41% of all electricity is generated this way.
Nuclear fission heat created in a nuclear reactor creates steam. Less than 15% of electricity is generated this way.
Renewable energy. The steam is generated by biomass, solar thermal energy, or geothermal power.
Natural gas: turbines are driven directly by gases produced by combustion. Combined cycle plants are driven by both steam and natural gas: they generate power by burning natural gas in a gas turbine and use the residual heat to generate steam. At least 20% of the world's electricity is generated by natural gas.
Water: energy is captured by a water turbine from the movement of water, whether from falling water, the rise and fall of tides, or ocean thermal currents (see ocean thermal energy conversion). Currently, hydroelectric plants provide approximately 16% of the world's electricity.
Wind: the windmill was a very early wind turbine. In 2018 around 5% of the world's electricity was produced from wind.
Turbines can also use heat-transfer liquids other than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, which are currently in development, can further increase efficiency by optimizing their critical pressure and temperature points.
Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages.
World production
Total world generation in 2021 was 28,003 TWh, including coal (36%), gas (23%), hydro (15%), nuclear (10%), wind (6.6%), solar (3.7%), oil and other fossil fuels (3.1%), biomass (2.4%) and geothermal and other renewables (0.33%).
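Converting those percentage shares into absolute generation, as a quick arithmetic check (the shares sum to slightly over 100% because of rounding in the source figures):

# Converting the 2021 generation shares above into absolute terms.
total_twh = 28003
shares = {"coal": 36, "gas": 23, "hydro": 15, "nuclear": 10,
          "wind": 6.6, "solar": 3.7, "oil/other fossil": 3.1,
          "biomass": 2.4, "geothermal/other": 0.33}

for source, pct in shares.items():
    print(f"{source:18s} {total_twh * pct / 100:8.0f} TWh")
print(f"shares sum to {sum(shares.values()):.1f}% (rounding)")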
Production by country
China produced a third of the world's electricity in 2021, largely from coal. The United States produces half as much as China but uses far more natural gas and nuclear.
Environmental concerns
The mix of sources used to generate electrical power varies between countries, as do the associated environmental concerns. In France only 10% of electricity is generated from fossil fuels; the US is higher at 70%, and China is at 80%. The cleanliness of electricity depends on its source. Methane leaks (from the natural gas used to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US.
According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also create district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output.
A fundamental issue regarding centralised generation and the electrical generation methods in use today is the significant negative environmental effects that many of the generation processes have. Fuels such as coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere, greatly increasing global greenhouse gas levels. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources.
Per unit of electricity generated, the life-cycle greenhouse gas emissions of coal- and gas-fired power are almost always at least ten times those of other generation methods.
Centralised and distributed generation
Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept being that multi-megawatt or gigawatt scale large stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run by fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used.
Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity to smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years distributed generation has seen a surge in popularity due to its propensity to use renewable generation methods such as rooftop solar.
Technologies
Centralised energy sources are large power plants that produce huge amounts of electricity for a large number of consumers. Most power plants used in centralised generation are thermal power plants, meaning that they burn a fuel to produce pressurised steam, which in turn spins a turbine and generates electricity. This is the traditional way of producing energy. The process relies on several forms of technology to produce widespread electricity: coal, natural gas and nuclear forms of thermal generation. More recently, solar and wind have become large scale.
Solar
Wind
Coal
Natural gas
Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen which in turn combusts and expands through the turbine to force a generator to spin.
Natural gas power plants are more efficient than coal-fired generation; they still contribute to climate change, though not as heavily as coal. Not only do they produce carbon dioxide from the ignition of natural gas, but the extraction of the gas releases a significant amount of methane into the atmosphere.
Nuclear
Nuclear power plants create electricity through steam turbines where the heat input is from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when nuclear atoms are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process.
Normal emissions due to nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem.
Electricity generation capacity by country
The table lists 45 countries with their total electricity capacities. The data is from 2022.
According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatts (TW), more than four times the total global electricity capacity in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the global average per-capita electricity capacity in 1981.
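A quick sanity check on these figures, assuming a 2022 world population of roughly 7.95 billion (an assumption, not a figure from the text):

# 8.9 TW of capacity spread over the 2022 world population gives
# roughly the stated per-capita value.
capacity_w = 8.9e12    # total global capacity, watts
population = 7.95e9    # world population in 2022 (assumed)

print(f"per-capita capacity: {capacity_w / population:.0f} W")  # ~1120 W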
Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. All developed countries have an average per-capita electricity capacity above the global average per-capita electricity capacity, with the United Kingdom having the lowest average per-capita electricity capacity of all other developed countries.
| Technology | Electricity generation and distribution | null |
9541 | https://en.wikipedia.org/wiki/Design%20of%20experiments | Design of experiments | The design of experiments, also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
History
Statistical experiments, following Charles S. Peirce
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.
Optimal designs for regression models
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.
Fisher's principles
A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
Comparison
In some fields of study it is not possible to have independent measurements to a traceable metrology standard. Comparisons between treatments are much more valuable and are usually preferable; treatments are often compared against a scientific control or traditional treatment that acts as a baseline.
Randomization
Random assignment is the process of assigning individuals at random to different groups or conditions in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which causes effects due to factors other than the treatment to appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
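A minimal sketch of both procedures, simple and stratified randomization, using only Python's standard library; unit and stratum names are hypothetical placeholders.

# Simple randomization of units to two groups, then a stratified
# variant where randomization happens within each subpopulation.
import random

rng = random.Random(42)

units = [f"unit{i}" for i in range(8)]
shuffled = units[:]
rng.shuffle(shuffled)
treatment, control = shuffled[:4], shuffled[4:]
print("treatment:", treatment)
print("control:  ", control)

# Stratified: randomize within each stratum so groups stay balanced
strata = {"young": [f"y{i}" for i in range(4)],
          "old":   [f"o{i}" for i in range(4)]}
assignment = {"treatment": [], "control": []}
for members in strata.values():
    members = members[:]
    rng.shuffle(members)
    half = len(members) // 2
    assignment["treatment"] += members[:half]
    assignment["control"] += members[half:]
print(assignment)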
Statistical replication
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides different information from the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
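As a concrete illustration, the sketch below builds a set of T − 1 = 3 orthogonal contrasts for T = 4 treatments (a Helmert-style set chosen purely for illustration) and verifies that the rows are mutually orthogonal and each sums to zero.

# Orthogonal contrasts for T = 4 treatments (so T - 1 = 3 contrasts).
# Each row sums to zero and the rows are mutually orthogonal, so each
# contrast estimates a separate, uncorrelated piece of information.
import numpy as np

contrasts = np.array([
    [1, -1,  0,  0],   # treatment 1 vs treatment 2
    [1,  1, -2,  0],   # mean of 1,2 vs treatment 3
    [1,  1,  1, -3],   # mean of 1,2,3 vs treatment 4
])

print(contrasts @ contrasts.T)  # off-diagonal zeros confirm orthogonality
print(contrasts.sum(axis=1))    # each contrast sums to zero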
Multifactorial experiments
Multifactorial experiments are used instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
Example
This example of design experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.
Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, ..., θ8.
We consider two different experiments:
Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8.
Do the eight weighings according to the following schedule—a weighing matrix:
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is
Similar estimates can be found for the weights of the other items:
The question of design of experiments is: which experiment is better?
The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.
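The variance comparison can be checked by simulation. Since the weighing matrix itself is not reproduced above, the sketch below assumes a standard 8×8 Hadamard sign matrix (+1 meaning an object in the left pan, −1 the right), which has the properties the example requires; the true weights and error level are arbitrary.

# Monte Carlo check of the variance claim. Assumes an 8x8 Hadamard
# weighing matrix H (so H^T H = 8*I); estimates are H^T Y / 8.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(1, 10, size=8)   # true weights (arbitrary)
sigma = 0.1                          # per-weighing error s.d.

# Sylvester construction of an 8x8 Hadamard matrix
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

n_trials = 200_000
# Experiment 1: weigh each object alone -> one noisy reading each
est1 = theta + sigma * rng.standard_normal((n_trials, 8))
# Experiment 2: eight combined weighings per the Hadamard schedule
Y = theta @ H.T + sigma * rng.standard_normal((n_trials, 8))
est2 = Y @ H / 8

print(est1.var(axis=0)[0])   # ~ sigma^2     = 0.0100
print(est2.var(axis=0)[0])   # ~ sigma^2 / 8 = 0.00125

The empirical variances come out near σ² for the one-at-a-time experiment and σ²/8 for the combined design, matching the claim above.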
Many problems of the design of experiments involve combinatorial designs, as in this example and others.
Avoiding false positives
False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.
Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.
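The inflation of false positives from undisclosed multiple analyses is easy to quantify. The simulation below treats each extra analysis as an independent 5% chance of a spurious p < .05 under the null, a simplifying assumption (real analyses of the same data are correlated, so the true inflation is usually somewhat less extreme):

# If a researcher runs k independent null-effect analyses and reports
# any p < .05, the chance of a false positive grows far beyond 5%.
import random

def false_positive_rate(k, alpha=0.05, n_sims=100_000):
    hits = 0
    for _ in range(n_sims):
        # each analysis has probability alpha of p < alpha under the null
        if any(random.random() < alpha for _ in range(k)):
            hits += 1
    return hits / n_sims

for k in (1, 5, 20):
    print(f"{k:2d} analyses -> {false_positive_rate(k):.3f}")
# ~0.050, ~0.226, ~0.642: close to 1 - (1 - 0.05)^k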
P-hacking can be prevented by preregistering studies, in which researchers must submit their data analysis plan to the journal in which they wish to publish before they even start their data collection, so that no data manipulation is possible.
Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the data so that there is no way to know which group participants belong to before outliers are potentially removed.
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
Discussion topics when setting up an experimental design
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
How many factors does the design have, and are the levels of these factors fixed or random?
Are control conditions needed, and what should they be?
Manipulation checks: did the manipulation really work?
What are the background variables?
What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
What is the relevance of interactions between factors?
What is the influence of delayed effects of substantive factors on outcomes?
How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
What about using a proxy pretest?
Are there confounding variables?
Should the client/patient, researcher or even the analyst of the data be blind to conditions?
What is the feasibility of subsequent application of different conditions to the same units?
How many of each control and noise factors should be taken into account?
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group but without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
Causal attributions
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be careful not to make causal attributions when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design.
Statistical control
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Experimental designs after Fisher
Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to Indian Statistical Institute in early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification.
Human participant constraints
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
| Mathematics | Statistics and probability | null |
9550 | https://en.wikipedia.org/wiki/Electricity | Electricity | Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.
The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts.
Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.
The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force behind the Second Industrial Revolution, with electricity's versatility driving transformations in both industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society.
History
Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artefact was electrical in nature.
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Isaac Newton made early investigations into electricity, with an idea of his written down in his book Opticks arguably the beginning of the field theory of the electric force.
Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.
In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels.
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.
Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948.
Concepts
Electric charge
By modern convention, the charge carried by electrons is defined as negative, and that by protons is positive. Before these particles were discovered, Benjamin Franklin had defined a positive charge as being the charge acquired by a glass rod when it is rubbed with a silk cloth. A proton by definition carries a charge of exactly 1.602176634×10⁻¹⁹ coulombs. This value is also defined as the elementary charge. No object can have a charge smaller than the elementary charge, and any amount of charge an object may carry is a multiple of the elementary charge. An electron has an equal negative charge, i.e. −1.602176634×10⁻¹⁹ coulombs. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.
The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.
The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.
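The ratio quoted above can be reproduced from standard constants; because both forces fall off as the inverse square of distance, the separation cancels and no distance needs to be assumed.

# Electrostatic repulsion vs gravitational attraction between two
# electrons: the inverse-square distance factors cancel in the ratio.
G = 6.67430e-11              # gravitational constant, N*m^2/kg^2
K_E = 8.9875517923e9         # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

ratio = (K_E * E_CHARGE**2) / (G * M_E**2)
print(f"{ratio:.2e}")        # ~4.2e42

The result, about 4×10⁴², is the figure cited above.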
Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.
Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer.
Electric current
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
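To see why the drift velocity is so small, the following illustrative calculation applies the standard relation v = I / (nAq) to a hypothetical 1 A current in a 1 mm² copper wire; the carrier density is a typical textbook value for copper, assumed here for the example rather than taken from the text.

```python
# Illustrative estimate of electron drift velocity v = I / (n * A * q).

n = 8.5e28            # free-electron density of copper, m^-3 (textbook value)
A = 1e-6              # cross-sectional area, m^2 (1 mm^2)
q = 1.602176634e-19   # elementary charge, C
I = 1.0               # current, A

v = I / (n * A * q)
print(f"drift velocity: {v * 1000:.3f} mm/s")  # ~0.073 mm/s
```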
Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance. These properties, however, can also become important when circuitry is subjected to transients, such as when first energised.
Electric field
The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.
An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field.
The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, they originate at positive charges and terminate at negative charges; second, they must enter any good conductor at right angles, and third, they may never cross nor close in on themselves.
A hollow conducting body carries all its charge on its outer surface. The field is therefore 0 at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell that isolates its interior from outside electrical effects.
The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.
The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect.
Electric potential
The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application; a more useful concept is that of electric potential difference, the energy required to move a unit charge between two specified points. The electric field is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.
For practical purposes, defining a common reference point to which potentials may be expressed and compared is useful. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge and is therefore electrically uncharged—and unchargeable.
Electric potential is a scalar quantity. That is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface.
The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the negative of the local gradient of the electric potential. Usually expressed in volts per metre, the field points along the line of greatest slope of potential, from higher to lower potential, and is strongest where the equipotentials lie closest together.
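Written as a formula, this relationship is as follows; the minus sign encodes the fact that the field points from higher to lower potential:

```latex
% Field as the negative gradient of the potential, in volts per metre.
\[
  \mathbf{E} = -\nabla V
\]
```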
Electromagnets
Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too.
Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.
This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.
Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
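Faraday's law of induction, as stated above, can be written compactly; here the script E denotes the induced electromotive force and Phi_B the magnetic flux through the loop:

```latex
% Faraday's law: induced emf equals minus the rate of change of flux.
\[
  \mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
\]
```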
Electric circuits
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.
The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.
The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp.
The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.
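A minimal numerical sketch of that charging behaviour, with arbitrary illustrative component values (1 kΩ, 1 µF, 5 V), shows the current decaying toward zero as the capacitor fills:

```python
# Forward-Euler integration of a resistor-capacitor charging circuit.

R, C, V_s = 1e3, 1e-6, 5.0   # resistance (ohm), capacitance (F), supply (V)
dt, steps = 1e-5, 500        # 10 us time step, 5 ms total (5 time constants)

v_c = 0.0                    # capacitor voltage starts at zero
for k in range(steps):
    i = (V_s - v_c) / R      # Ohm's law across the resistor
    v_c += (i / C) * dt      # dV/dt = I/C for a capacitor
    if k % 100 == 0:
        print(f"t={k*dt*1e3:.1f} ms  current={i*1e3:.3f} mA  v_c={v_c:.3f} V")

# The printed current falls toward zero as the capacitor fills,
# matching the steady-state blocking behaviour described above.
```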
The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current but opposes a rapidly changing one.
Electric power
Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.
Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is

P = work done per unit time = QV/t = IV

where
Q is electric charge in coulombs
t is time in seconds
I is electric current in amperes
V is electric potential or voltage in volts
Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.
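A short worked example of the kilowatt-hour arithmetic above, using an illustrative 2 kW load: energy is power multiplied by running time, and 1 kWh = 3.6 MJ.

```python
# Energy delivered by a hypothetical 2 kW appliance running for 3 hours.

power_kw = 2.0   # illustrative load, kW
hours = 3.0      # running time, h

energy_kwh = power_kw * hours
energy_mj = energy_kwh * 3.6   # 1 kWh = 3.6 MJ
print(f"{energy_kwh:.1f} kWh = {energy_mj:.1f} MJ")  # 6.0 kWh = 21.6 MJ
```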
Electronics
Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.
Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering.
Electromagnetic wave
Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge are one of the great milestones of theoretical physics.
The work of many researchers enabled the use of electronics to convert signals into high-frequency oscillating currents which, via suitably shaped conductors, permit the transmission and reception of these signals as radio waves over very long distances.
Production, storage and uses
Generation and transmission
In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the late eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity.
Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect.
Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China.
Environmental concerns with electricity generation, in specific the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels.
Transmission and storage
The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.
Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal storage, and mechanical storage (such as pumped hydropower).
Applications
Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has been toward deregulation of the electrical power sector.
The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps).
The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership.
Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.
Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square.
Electricity and the natural world
Physiological effects
A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century.
Electrical phenomena in nature
Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly.
Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. Fish of the order Gymnotiformes, of which the best-known example is the electric eel, detect or stun their prey via high voltages generated by modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.
Cultural perception
It is said that in the 1850s, British politician William Ewart Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." However, according to Snopes.com "the anecdote should be considered apocryphal because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death."
In the 19th and early 20th centuries, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.
As public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.
With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it attracted particular attention from popular culture only when it stopped flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures.
| Physical sciences | Science and medicine | null |
9555 | https://en.wikipedia.org/wiki/Ericaceae | Ericaceae | The Ericaceae () are a family of flowering plants, commonly known as the heath or heather family, found most commonly in acidic and infertile growing conditions. The family is large, with about 4,250 known species spread across 124 genera, making it the 14th most species-rich family of flowering plants. The many well known and economically important members of the Ericaceae include the cranberry, blueberry, huckleberry, rhododendron (including azaleas), and various common heaths and heathers (Erica, Cassiope, Daboecia, and Calluna for example).
Description
The Ericaceae contain a morphologically diverse range of taxa, including herbs, dwarf shrubs, shrubs, and trees. Their leaves are usually evergreen, alternate or whorled, simple and without stipules. Their flowers are hermaphrodite and show considerable variability. The petals are often fused (sympetalous) with shapes ranging from narrowly tubular to funnelform or widely urn-shaped. The corollas are usually radially symmetrical (actinomorphic) and urn-shaped, but many flowers of the genus Rhododendron are somewhat bilaterally symmetrical (zygomorphic). Anthers open by pores.
Taxonomy
Michel Adanson used the term Vaccinia to describe a similar family, but Antoine Laurent de Jussieu first used the term Ericaceae. The name comes from the type genus Erica, which appears to be derived from the Greek word ἐρείκη (ereíkē). The exact meaning is difficult to interpret, but some sources show it as meaning 'heather'. The name may have been used informally to refer to the plants before Linnaean times, and simply been formalised when Linnaeus described Erica in 1753, and then again when Jussieu described the Ericaceae in 1789.
Historically, the Ericaceae included both subfamilies and tribes. In 1971, Stevens, who outlined the history from 1876 and in some instances 1839, recognised six subfamilies (Rhododendroideae, Ericoideae, Vaccinioideae, Pyroloideae, Monotropoideae, and Wittsteinioideae), and further subdivided four of the subfamilies into tribes, the Rhododendroideae having seven tribes (Bejarieae, Rhodoreae, Cladothamneae, Epigaeae, Phyllodoceae, Daboecieae, and Diplarcheae). Within tribe Rhodoreae, five genera were described, Rhododendron L. (including Azalea L. pro parte), Therorhodion Small, Ledum L., Tsusiophyllum Max., Menziesia J. E. Smith, that were eventually transferred into Rhododendron, along with Diplarche from the monogeneric tribe Diplarcheae.
In 2002, systematic research resulted in the inclusion of the formerly recognised families Empetraceae, Epacridaceae, Monotropaceae, Prionotaceae, and Pyrolaceae into the Ericaceae based on a combination of molecular, morphological, anatomical, and embryological data, analysed within a phylogenetic framework. The move significantly increased the morphological and geographical range found within the group. One possible classification of the resulting family includes 9 subfamilies, 126 genera, and about 4,000 species:
Enkianthoideae Kron, Judd & Anderberg (one genus, 16 species)
Pyroloideae Kosteltsky (4 genera, 40 species)
Monotropoideae Arnott (10 genera, 15 species)
Arbutoideae Niedenzu (up to six genera, about 80 species)
Cassiopoideae Kron & Judd (one genus, 12 species)
Ericoideae Link (19 genera, 1790 species)
Harrimanelloideae Kron & Judd (one species)
Epacridoideae Arn. (=Styphelioideae Sweet) (35 genera, 545 species)
Vaccinioideae Arnott (50 genera, 1580 species)
Genera
Distribution and ecology
The Ericaceae have a nearly worldwide distribution. They are absent from continental Antarctica, parts of the high Arctic, central Greenland, northern and central Australia, and much of the lowland tropics and neotropics.
The family is largely composed of plants that can tolerate acidic, infertile, shady conditions. Due to their tolerance of acidic conditions, this plant family is also typical of peat bogs and blanket bogs; examples include Rhododendron groenlandicum and species in the genus Kalmia. In eastern North America, members of this family often grow in association with an oak canopy, in a habitat known as an oak-heath forest. Plants in Ericaceae, especially species in Vaccinium, rely on buzz pollination for successful pollination to occur.
The majority of ornamental species from Rhododendron are native to East Asia, but most varieties cultivated today are hybrids. Most rhododendrons grown in the United States are cultivated in the Pacific Northwest. The United States is the top producer of both blueberries and cranberries, with the state of Maine growing the majority of lowbush blueberry. The wide distribution of genera within Ericaceae has led to situations in which distinct American and European plants share the same common name, e.g. blueberry (Vaccinium corymbosum in North America and V. myrtillus in Europe) and cranberry (V. macrocarpon in America and V. oxycoccos in Europe).
Mycorrhizal relationships
Like other stress-tolerant plants, many Ericaceae have mycorrhizal fungi to assist with extracting nutrients from infertile soils, as well as evergreen foliage to conserve absorbed nutrients. This trait is not found in the Clethraceae and Cyrillaceae, the two families most closely related to the Ericaceae. Most Ericaceae (excluding the Monotropoideae, and some Epacridoideae) form a distinctive mycorrhizal association, in which fungi grow in and around the roots and provide the plant with nutrients. The Pyroloideae are mixotrophic and gain sugars from the mycorrhizae, as well as nutrients.
The cultivation of blueberries, cranberries, and wintergreen for their fruit and oils relies especially on these unique relationships with fungi, as a healthy mycorrhizal network in the soil helps the plants to resist environmental stresses that might otherwise damage crop yield. Ericoid mycorrhizae are responsible for a high rate of uptake of nitrogen, which causes naturally low levels of free nitrogen in ericoid soils. These mycorrhizal fungi may also increase the tolerance of Ericaceae to heavy metals in soil, and may cause plants to grow faster by producing phytohormones.
Heathlands
In many parts of the world, a "heath" or "heathland" is an environment characterised by an open dwarf-shrub community found on low-quality acidic soils, generally dominated by plants in Ericaceae. Heathlands are a broadly anthropogenic habitat, requiring regular grazing or burning to prevent succession. Heaths are particularly abundant, and constitute important cultural elements, in Norway, the United Kingdom, the Netherlands, Germany, Spain, Portugal, and other countries in Central and Western Europe. The most common examples of plants in Ericaceae which dominate heathlands are Calluna vulgaris, Erica cinerea, Erica tetralix, and Vaccinium myrtillus.
In heathland, plants in Ericaceae serve as host plants to the butterfly Plebejus argus. Other insects, such as Saturnia pavonia, Myrmeleotettix maculatus, Metrioptera brachyptera, and Picromerus bidens are closely associated with heathland environments. Reptiles thrive in heaths due to an abundance of sunlight and prey, and birds hunt the insects and reptiles which are present.
Some evidence suggests eutrophic rainwater can convert ericoid heaths with species such as Erica tetralix to grasslands. Nitrogen is particularly suspect in this regard, and may be causing measurable changes to the distribution and abundance of some ericaceous species.
| Biology and health sciences | Ericales | null |
9559 | https://en.wikipedia.org/wiki/Electrical%20network | Electrical network | An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits (although networks without a closed loop are often imprecisely referred to as "circuits").
A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties.
A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools.
Classification
By passivity
An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source.
An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit.
Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors.
By linearity
Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response.
Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear.
By lumpiness
Discrete passive components (resistors, capacitors and inductors) are called lumped elements because all of their, respectively, resistance, capacitance and inductance is assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits.
A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter.
Classification of sources
Sources can be classified as independent sources and dependent sources.
Independent
An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). The strength of voltage or current is not changed by any variation in the connected network.
Dependent
Dependent sources deliver a power, voltage, or current whose value depends upon a particular voltage or current elsewhere in the circuit, according to the type of source.
Applying electrical laws
A number of electrical laws apply to all linear resistive networks. These include:
Kirchhoff's current law: The sum of all currents entering a node is equal to the sum of all currents leaving the node.
Kirchhoff's voltage law: The directed sum of the electrical potential differences around a loop must be zero.
Ohm's law: The voltage across a resistor is equal to the product of the resistance and the current flowing through it.
Norton's theorem: Any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: Any network of voltage or current sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Superposition theorem: In a linear network with several independent sources, the response in a particular branch when all the sources are acting simultaneously is equal to the linear sum of individual responses calculated by taking one independent source at a time.
Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically. The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components.
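As an illustration, the sketch below builds and solves such a set of simultaneous equations for a small hypothetical resistive network (all component values are made up for the example): a 9 V source feeds node A through R1, R4 ties A to ground, R2 connects A to B, and R3 ties B to ground. Kirchhoff's current law at each node plus Ohm's law per resistor yield a linear system for the node voltages.

```python
import numpy as np

# Nodal analysis of a two-node resistive network: G @ v = i.
V_S = 9.0
R1, R2, R3, R4 = 1e3, 2e3, 3e3, 4e3

G = np.array([
    [1/R1 + 1/R2 + 1/R4, -1/R2],         # KCL at node A
    [-1/R2,              1/R2 + 1/R3],   # KCL at node B
])
i = np.array([V_S / R1, 0.0])            # source current injected at A

v = np.linalg.solve(G, i)
print(f"V_A = {v[0]:.3f} V, V_B = {v[1]:.3f} V")
```

Note that V_B comes out as V_A scaled by the R2-R3 voltage divider, which is a quick sanity check on the solution.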
Design methods
To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model.
Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes.
Network simulation software
More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP, or symbolically using software such as SapWin.
Linearization around operating point
When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across, and currents through, each element of the circuit conform to the voltage/current equations governing that element.
Once the steady state solution is found, the operating points of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operation point to obtain the small-signal estimate of the voltages and currents. This is an application of Ohm's Law. The resulting linear circuit matrix can be solved with Gaussian elimination.
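A sketch of that linearization step, using the Shockley diode equation as the non-linear element; the saturation current, thermal voltage, and operating point below are illustrative assumptions, not values from any particular simulator:

```python
import math

# Shockley diode equation I = I_s * (exp(V/V_T) - 1) as the non-linear element.
I_S = 1e-12     # saturation current, A (assumed)
V_T = 0.02585   # thermal voltage at ~300 K, V

def diode_current(v):
    return I_S * (math.exp(v / V_T) - 1.0)

# Operating point, taken as given by the (pretend) steady-state solve:
v_op = 0.65
i_op = diode_current(v_op)

# Small-signal conductance g = dI/dV at the operating point; the
# non-linear diode is replaced by this linear conductance when
# building the circuit matrix.
g = I_S * math.exp(v_op / V_T) / V_T

dv = 1e-3  # a 1 mV perturbation
print(f"true:   {diode_current(v_op + dv) - i_op:.4e} A")
print(f"linear: {g * dv:.4e} A")   # close agreement for small dv
```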
Piecewise-linear approximation
Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
| Physical sciences | Electrical circuits | null |
9566 | https://en.wikipedia.org/wiki/Empty%20set | Empty set | In mathematics, the empty set or void set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set.
Any set other than the empty set is called non-empty.
In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty).
Notation
Common notations for the empty set include "{ }" and "∅". The latter symbol was introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. In the past, "0" (the numeral zero) was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation.
The symbol ∅ is available at Unicode point U+2205. It can be coded in HTML as &empty; and as &#8709; or as &#x2205;. It can be coded in LaTeX as \varnothing or as \emptyset.
When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead.
Properties
In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of "the empty set" rather than "an empty set".
The only subset of the empty set is the empty set itself; equivalently, the power set of the empty set is the set containing only the empty set. The number of elements of the empty set (i.e., its cardinality) is zero. The empty set is the only set with either of these properties.
For any set A:
The empty set is a subset of A
The union of A with the empty set is A
The intersection of A with the empty set is the empty set
The Cartesian product of A and the empty set is the empty set
For any property P:
For every element of ∅, the property P holds (vacuous truth).
There is no element of ∅ for which the property P holds.
Conversely, if for some property P and some set V, the following two statements hold:
For every element of V the property P holds
There is no element of V for which the property P holds
then V = ∅.
By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A. Indeed, if it were not true that every element of ∅ is in A, then there would be at least one element of ∅ that is not present in A. Since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set."
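The vacuous truth described here can be checked directly; in Python, a universally quantified statement over an empty collection is always true, and an existentially quantified one is always false:

```python
empty = set()
A = {1, 2, 3}

print(all(x in A for x in empty))    # True: every element of the empty set is in A
print(any(x > 9000 for x in empty))  # False: no element of the empty set exists
print(empty <= A)                    # True: built-in subset test agrees
```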
In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set.
Operations on the empty set
When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set (the empty sum) is zero. The reason for this is that zero is the identity element for addition. Similarly, the product of the elements of the empty set (the empty product) should be considered to be one, since one is the identity element for multiplication.
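Python's standard library follows exactly these conventions, returning the identity elements 0 and 1:

```python
import math

print(sum([]))        # 0  (empty sum: identity element for addition)
print(math.prod([]))  # 1  (empty product: identity element for multiplication)
```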
A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation (the empty permutation, since 0! = 1), and it is vacuously true that no element (of the empty set) can be found that retains its original position.
In other areas of mathematics
Extended real numbers
Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted which is defined to be less than every other extended real number, and positive infinity, denoted which is defined to be greater than every other extended real number), we have that:
sup ∅ = −∞ and inf ∅ = +∞.
That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators.
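The same identity-element convention is convenient in programming, where a maximum or minimum taken over a possibly empty collection needs a neutral default:

```python
empty = []
print(max(empty, default=float("-inf")))  # -inf, like sup of the empty set
print(min(empty, default=float("inf")))   # +inf, like inf of the empty set
```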
Topology
In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact.
The closure of the empty set is empty. This is known as "preservation of nullary unions."
Category theory
If A is a set, then there exists precisely one function from ∅ to A, the empty function. As a result, the empty set is the unique initial object of the category of sets and functions.
The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set.
Set theory
In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal α is defined as α ∪ {α}. Thus, we have 1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ, such that the Peano axioms of arithmetic are satisfied.
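A small sketch of this construction, modelling each natural number as the (frozen) set of all smaller ones, starting from the empty set:

```python
def successor(n: frozenset) -> frozenset:
    """Von Neumann successor: S(n) = n ∪ {n}."""
    return n | frozenset({n})

zero = frozenset()       # 0 is the empty set
one = successor(zero)    # {0}
two = successor(one)     # {0, 1}
three = successor(two)   # {0, 1, 2}

# Each ordinal's cardinality equals the number it represents.
print(len(zero), len(one), len(two), len(three))  # 0 1 2 3
```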
Questioned existence
Historical issues
In the context of sets of real numbers, Cantor wrote "P ≡ O" to denote "P contains no single point". This notation was utilized in definitions; for example, Cantor defined two sets as being disjoint if their intersection has an absence of points; however, it is debatable whether Cantor viewed O as an existent set on its own, or merely used "≡ O" as an emptiness predicate. Zermelo accepted ∅ itself as a set, but considered it an "improper set".
Axiomatic set theory
In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways:
Standard first-order logic implies, merely from the logical axioms, that something exists, and in the language of set theory, that thing must be a set. Now the existence of the empty set follows easily from the axiom of separation.
Even using free logic (which does not logically imply that something exists), there is already an axiom implying the existence of at least one set, namely the axiom of infinity.
Philosophical issues
While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians.
The empty set is not the same thing as nothing; rather, it is a set with nothing inside it, and a set is always something. This issue can be overcome by viewing a set as a bag: an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king."
The popular syllogism
Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness
is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves.
Jonathan Lowe argues that while the empty set
was undoubtedly an important landmark in the history of mathematics, … we should not assume that its utility in calculation is dependent upon its actually denoting some object.
it is also the case that:
"All that we are ever informed about the empty set is that it (1) is a set, (2) has no members, and (3) is unique amongst sets in having no members. However, there are very many things that 'have no members', in the set-theoretical sense—namely, all non-sets. It is perfectly clear why these things have no members, for they are not sets. What is unclear is how there can be, uniquely amongst sets, a which has no members. We cannot conjure such an entity into existence by mere stipulation."
George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members.
| Mathematics | Discrete mathematics | null |
9588 | https://en.wikipedia.org/wiki/Extraterrestrial%20life | Extraterrestrial life | Extraterrestrial life, or alien life (colloquially, alien), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.
Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" in The City of God.
Pre-modern writers typically assumed extraterrestrial "worlds" are inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space; which would appear similar to the Sun, from an exterior perspective, due to a layer of "fiery brightness" in the outer layer of the atmosphere. He theorised all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation.
When considering the atmospheric composition and ecosystems hosted by extraterrestrial bodies, extraterrestrial life can seem more like speculation than reality, given the harsh conditions and the disparate chemical composition of their atmospheres compared to the life-abundant Earth. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Hydrothermal vents, acidic hot springs, and volcanic lakes are examples of environments where life formed under difficult circumstances; they provide parallels to the extreme environments on other planets and support the possibility of extraterrestrial life.
Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit communications.
The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared theme is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue that it may be dangerous to actively draw attention to Earth.
Context
Initially, after the Big Bang the universe was too hot to allow life. 15 million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell in it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia.
During most of their stellar evolution, stars combine hydrogen nuclei to make helium nuclei by stellar fusion, and the small mass difference lets the star release the extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. More massive stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on until iron. In the end, the star blows much of its content back into the interstellar medium, where it joins clouds that eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not unique to the Solar System.
Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for the study of extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause long time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 has left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would take roughly 100,000 years to reach it. With current technology such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter amounts to more combined matter than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology.
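As a rough check of the figures just quoted, the travel-time arithmetic can be sketched in a few lines of Python; the speed and distance are the approximate values from the text, not precise measurements:

    # Travel time to Alpha Centauri at Voyager 2's quoted speed.
    LIGHT_YEAR_KM = 9.461e12          # kilometres in one light-year
    distance_km = 4.4 * LIGHT_YEAR_KM # distance to Alpha Centauri
    speed_km_per_h = 50_000           # Voyager 2's speed leaving the Solar System
    hours = distance_km / speed_km_per_h
    years = hours / (24 * 365.25)
    print(f"{years:,.0f} years")      # about 95,000, i.e. roughly 100,000 years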
There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even to actually have such liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution.
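One common first-order approximation, used here only as an illustration, scales the Sun's habitable-zone boundaries by the square root of a star's luminosity. The solar boundary values of 0.95 and 1.37 AU in the sketch below are one published estimate among several, and real models add many corrections:

    import math

    def habitable_zone_au(luminosity_in_suns):
        """Rough inner/outer habitable-zone radii in AU, scaling the
        Sun's zone by the square root of the stellar luminosity."""
        inner_sun, outer_sun = 0.95, 1.37  # illustrative solar boundaries
        scale = math.sqrt(luminosity_in_suns)
        return inner_sun * scale, outer_sun * scale

    # A red dwarf with 2% of the Sun's luminosity: the zone lies far closer in.
    print(habitable_zone_au(0.02))  # roughly (0.13, 0.19) AU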
The Big Bang took place 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or even billions of years ago. The brief times of existence of Earth's species, when considered from a cosmic perspective, may suggest that extraterrestrial life may be equally fleeting under such a scale.
Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation, and may have stricter requirements. A celestial body may not have any life on it, even if it is habitable.
Likelihood of existence
It is unclear if life and intelligent life are ubiquitous in the cosmos or rare. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a similar habitability to Earth, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the chemical elements that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth.
Other authors consider instead that life in the cosmos, or at least multicellular life, may be actually rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilisations in the Milky Way galaxy. The Drake equation is:

N = R* × fp × ne × fl × fi × fc × L
where:
N = the number of Milky Way galaxy civilisations already capable of communicating across interplanetary space
and
R* = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life
fl = the fraction of planets that actually support life
fi = the fraction of planets with life that evolves to become intelligent life (civilisations)
fc = the fraction of civilisations that develop a technology to broadcast detectable signs of their existence into space
L = the length of time over which such civilisations broadcast detectable signals into space
Drake proposed illustrative estimates for each factor, but the numbers on the right side of the equation are agreed to be speculative and open to substitution.
The Drake equation has proved controversial: although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw firm conclusions from the equation.
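Numerically, the equation is a straightforward product of its factors. The Python sketch below computes N for arbitrary illustrative inputs; as noted above, none of the factors is reliably known, so the output carries no scientific weight:

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        """Drake equation: estimated number of detectable civilisations."""
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    # Every value below is an arbitrary placeholder, not a measurement.
    N = drake(R_star=1.0,  # new stars formed per year in the galaxy
              f_p=0.5,     # fraction of stars with planets
              n_e=2,       # potentially habitable planets per such system
              f_l=0.5,     # fraction of those that develop life
              f_i=0.1,     # fraction of those that develop intelligence
              f_c=0.1,     # fraction that emit detectable signals
              L=10_000)    # years a civilisation remains detectable
    print(N)  # 50.0 with these placeholder inputs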
Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten per cent of all Sun-like stars have a system of planets, i.e. there are some 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone.
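The "one out of a billion" estimate above is a single multiplication; the two lines of Python below reproduce it, with both inputs taken as the assumed values from the text:

    stars_with_planets = 6.25e18      # assumed stars with planetary systems
    print(stars_with_planets * 1e-9)  # 6.25e9 -> about 6.25 billion systems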
The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation to the Fermi paradox.
Biochemical basis
If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist.
The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are stars; life on Earth, for example, depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: in those media atoms move either too fast or too slow for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane.
Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. In Earth's crust the most abundant of those elements is silicon, in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages compared to carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life.
Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it.
The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research in assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches.
Harsh environmental conditions on Earth harboring life
Conditions on the other planets of the Solar System, and presumably on planets in galaxies beyond the Milky Way, are very harsh and seem too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, a lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to harbor life. Fossil evidence, along with theories backed by years of research, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. These environments are extreme compared to the typical ecosystems that most life on Earth now inhabits: hydrothermal vents are scorching hot where magma escaping from the Earth's mantle meets much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding these vents, which suggests that some form of life could be supported even in environments as harsh as those on other planets.
What makes these harsh environments plausible sites for the origin of life on Earth, and for the possible emergence of life on other planets, is that the required chemical reactions can occur spontaneously. For example, the hydrothermal vents found on the ocean floor support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The environment of the early Earth was reducing, and such carbon-fixing compounds were therefore necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds that occur around hydrothermal vents could also occur on their surfaces and possibly give rise to extraterrestrial life.
Planetary habitability in the Solar System
The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now.
The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. A runaway greenhouse effect makes its surface the hottest in the Solar System; it has sulfuric acid clouds, all of its surface liquid water has been lost, and its thick carbon-dioxide atmosphere exerts enormous pressure. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds.
Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
The gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in a permanent deep-freeze, but cannot be ruled out completely.
Although the giant planets themselves are highly unlikely to have life, there is much hope to find it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane.
Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study.
Scientific search
The science that searches and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to study how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences.
The scientific search for extraterrestrial life is being carried out both directly and indirectly. To date, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported.
Search for basic life
Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis.
In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions.
In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012.
A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis.
In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out terrestrial contamination of the meteorites, as those components would not be freely available in the form in which they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."
In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.
In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of the planet Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
Search for extraterrestrial intelligences
Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable of developing a civilization might be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that cannot be explained by natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.
Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.
The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the required level of detail to perceive it.
The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation, which telescopes may notice. Excess infrared radiation is typical of young stars, surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light-spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products.
Extrasolar planets
Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered.
The extrasolar planets so far discovered range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives.
There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions.
The nearest known exoplanet is Proxima Centauri b, located about 4.2 light-years from Earth in the southern constellation of Centaurus.
The least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.
One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs.
History and cultural impact
Cosmic pluralism
The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, and rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the Sun and all other celestial bodies revolve around Earth. However, they did not consider them as worlds. In the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
Eventually two groups emerged: the atomists, who thought that matter at both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the element earth naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only in the center, it was also the only planet in the universe.
Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talk about other worlds, they talk about places located at the center of their own systems, and with their own stellar vaults and cosmos surrounding them.
The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of Greek civilization. The Great Library of Alexandria compiled this knowledge, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge spread through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself.
The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere.
Early modern period
By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special.
The new ideas were met with resistance from the Catholic Church. Galileo was tried for the heliocentric model, which was considered heretical, and forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.
The heliocentric model was further strengthened by Sir Isaac Newton's postulation of the theory of gravity. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. The use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way.
There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate.
The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants.
19th century
Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation.
Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.
As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked the idea of the existence of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced scientists to investigate the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).
The science fiction genre, although not yet so named, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, and others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, until later, more powerful telescopes revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site.
Recent history
The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is studied by NASA, ESA, INAF, and other organisations. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place in other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.
The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people failed to understand it at the time. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, weather phenomena, or hoaxes.
Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth.
By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but the interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life at each specific celestial body to be considered in scientific terms, as it is known which features are beneficial and which are harmful for life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may still be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.
Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate.
On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.
As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science, discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".
Government responses
The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence; the discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact.
One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life."
In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program.
In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research.
He also acknowledged the possibility of existence of primitive life on other planets of the Solar System.
The French space agency has an office for the study of "unidentified aerospace phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied.
In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.
In fiction
Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect extraterrestrials to be any other way. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. Costuming and special-effects feasibility, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations lessened from the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive.
Real-life events sometimes captivate people's imagination and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype often used in works of fiction.
| Physical sciences | Basics_2 | null |
9597 | https://en.wikipedia.org/wiki/Enola%20Gay | Enola Gay | The Enola Gay () is a Boeing B-29 Superfortress bomber, named after Enola Gay Tibbets, the mother of the pilot, Colonel Paul Tibbets. On 6 August 1945, during the final stages of World War II, it became the first aircraft to drop an atomic bomb in warfare. The bomb, code-named "Little Boy", was targeted at the city of Hiroshima, Japan, and destroyed about three-quarters of the city. Enola Gay participated in the second nuclear attack as the weather reconnaissance aircraft for the primary target of Kokura. Clouds and drifting smoke resulted in Nagasaki, a secondary target, being bombed instead.
After the war, the Enola Gay returned to the United States, where it was operated from Roswell Army Air Field, New Mexico. In May 1946, it was flown to Kwajalein for the Operation Crossroads nuclear tests in the Pacific, but was not chosen to make the test drop at Bikini Atoll. Later that year, it was transferred to the Smithsonian Institution and spent many years parked at air bases exposed to the weather and souvenir hunters, before its 1961 disassembly and storage at a Smithsonian facility in Suitland, Maryland.
In the 1980s, veterans groups engaged in a call for the Smithsonian to put the aircraft on display, leading to an acrimonious debate about exhibiting the aircraft without a proper historical context. The cockpit and nose section of the aircraft were exhibited at the National Air and Space Museum (NASM) on the National Mall, for the bombing's 50th anniversary in 1995, amid controversy. Since 2003, the entire restored B-29 has been on display at NASM's Steven F. Udvar-Hazy Center. The last survivor of its crew, Theodore Van Kirk, died on 28 July 2014 at the age of 93.
World War II
Early history
The Enola Gay (Model number B-29-45-MO, Serial number 44-86292, Victor number 82) was built by the Glenn L. Martin Company (later part of Lockheed Martin) at its bomber plant in Bellevue, Nebraska, located at Offutt Field, now Offutt Air Force Base. The bomber was one of the first fifteen B-29s built to the "Silverplate" specification—of 65 eventually completed during and after World War II—giving them the primary ability to function as nuclear "weapon delivery" aircraft. These modifications included an extensively modified bomb bay with pneumatic doors and British bomb attachment and release systems, reversible pitch propellers that gave more braking power on landing, improved engines with fuel injection and better cooling, and the removal of protective armor and gun turrets.
Enola Gay was personally selected by Colonel Paul W. Tibbets Jr., the commander of the 509th Composite Group, on 9 May 1945, while still on the assembly line. The aircraft was accepted by the United States Army Air Forces (USAAF) on 18 May 1945 and assigned to the 393d Bombardment Squadron, Heavy, 509th Composite Group. Crew B-9, commanded by Captain Robert A. Lewis, took delivery of the bomber and flew it from Omaha to the 509th base at Wendover Army Air Field, Utah, on 14 June 1945.
Thirteen days later, the aircraft left Wendover for Guam, where it received a bomb-bay modification, and flew to North Field, Tinian, on 6 July. It was initially given the Victor (squadron-assigned identification) number 12, but on 1 August, was given the circle R tail markings of the 6th Bombardment Group as a security measure and had its Victor number changed to 82 to avoid misidentification with actual 6th Bombardment Group aircraft. During July, the bomber made eight practice or training flights and flew two missions, on 24 and 26 July, to drop pumpkin bombs on industrial targets at Kobe and Nagoya. Enola Gay was used on 31 July on a rehearsal flight for the actual mission.
The partially assembled Little Boy gun-type fission weapon L-11 was contained inside a wooden crate that was secured to the deck of the USS Indianapolis. Unlike the six uranium-235 target discs, which were later flown to Tinian on three separate aircraft arriving 28 and 29 July, the assembled projectile with the nine uranium-235 rings installed was shipped in a single lead-lined steel container that was locked to brackets welded to the deck of Captain Charles B. McVay III's quarters. Both the L-11 and the projectile were dropped off at Tinian on 26 July 1945.
Hiroshima mission
On 5 August 1945, during preparation for the first atomic mission, Tibbets assumed command of the aircraft and named it after his mother, Enola Gay Tibbets, who, in turn, had been named for the heroine of a novel.
In the early morning hours, just prior to the 6 August mission, Tibbets had a young Army Air Forces maintenance man, Private Nelson Miller, paint the name just under the pilot's window. Regularly assigned aircraft commander Robert A. Lewis was unhappy to be displaced by Tibbets for this important mission and became furious when he arrived at the aircraft on the morning of 6 August to see it painted with the now-famous nose art.
Hiroshima was the primary target of the first nuclear bombing mission on 6 August, with Kokura and Nagasaki as alternative targets. Enola Gay, piloted by Tibbets, took off from North Field, in the Northern Mariana Islands, about six hours' flight time from Japan, accompanied by two other B-29s, The Great Artiste, carrying instrumentation, and a then-nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, to take photographs. The director of the Manhattan Project, Major General Leslie R. Groves Jr., wanted the event recorded for posterity, so the takeoff was illuminated by floodlights. Before taxiing, Tibbets leaned out the window to direct the bystanders out of the way. On request, he gave a friendly wave for the cameras.
After leaving Tinian, the three aircraft made their way separately to Iwo Jima, where they rendezvoused and set course for Japan. The aircraft arrived over the target in clear visibility. Navy Captain William S. "Deak" Parsons of Project Alberta, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area.
The release at 08:15 (Hiroshima time) went as planned, and the Little Boy took 53 seconds to fall from the aircraft to its predetermined detonation height above the city. Enola Gay had traveled a considerable distance before it felt the shock waves from the blast. Although buffeted by the shock, neither Enola Gay nor The Great Artiste was damaged.
The detonation created a blast equivalent to about 15 kilotons of TNT. The U-235 weapon was considered very inefficient, with only 1.7% of its fissile material reacting. The radius of total destruction was about one mile (1.6 km), with resulting fires across 4.4 square miles (11 km2). Americans estimated that 4.7 square miles (12 km2) of the city were destroyed. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. Some 70,000–80,000 people, 30% of the city's population, were killed by the blast and resultant firestorm, and another 70,000 were injured. Of those killed, 20,000 were soldiers and 20,000 were Korean slave laborers.
Enola Gay returned safely to its base on Tinian to great fanfare, touching down at 2:58 pm, after 12 hours 13 minutes. The Great Artiste and Necessary Evil followed at short intervals. Several hundred people, including journalists and photographers, had gathered to watch the planes return. Tibbets was the first to disembark and was presented with the Distinguished Service Cross on the spot.
Nagasaki mission
The Hiroshima mission was followed by another atomic strike. Originally scheduled for 11 August, it was brought forward by two days to 9 August owing to a forecast of bad weather. This time, a nuclear bomb code-named "Fat Man" was carried by B-29 Bockscar, piloted by Major Charles W. Sweeney. Enola Gay, flown by Captain George Marquardt's Crew B-10, was the weather reconnaissance aircraft for Kokura, the primary target. Enola Gay reported clear skies over Kokura, but by the time Bockscar arrived, the city was obscured by smoke from fires from the conventional bombing of Yahata by 224 B-29s the day before. After three unsuccessful passes, Bockscar diverted to its secondary target, Nagasaki, where it dropped its bomb. In contrast to the Hiroshima mission, the Nagasaki mission has been described as tactically botched, although the mission did meet its objectives. The crew encountered a number of problems in execution and had very little fuel by the time they landed at the emergency backup landing site Yontan Airfield on Okinawa.
Crews
Hiroshima mission
Enola Gay's crew on 6 August 1945 consisted of 12 men. The crew was:
Colonel Paul W. Tibbets Jr. – pilot and aircraft commander
Captain Robert A. Lewis – co-pilot; Enola Gay's regularly assigned aircraft commander*
Major Thomas Ferebee – bombardier
Captain Theodore "Dutch" Van Kirk – navigator
Captain William S. "Deak" Parsons, USN – weaponeer and mission commander
First Lieutenant Jacob Beser – radar countermeasures (also the only man to fly on both of the nuclear bombing aircraft)
Second Lieutenant Morris R. Jeppson – assistant weaponeer
Staff Sergeant Robert "Bob" Caron – tail gunner*
Staff Sergeant Wyatt E. Duzenbury – flight engineer*
Sergeant Joe S. Stiborik – radar operator*
Sergeant Robert H. Shumard – assistant flight engineer*
Private First Class Richard H. Nelson – VHF radio operator*
Asterisks denote regular crewmen of the Enola Gay.
Of mission commander Parsons, it was said: "There is no one more responsible for getting this bomb out of the laboratory and into some form useful for combat operations than Captain Parsons, by his plain genius in the ordnance business."
Nagasaki mission
For the Nagasaki mission, Enola Gay was flown by Crew B-10, normally assigned to Up An' Atom:
Captain George W. Marquardt – aircraft commander
Second Lieutenant James M. Anderson – co-pilot
Second Lieutenant Russell Gackenbach – navigator
Captain James W. Strudwick – bombardier
Technical Sergeant James R. Corliss – flight engineer
Sergeant Warren L. Coble – radio operator
Sergeant Joseph M. DiJulio – radar operator
Sergeant Melvin H. Bierman – tail gunner
Sergeant Anthony D. Capua Jr. – assistant engineer/scanner
Subsequent history
On 6 November 1945, Lewis flew the Enola Gay back to the United States, arriving at the 509th's new base at Roswell Army Air Field, New Mexico, on 8 November. On 29 April 1946, Enola Gay left Roswell as part of the Operation Crossroads nuclear weapons tests in the Pacific. It flew to Kwajalein Atoll on 1 May. It was not chosen to make the test drop at Bikini Atoll and left Kwajalein on 1 July, the date of the test, reaching Fairfield-Suisun Army Air Field, California, the next day.
The decision was made to preserve the Enola Gay, and on 24 July 1946, the aircraft was flown to Davis–Monthan Air Force Base, Tucson, Arizona, in preparation for storage. On 30 August 1946, the title to the aircraft was transferred to the Smithsonian Institution and the Enola Gay was removed from the USAAF inventory. From 1946 to 1961, the Enola Gay was put into temporary storage at a number of locations. It was at Davis-Monthan from 1 September 1946 until 3 July 1949, when it was flown to Orchard Place Air Field, Park Ridge, Illinois, by Tibbets for acceptance by the Smithsonian. It was moved to Pyote Air Force Base, Texas, on 12 January 1952, and then to Andrews Air Force Base, Maryland, on 2 December 1953, because the Smithsonian had no storage space for the aircraft.
It was hoped that the Air Force would guard the plane, but, lacking hangar space, it was left outdoors on a remote part of the air base, exposed to the elements. Souvenir hunters broke in and removed parts. Insects and birds then gained access to the aircraft. Paul E. Garber of the Smithsonian Institution became concerned about the Enola Gay's condition, and on 10 August 1960, Smithsonian staff began dismantling the aircraft. The components were transported to the Smithsonian storage facility at Suitland, Maryland, on 21 July 1961.
The Enola Gay remained at Suitland for many years. By the early 1980s, two veterans of the 509th, Don Rehl and his former navigator in the 509th, Frank B. Stewart, began lobbying for the aircraft to be restored and put on display. They enlisted Tibbets and Senator Barry Goldwater in their campaign. In 1983, Walter J. Boyne, a former B-52 pilot with the Strategic Air Command, became director of the National Air and Space Museum, and he made the Enola Gay's restoration a priority. Looking at the aircraft, Tibbets recalled, was a "sad meeting. [My] fond memories, and I don't mean the dropping of the bomb, were the numerous occasions I flew the airplane ... I pushed it very, very hard and it never failed me ... It was probably the most beautiful piece of machinery that any pilot ever flew."
Restoration
Restoration of the bomber began on 5 December 1984, at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland-Silver Hill, Maryland. The propellers that were used on the bombing mission were later shipped to Texas A&M University. One of these propellers was trimmed down for use in the university's Oran W. Nicks Low Speed Wind Tunnel, where the lightweight aluminum variable-pitch propeller is powered by a 1,250 kVA electric motor. Two engines were rebuilt at Garber and two at the San Diego Air & Space Museum. Some parts and instruments had been removed and could not be located; replacements were found or fabricated, and marked so that future curators could distinguish them from the original components.
Exhibition controversy
The Enola Gay became the center of a controversy at the Smithsonian Institution when the museum planned to put its fuselage on public display in 1995 as part of an exhibit commemorating the 50th anniversary of the atomic bombing of Hiroshima. The exhibit, The Crossroads: The End of World War II, the Atomic Bomb and the Cold War, was drafted by the Smithsonian's National Air and Space Museum staff, and arranged around the restored Enola Gay.
Critics of the planned exhibit, especially those of the American Legion and the Air Force Association, charged that the exhibit focused too much attention on the Japanese casualties inflicted by the nuclear bomb, rather than on the motives for the bombing or the discussion of the bomb's role in ending the conflict with Japan. The exhibit brought to national attention many long-standing academic and political issues related to retrospective views of the bombings. After attempts to revise the exhibit to meet the satisfaction of competing interest groups, the exhibit was canceled on 30 January 1995. Martin O. Harwit, Director of the National Air and Space Museum, was compelled to resign over the controversy.
The forward fuselage went on display on 28 June 1995. On 2 July 1995, three people were arrested for throwing ash and human blood on the aircraft's fuselage, following an earlier incident in which a protester had thrown red paint over the gallery's carpeting. The exhibition closed on 18 May 1998 and the fuselage was returned to the Garber Facility for final restoration.
Complete restoration and display
Restoration work began in 1984 and eventually required 300,000 staff hours. While the fuselage was on display from 1995 to 1998, work continued on the remaining unrestored components. The aircraft was shipped in pieces to the National Air and Space Museum's Steven F. Udvar-Hazy Center in Chantilly, Virginia, from March to June 2003; the fuselage and wings were reunited for the first time since 1960 on 10 April 2003, and assembly was completed on 8 August 2003. The aircraft has been on display at the Udvar-Hazy Center since the museum annex opened on 15 December 2003. As a result of the earlier controversy, the signage around the aircraft provides only the same succinct technical data as is provided for other aircraft in the museum, without discussion of the controversial issues.
The display of the Enola Gay without reference to the historical context of World War II, the Cold War, or the development and deployment of nuclear weapons aroused controversy. A petition from a group calling themselves the Committee for a National Discussion of Nuclear History and Current Policy bemoaned the display of Enola Gay as a technological achievement, which it described as an "extraordinary callousness toward the victims, indifference to the deep divisions among American citizens about the propriety of these actions, and disregard for the feelings of most of the world's peoples". It attracted signatures from notable figures including historian Gar Alperovitz, social critic Noam Chomsky, whistleblower Daniel Ellsberg, physicist Joseph Rotblat, writer Kurt Vonnegut, producer Norman Lear, actor Martin Sheen and filmmaker Oliver Stone.
| Technology | Specific aircraft | null |
9598 | https://en.wikipedia.org/wiki/Electronvolt | Electronvolt | In physics, an electronvolt (symbol eV), also written electron-volt and electron volt, is the measure of an amount of kinetic energy gained by a single electron accelerating through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equal to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 revision of the SI, this sets 1 eV equal to the exact value 1.602176634×10−19 J.
Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V.
Definition and use
An electronvolt is the amount of energy gained or lost by a single electron when it moves through an electric potential difference of one volt. Hence, it has a value of one volt (1 J/C) multiplied by the elementary charge e = 1.602176634×10−19 C. Therefore, one electronvolt is equal to 1.602176634×10−19 J.
The electronvolt (eV) is a unit of energy, but is not an SI unit. It is a commonly used unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli- (10−3), kilo- (103), mega- (106), giga- (109), tera- (1012), peta- (1015) or exa- (1018), the respective symbols being meV, keV, MeV, GeV, TeV, PeV and EeV. The SI unit of energy is the joule (J).
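As a quick illustration of this definition (not part of the original article), here is a minimal Python sketch converting between electronvolts and joules using the exact 2019 SI value of the elementary charge; the function names are our own:

# Exact value of the elementary charge under the 2019 SI revision, in coulombs.
ELEMENTARY_CHARGE = 1.602176634e-19

def ev_to_joules(energy_ev):
    # 1 eV is the elementary charge multiplied by 1 V, so scale directly.
    return energy_ev * ELEMENTARY_CHARGE

def joules_to_ev(energy_j):
    return energy_j / ELEMENTARY_CHARGE

print(ev_to_joules(1.0))    # 1.602176634e-19 J
print(ev_to_joules(1.0e6))  # 1 MeV = 1.602176634e-13 J
print(joules_to_ev(1.0))    # one joule is about 6.24e18 eV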
In some older documents, and in the name Bevatron, the symbol BeV is used, where the B stands for billion. The symbol BeV is therefore equivalent to GeV, though neither is an SI unit.
Relation to other physical properties and units
In the fields of physics in which the electronvolt is used, other quantities are typically measured in units derived from the electronvolt by combining it with fundamental constants of importance in the theory, such as the speed of light c and the reduced Planck constant ħ.
Mass
By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum (from E = mc2). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c2 is: 1 eV/c2 = 1.783×10−36 kg.
For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the GeV/c2 a convenient unit of mass for particle physics: 1 GeV/c2 = 1.783×10−27 kg.
The atomic mass constant (mu), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton. To convert to the electronvolt mass-equivalent, use the formula: mu ≈ 931.494 MeV/c2.
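The mass conversions above can be checked with a short sketch (ours, not from the source); the constants are the exact SI values, and the electron and proton masses are the rounded figures quoted in the text:

C = 2.99792458e8                     # speed of light in vacuum, m/s (exact)
ELEMENTARY_CHARGE = 1.602176634e-19  # elementary charge, C (exact)

def ev_per_c2_to_kg(mass_ev_c2):
    # m = E / c^2, with the energy first converted from eV to joules.
    return mass_ev_c2 * ELEMENTARY_CHARGE / C**2

print(ev_per_c2_to_kg(1.0))      # ~1.783e-36 kg per eV/c^2
print(ev_per_c2_to_kg(0.511e6))  # electron: ~9.11e-31 kg
print(ev_per_c2_to_kg(0.938e9))  # proton: ~1.67e-27 kg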
Momentum
By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum using the unit electronvolt.
The energy–momentum relation
E2 = (pc)2 + (mc2)2,
in natural units (with c = 1) written as
E2 = p2 + m2,
is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≈ p in high-energy physics, such that an applied energy expressed in the unit eV conveniently results in a numerically approximately equivalent change of momentum when expressed with the unit eV/c.
The dimension of momentum is M L T−1. The dimension of energy is M L2 T−2. Dividing a unit of energy (such as eV) by a fundamental constant (such as the speed of light) that has the dimension of velocity (L T−1) facilitates the required conversion for using a unit of energy to quantify momentum.
For example, if the momentum p of an electron is given in eV/c, then the conversion to the MKS system of units is achieved by multiplying by the elementary charge and dividing by the speed of light: p [kg·m/s] = p [eV/c] × (1.602176634×10−19 / 2.99792458×108).
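The same constants give the momentum conversion in code; a minimal sketch, with 1 GeV/c chosen by us as an illustrative value:

C = 2.99792458e8
ELEMENTARY_CHARGE = 1.602176634e-19

def ev_per_c_to_si(p_ev_c):
    # p = E / c: convert eV to joules, then divide by the speed of light.
    return p_ev_c * ELEMENTARY_CHARGE / C

print(ev_per_c_to_si(1.0e9))  # 1 GeV/c ≈ 5.34e-19 kg·m/s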
Distance
In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: . In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented using a unit of inverse particle mass.
Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: ħ = 6.582119569×10−16 eV·s and ħc = 197.3269804 eV·nm.
The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, a meson with a lifetime of 1.530(9) picoseconds has a mean decay length of cτ ≈ 459 μm and a decay width of about 4.3×10−4 eV.
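A sketch of this lifetime-to-width conversion, using ħ expressed in eV·s; the 1.530 ps lifetime is the example from the text:

HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV·s

def decay_width_ev(lifetime_s):
    # Γ = ħ / τ, giving the width in eV for a lifetime in seconds.
    return HBAR_EV_S / lifetime_s

print(decay_width_ev(1.530e-12))  # ≈ 4.30e-4 eV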
Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.
Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: E (eV) ≈ 1239.84 / λ (nm).
Temperature
In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: T = E / kB, so that 1 eV corresponds to about 11,604.5 K,
where kB is the Boltzmann constant.
The kB is assumed when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma is 15 keV (kiloelectronvolts), which is equal to 174 MK (megakelvin).
As an approximation: kBT is about 0.025 eV (≈ 1/40 eV) at a temperature of 20 °C (293 K).
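These temperature conversions are easy to script; a minimal sketch (ours) using the exact value of the Boltzmann constant in eV/K:

K_B_EV_PER_K = 8.617333262e-5  # Boltzmann constant, eV/K (exact)

def ev_to_kelvin(energy_ev):
    # T = E / kB: divide the energy in eV by Boltzmann's constant.
    return energy_ev / K_B_EV_PER_K

print(ev_to_kelvin(1.0))   # 1 eV ≈ 11604.5 K
print(ev_to_kelvin(15e3))  # 15 keV plasma ≈ 1.74e8 K (174 MK)
print(293 * K_B_EV_PER_K)  # room temperature ≈ 0.025 eV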
Wavelength
The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ,
where h is the Planck constant and c is the speed of light. This reduces to E (eV) ≈ 1239.84 / λ (nm).
A photon of green light, with a wavelength of about 532 nm, would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength about 1240 nm, or a frequency of about 242 THz.
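A sketch of the photon relation, folding h and c into the single constant hc ≈ 1239.84 eV·nm (the example values are the ones quoted above):

HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def photon_energy_ev(wavelength_nm):
    # E = hc / λ for a photon, with λ in nanometres and E in eV.
    return HC_EV_NM / wavelength_nm

def photon_wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

print(photon_energy_ev(532))      # green light: ≈ 2.33 eV
print(photon_wavelength_nm(1.0))  # 1 eV: ≈ 1240 nm (infrared)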
Scattering experiments
In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.
Energy comparisons
Molar energy
One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96,485 C/mol), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.
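A sketch of this molar-energy relation (ours, not from the source); the Faraday constant is derived here from the exact Avogadro and elementary-charge values:

AVOGADRO = 6.02214076e23                # 1/mol (exact)
ELEMENTARY_CHARGE = 1.602176634e-19     # C (exact)
FARADAY = AVOGADRO * ELEMENTARY_CHARGE  # ≈ 96485 C/mol

def molar_energy_joules(energy_ev, moles):
    # Energy in joules of n moles of particles, each carrying E eV: E*F*n.
    return energy_ev * FARADAY * moles

print(molar_energy_joules(1.0, 1.0))  # ≈ 96485 J ≈ 96.5 kJ per mole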
| Physical sciences | Energy, power, force and pressure | null |
9601 | https://en.wikipedia.org/wiki/Electrochemistry | Electrochemistry | Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically conducting phase (typically an external electrical circuit, but not necessarily, as in electroless plating) between electrodes separated by an ionically conducting and electronically insulating electrolyte (or ionic species in a solution).
When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an electrochemical reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction.
History
16th–18th century
Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the "Father of Magnetism." He discovered various methods for producing and strengthening magnets.
In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as a source for experiments with electricity.
By the mid-18th century the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: "vitreous" (from the Latin for "glass"), or positive, electricity; and "resinous," or negative, electricity. This was the two-fluid theory of electricity, which was to be opposed by Benjamin Franklin's one-fluid theory later in the century.
In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England.
In the late 18th century, the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity in his 1791 essay "De Viribus Electricitatis in Motu Musculari Commentarius" (Latin for Commentary on the Effect of Electricity on Muscular Motion), in which he proposed a "nerveo-electrical substance" in biological life forms.
In his essay Galvani concluded that animal tissue contained a heretofore neglected innate, vital force, which he termed "animal electricity," which activated nerves and muscles spanned by metal probes. He believed that this new force was a form of electricity in addition to the "natural" form produced by lightning or by the electric eel and torpedo ray as well as the "artificial" form produced by friction (i.e., static electricity).
Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an "animal electric fluid," replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. Nevertheless, Volta's experimentation led him to develop the first practical battery, which took advantage of the relatively high energy (weak bonding) of zinc and could deliver an electrical current for much longer than any other device known at the time.
19th century
In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis using Volta's battery. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck.
By the 1810s, William Hyde Wollaston made improvements to the galvanic cell.
Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of metallic sodium and potassium by electrolysis of their molten salts, and of the alkaline earth metals from theirs, in 1808.
Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment and formulated its results mathematically.
In 1821, Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the juncture points of two dissimilar metals when there is a temperature difference between the joints.
In 1827, the German scientist Georg Ohm expressed his law in his famous book "Die galvanische Kette, mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically), in which he gave his complete theory of electricity.
In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by introducing copper ions into the solution near the positive electrode and thus eliminating hydrogen gas generation. Later results revealed that at the other electrode, amalgamated zinc (i.e., zinc alloyed with mercury) would produce a higher voltage.
William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc–carbon cell.
Svante Arrhenius published his thesis in 1884 on Recherches sur la conductibilité galvanique des électrolytes (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions.
In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina.
In 1894, Wilhelm Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids.
Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the voltage produced could be used to calculate the free energy change in the chemical reaction producing the voltage. He constructed an equation, known as Nernst equation, which related the voltage of a cell to its properties.
In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. In 1898, he explained the reduction of nitrobenzene in stages at the cathode and this became the model for other similar reduction processes.
20th century
In 1902, The Electrochemical Society (ECS) was founded.
In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron.
In 1911, Harvey Fletcher, working with Millikan, was successful in measuring the charge on the electron by replacing the water droplets used by Millikan, which quickly evaporated, with oil droplets. Within one day, Fletcher measured the charge of an electron to several decimal places.
In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis.
In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis.
A year later, in 1949, the International Society of Electrochemistry (ISE) was founded.
By the 1960s–1970s quantum electrochemistry was developed by Revaz Dogonadze and his students.
Principles
Oxidation and reduction
The term "redox" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion, changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease.
For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond.
The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are "OIL RIG" (Oxidation Is Loss, Reduction Is Gain) and "LEO" the lion says "GER" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state.
The atom or molecule which loses electrons is known as the reducing agent, or reductant, and the substance which accepts the electrons is called the oxidizing agent, or oxidant. Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a weaker bond and higher electronegativity, and thus accepts electrons even better) than oxygen.
For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction.
Balancing redox reactions
Electrochemical reactions in water are better analyzed by using the ion-electron method, where H+ ions, OH− ions, H2O, and electrons (to compensate for the oxidation changes) are added to the cell's half-reactions for oxidation and reduction.
Acidic medium
In acidic medium, H+ ions and water are added to balance each half-reaction.
For example, when manganese reacts with sodium bismuthate.
Unbalanced reaction: Mn2+ + NaBiO3 → Bi3+ + MnO4−
Oxidation: 4 H2O + Mn2+ → MnO4− + 8 H+ + 5 e−
Reduction: 2 e− + 6 H+ + BiO3− → Bi3+ + 3 H2O
Finally, the reaction is balanced by multiplying the stoichiometric coefficients so the numbers of electrons in both half reactions match
8 H2O + 2 Mn2+ → 2 MnO4− + 16 H+ + 10 e−
10 e− + 30 H+ + 5 BiO3− → 5 Bi3+ + 15 H2O
and adding the resulting half reactions to give the balanced reaction:
14 H+ + 2 Mn2+ + 5 NaBiO3 → 7 H2O + 2 MnO4− + 5 Bi3+ + 5 Na+
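The result can be verified mechanically by counting atoms and charge on each side; the following Python sketch (species and coefficients hard-coded by us from the balanced equation above) performs that check:

from collections import Counter

# Each entry: (stoichiometric coefficient, element counts, ionic charge).
reactants = [
    (14, {"H": 1}, +1),                   # H+
    (2,  {"Mn": 1}, +2),                  # Mn2+
    (5,  {"Na": 1, "Bi": 1, "O": 3}, 0),  # NaBiO3
]
products = [
    (7, {"H": 2, "O": 1}, 0),    # H2O
    (2, {"Mn": 1, "O": 4}, -1),  # MnO4-
    (5, {"Bi": 1}, +3),          # Bi3+
    (5, {"Na": 1}, +1),          # Na+
]

def totals(side):
    # Sum atom counts and total charge over one side of the equation.
    atoms, charge = Counter(), 0
    for coeff, elements, q in side:
        charge += coeff * q
        for element, count in elements.items():
            atoms[element] += coeff * count
    return atoms, charge

# Both atom counts and net charge (+18 here) must match when balanced.
assert totals(reactants) == totals(products)
print(totals(reactants))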
Basic medium
In basic medium, OH− ions and water are added to balance each half-reaction. For example, in a reaction between potassium permanganate and sodium sulfite:
Unbalanced reaction: KMnO4 + Na2SO3 + H2O → MnO2 + Na2SO4 + KOH
Reduction: 3 e− + 2 H2O + MnO4− → MnO2 + 4 OH−
Oxidation: 2 OH− + SO32− → SO42− + H2O + 2 e−
Here, 'spectator ions' (K+, Na+) were omitted from the half-reactions. By multiplying the stoichiometric coefficients so the numbers of electrons in both half reactions match:
6 e− + 4 H2O + 2 MnO4− → 2 MnO2 + 8 OH−
6 OH− + 3 SO32− → 3 SO42− + 3 H2O + 6 e−
the balanced overall reaction is obtained:
2 KMnO4 + 3 Na2SO3 + H2O → 2 MnO2 + 3 Na2SO4 + 2 KOH
Neutral medium
The same procedure as used in acidic medium can be applied, for example, to balance the complete combustion of propane:
Unbalanced reaction: C3H8 + O2 → CO2 + H2O
Reduction: 4 H+ + O2 + 4 e− → 2 H2O
Oxidation: 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+
By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match:
20 H+ + 5 O2 + 20 e− → 10 H2O
6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+
the balanced equation is obtained:
C3H8 + 5 O2 → 3 CO2 + 4 H2O
Electrochemical cells
An electrochemical cell is a device that produces an electric current from energy released by a spontaneous redox reaction. This kind of cell includes the Galvanic cell or Voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted experiments on chemical reactions and electric current during the late 18th century.
Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move.
The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light.
A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell.
The half reactions in a Daniell cell are as follows:
Zinc electrode (anode): Zn → Zn2+ + 2 e−
Copper electrode (cathode): Cu2+ + 2 e− → Cu
In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and deposit at the copper cathode as an electrodeposit. This cell forms a simple battery, as it will spontaneously drive electrons from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the zinc electrode and the formation of copper ions at the copper electrode.
To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while minimizing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte.
A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode.
The electrochemical cell voltage is also referred to as electromotive force or emf.
A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell:
Zn | Zn2+ (1 M) || Cu2+ (1 M) | Cu
First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the boundary between the phases (oxidation changes). The double vertical lines represent the salt bridge of the cell. Finally, the oxidized form of the metal to be reduced at the cathode is written, separated from its reduced form by the vertical line. The electrolyte concentration is given, as it is an important variable in determining the exact cell potential.
Standard electrode potential
To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). The standard hydrogen electrode undergoes the reaction
2 H+ + 2 e− → H2
which is shown as a reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H+ activity equal to 1 (usually assumed to be [H+] = 1 mol/liter, i.e. pH = 0).
The SHE electrode can be connected to any other electrode by a salt bridge and an external circuit to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, its species are more easily reduced, so the electrode acts as the cathode and forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more easily oxidized than the SHE and acts as the anode (such as Zn in ZnSO4, where the standard electrode potential is −0.76 V).
Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode.
E°cell = E°red (cathode) – E°red (anode) = E°red (cathode) + E°oxi (anode)
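A minimal sketch of this bookkeeping (ours, not from the source), using the standard reduction potentials quoted above (Cu2+/Cu at +0.337 V, Zn2+/Zn at −0.76 V):

# Standard reduction potentials in volts versus the SHE.
E_RED = {"Cu2+/Cu": 0.337, "Zn2+/Zn": -0.76}

def standard_cell_potential(cathode, anode):
    # E°cell = E°red(cathode) − E°red(anode).
    return E_RED[cathode] - E_RED[anode]

# Daniell cell: copper cathode, zinc anode.
print(standard_cell_potential("Cu2+/Cu", "Zn2+/Zn"))  # ≈ 1.10 V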
For example, the standard electrode potential for a copper electrode is:
Cell diagram
Pt | H2 (1 atm) | H+ (1 M) || Cu2+ (1 M) | Cu
E°cell = E°red (cathode) – E°red (anode)
At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode giving
Ecell = E°(Cu2+/Cu) – E°(H+/H2)
Or,
E°(Cu2+/Cu) = 0.34 V
Changes in the stoichiometric coefficients of a balanced cell equation will not change the E°red value because the standard electrode potential is an intensive property.
Spontaneity of redox reaction
During operation of an electrochemical cell, chemical energy is transformed into electrical energy. This can be expressed mathematically as the product of the cell's emf Ecell measured in volts (V) and the electric charge Qele,trans transferred through the external circuit.
Electrical energy = EcellQele,trans
Qele,trans is the cell current integrated over time and measured in coulombs (C); it can also be determined by multiplying the total number ne of electrons transferred (measured in moles) times Faraday's constant (F).
The emf of the cell at zero current is the maximum possible emf. It can be used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation:
Wmax = Wel = −neFEcell,
where work is defined as positive when it increases the energy of the system.
Since the free energy is the maximum amount of work that can be extracted from a system, one can write: ΔG = −neFEcell.
A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis.
A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and
hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy.
Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example.
The relation between the equilibrium constant, K, and the Gibbs free energy for an electrochemical cell is expressed as follows:
ΔG° = −RT ln K = −neFE°cell.
Rearranging to express the relation between standard potential and equilibrium constant yields
E°cell = (RT / neF) ln K.
At T = 298 K, the previous equation can be rewritten using the Briggsian (base-10) logarithm as follows: E°cell = (0.05916 V / ne) log K.
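A sketch of this relation, solving for K given a standard cell potential; the Daniell cell value of about 1.10 V is used as an illustrative input:

import math

R = 8.3145   # gas constant, J/(K·mol)
F = 96485.0  # Faraday constant, C/mol
T = 298.0    # temperature, K

def equilibrium_constant(e_standard, n_electrons):
    # From ΔG° = −nFE° = −RT ln K, so K = exp(nFE° / RT).
    return math.exp(n_electrons * F * e_standard / (R * T))

print(equilibrium_constant(1.10, 2))  # Daniell cell: K ≈ 1.6e37, strongly product-favored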
Cell EMF dependency on changes in concentration
Nernst equation
The standard potential of an electrochemical cell refers to standard conditions for all of the reactants (for which the free energy change is ΔG°). When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the late 19th century, the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential.
In the late 19th century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy: ΔG = ΔG° + RT ln Q.
Here ΔG is the change in Gibbs free energy, ΔG° is the standard Gibbs free energy change (the value when Q is equal to 1), T is the absolute temperature (in kelvin), R is the gas constant, and Q is the reaction quotient, which can be calculated by dividing concentrations of products by those of reactants, each raised to the power of its stoichiometric coefficient, using only those products and reactants that are aqueous or gaseous.
Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity.
Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes: −neFΔE = −neFΔE° + RT ln Q.
Here ne is the number of electrons (in moles), F is the Faraday constant (in coulombs/mole), and ΔE is the cell potential (in volts).
Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: ΔE = ΔE° − (RT / neF) ln Q.
Assuming standard conditions (T = 298 K or 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed on a base-10 logarithm as shown below: ΔE = ΔE° − (0.05916 V / ne) log Q.
Note that RT/F is also known as the thermal voltage VT and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10.
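A sketch of the Nernst equation as a function (ours); the Daniell-cell numbers in the comment are illustrative, not from the source:

import math

R = 8.3145   # gas constant, J/(K·mol)
F = 96485.0  # Faraday constant, C/mol

def nernst_potential(e_standard, n_electrons, q, temperature=298.0):
    # E = E° − (RT / nF) ln Q.
    return e_standard - (R * temperature / (n_electrons * F)) * math.log(q)

# Daniell cell with [Zn2+] = 1.0 M and [Cu2+] = 0.1 M, so Q = 10:
print(nernst_potential(1.10, 2, 10.0))  # ≈ 1.07 V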
Concentration cells
A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells.
An example is an electrochemical cell, where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both can undergo the same chemistry (although the reaction proceeds in reverse at the anode)
Cu2+ + 2 e− → Cu
Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. Reduction will take place in the cell's compartment where the concentration is higher and oxidation will occur on the more dilute side.
The following cell diagram describes the concentration cell mentioned above:
Cu | Cu2+ (0.05 M) || Cu2+ (2.0 M) | Cu
where the half cell reactions for oxidation and reduction are:
Oxidation: Cu → Cu2+ (0.05 M) + 2 e−
Reduction: Cu2+ (2.0 M) + 2 e− → Cu
Overall reaction: Cu2+ (2.0 M) → Cu2+ (0.05 M)
The cell's emf is calculated through the Nernst equation as follows: E = E° − (0.05916 V / 2) log([Cu2+]dilute / [Cu2+]concentrated).
The value of E° in this kind of cell is zero, as electrodes and ions are the same in both half-cells.
After substituting the values from the case mentioned, it is possible to calculate the cell's potential: E = 0 − (0.05916 V / 2) log(0.05 / 2.0) ≈ 0.0474 V,
or equivalently, using the natural logarithm: E = −(RT / 2F) ln(0.05 / 2.0) ≈ 0.0474 V.
However, this value is only approximate, as the reaction quotient is defined in terms of ion activities, which are only approximated by the concentrations used here.
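A sketch reproducing the copper concentration-cell calculation above (E° = 0, n = 2, concentrations standing in for activities):

import math

R, F = 8.3145, 96485.0

def concentration_cell_emf(c_dilute, c_concentrated, n_electrons, temperature=298.0):
    # E = 0 − (RT / nF) ln Q, with Q = c_dilute / c_concentrated.
    return -(R * temperature / (n_electrons * F)) * math.log(c_dilute / c_concentrated)

print(concentration_cell_emf(0.05, 2.0, 2))  # ≈ 0.047 V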
The Nernst equation plays an important role in understanding electrical effects in cells and organelles. Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell.
Battery
Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells.
The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead–acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is that if left uncharged, acid will crystallize within the lead plates of the battery, rendering it useless. These batteries last an average of 3 years with daily use, but it is not unheard of for a lead–acid battery to still be functional after 7–10 years. Lead–acid cells continue to be widely used in automobiles.
All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low temperature performance. The lithium metal battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices.
The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen and oxygen directly into electrical energy with a much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system.
Corrosion
Corrosion is an electrochemical process, which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass.
Iron corrosion
For iron rust to occur the metal has to be in contact with oxygen and water. The chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following:
Electron transfer (reduction-oxidation)
One area on the surface of the metal acts as the anode, which is where the oxidation (corrosion) occurs. At the anode, the metal gives up electrons.
Fe → Fe2+ + 2 e−
Electrons are transferred from iron, reducing oxygen in the atmosphere into water on the cathode, which is placed in another region of the metal.
O2 + 4 H+ + 4 e− → 2 H2O
Global reaction for the process:
2 Fe + O2 + 4 H+ → 2 Fe2+ + 2 H2O
Standard emf for iron rusting:
E° = E° (cathode) − E° (anode)
E° = 1.23 V − (−0.44 V) = 1.67 V
Iron corrosion takes place in an acid medium; H+ ions come from reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. Fe2+ ions oxidize further, following this equation:
4 Fe2+ + O2 + (4 + 2x) H2O → 2 Fe2O3·xH2O + 8 H+
Iron(III) oxide hydrate is known as rust. The concentration of water associated with the iron oxide varies; thus the chemical formula is represented by Fe2O3·xH2O.
An electric circuit is formed as passage of electrons and ions occurs; thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water.
Corrosion of common metals
Coinage metals, such as copper and silver, slowly corrode through use.
A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high sulfur foods such as eggs or the low levels of sulfur species in the air develop a layer of black silver sulfide.
Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia.
Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface, which bonds with the underlying metal. This thin oxide layer protects the underlying bulk of the metal from the air preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized.
Prevention of corrosion
Anodic regions dissolve and destroy the structural integrity of the metal, so attempts to save a metal from becoming anodic are of two general types.
While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur.
Coating
Metals can be coated with paint or other less conductive metals (passivation). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction.
Sacrificial anodes
A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, and thus spares it from corrosion. It is called "sacrificial" because the anode dissolves and has to be replaced periodically.
Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well but zinc is the least expensive useful metal.
To protect pipelines, an ingot of buried or exposed magnesium (or zinc) is buried beside the pipeline and is connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals new ingots are buried to replace those dissolved.
Electrolysis
The spontaneous redox reactions of a conventional battery produce electricity through the different reduction potentials of the cathode and anode in the electrolyte. However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell.
Electrolysis of molten sodium chloride
When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially this process takes place in a special cell called a Downs cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell.
Reactions that take place in a Downs cell are the following:
Anode (oxidation): 2 Cl− → Cl2 + 2 e−
Cathode (reduction): 2 Na+ + 2 e− → 2 Na
Overall reaction: 2 Na+ + 2 Cl− → 2 Na + Cl2
This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in mineral dressing and metallurgy industries.
The emf for this process is approximately −4 V (E° = E°red(Na+/Na) − E°red(Cl2/Cl−) = −2.71 V − 1.36 V ≈ −4.07 V), indicating a (very) non-spontaneous process. In order for this reaction to occur the power supply should provide at least a potential difference of 4 V. However, larger voltages must be used for this reaction to occur at a high rate.
Electrolysis of water
Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously, as the Gibbs free energy change for the process at standard conditions is very positive: about 474.4 kJ for the reaction as written (2 mol of water). The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell, in which a pair of inert electrodes, usually made of platinum, immersed in water act as anode and cathode. The electrolysis starts with the application of an external voltage between the electrodes. Without an electrolyte such as sodium chloride or sulfuric acid (commonly about 0.1 M), the process will not occur except at extremely high voltages.
Bubbles from the gases will be seen near both electrodes. The following half reactions describe the process mentioned above:
Anode (oxidation): 2 H2O → O2 + 4 H+ + 4 e−
Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH−
Overall reaction: 2 H2O → 2 H2 + O2
Although strong acids may be used in the apparatus, the reaction will not net consume the acid. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively low voltages (~2 V depending on the pH).
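The 474.4 kJ figure and the ~1.23 V thermodynamic minimum are connected by ΔG° = −nFE°. A minimal numerical sketch, assuming the overall reaction 2 H2O → 2 H2 + O2 with n = 4 electrons transferred:

```python
# Relate the Gibbs free energy of water splitting to the minimum cell voltage.
# Delta G = -n * F * E  =>  E = -Delta G / (n * F)
F = 96485.0        # C/mol, Faraday constant
delta_G = 474.4e3  # J, for 2 H2O -> 2 H2 + O2 at standard conditions
n = 4              # electrons transferred in the overall reaction

E = -delta_G / (n * F)
print(f"E = {E:.3f} V")  # about -1.23 V: non-spontaneous, needs >= 1.23 V applied
```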
Electrolysis of aqueous solutions
Electrolysis in an aqueous solution is a process similar to the electrolysis of water described above. However, it is considered a complex process, because the species present in solution must be analyzed in half-reactions to determine which are reduced and which are oxidized.
Electrolysis of a solution of sodium chloride
The presence of water in a solution of sodium chloride must be examined with respect to its reduction and oxidation at both electrodes. Usually, water is electrolysed (as mentioned above in the electrolysis of water), yielding gaseous oxygen at the anode and gaseous hydrogen at the cathode. On the other hand, sodium chloride in water dissociates into Na+ and Cl− ions. The cation, which is the positive ion, is attracted to the cathode (−), where the sodium ion could be reduced. The chloride anion is attracted to the anode (+), where it could be oxidized to chlorine gas.
The following half reactions should be considered in the process mentioned:
(1) Cathode: Na+ + e− → Na,  E°red = −2.71 V
(2) Anode: 2 Cl− → Cl2 + 2 e−,  E°red = +1.36 V
(3) Cathode: 2 H2O + 2 e− → H2 + 2 OH−,  E°red = −0.83 V
(4) Anode: 2 H2O → O2 + 4 H+ + 4 e−,  E°red = +1.23 V
Reaction 1 is discarded, as it has the most negative standard reduction potential and is thus the least thermodynamically favorable; hence water, not Na+, is reduced at the cathode (reaction 3).
When comparing the reduction potentials of reactions 2 and 4, oxidation of water (E°red = +1.23 V) should be thermodynamically favored over oxidation of the chloride ion (E°red = +1.36 V), so oxygen gas would be expected at the anode. In practice, however, chlorine gas is produced.
Although the thermodynamic analysis is correct, there is another effect, known as the overvoltage effect: additional voltage is sometimes required beyond the voltage predicted by the E°cell, due to kinetic rather than thermodynamic considerations. The activation energy for oxidation of the chloride ion is very low, making it favorable in kinetic terms, whereas oxygen evolution at the anode carries a high overpotential; hence chlorine, not oxygen, is evolved. In other words, although the applied voltage may be thermodynamically sufficient to drive electrolysis, the rate can be so slow that, to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage).
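The selection logic in this analysis can be sketched in code. The candidate half-reactions and potentials are the four listed above; the overpotential value is a hypothetical, purely illustrative number:

```python
# Thermodynamic screening of candidate half-reactions for aqueous NaCl electrolysis.
# E_red values (V) are the standard reduction potentials quoted above.
cathode_candidates = {
    "Na+ + e- -> Na": -2.71,
    "2 H2O + 2 e- -> H2 + 2 OH-": -0.83,
}
anode_candidates = {
    "2 Cl- -> Cl2 + 2 e-": +1.36,
    "2 H2O -> O2 + 4 H+ + 4 e-": +1.23,
}

# Cathode: the half-reaction with the highest (least negative) E_red is reduced.
cathode = max(cathode_candidates, key=cathode_candidates.get)

# Anode, thermodynamics only: the lowest E_red is easiest to oxidize. Adding a
# hypothetical overpotential penalty for O2 evolution shows the kinetic reversal.
eta_O2 = 0.5  # V, illustrative overpotential for oxygen evolution
effective = {r: E + (eta_O2 if "O2" in r else 0.0)
             for r, E in anode_candidates.items()}
anode = min(effective, key=effective.get)

print("cathode:", cathode)  # water reduction -> H2
print("anode:  ", anode)    # chloride oxidation -> Cl2 once overpotential is included
```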
The overall reaction for the process according to the analysis is the following:
Anode (oxidation): 2 Cl− → Cl2 + 2 e−
Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH−
Overall reaction: 2 H2O + 2 Cl− → H2 + Cl2 + 2 OH−
As the overall reaction indicates, the concentration of chloride ions decreases while that of OH− ions increases. The reaction also shows the production of gaseous hydrogen and chlorine and of aqueous sodium hydroxide.
Quantitative electrolysis and Faraday's laws
Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited with coining the terms electrolyte and electrolysis, among many others, while studying the quantitative analysis of electrochemical reactions. He was also an advocate of the law of conservation of energy.
First law
Faraday concluded after several experiments on electric current in a non-spontaneous process that the mass of the products yielded on the electrodes was proportional to the value of current supplied to the cell, the length of time the current existed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell.
Below is a simplified equation of Faraday's first law (a worked numerical example follows the definitions below):
m = (Q M) / (n F)
where
m is the mass of the substance produced at the electrode (in grams),
Q is the total electric charge that passed through the solution (in coulombs),
n is the valence number of the substance as an ion in solution (electrons per ion),
M is the molar mass of the substance (in grams per mole),
F is Faraday's constant (96485 coulombs per mole).
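A worked sketch of the first law in Python; the plating scenario (2 A for one hour depositing copper) and the helper name are illustrative assumptions, not taken from the text:

```python
# Faraday's first law: m = (Q * M) / (n * F), with Q = I * t.
F = 96485.0  # C/mol, Faraday's constant

def deposited_mass(current_A, time_s, molar_mass_g_mol, valence):
    """Mass (g) deposited at an electrode by a steady current."""
    Q = current_A * time_s  # total charge in coulombs
    return (Q * molar_mass_g_mol) / (valence * F)

# Example: 2.0 A for 1 hour depositing copper (Cu2+, M = 63.55 g/mol, n = 2).
m = deposited_mass(2.0, 3600.0, 63.55, 2)
print(f"{m:.3f} g of Cu")  # about 2.37 g
```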
Second law
Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis stating "the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them." In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights.
An important aspect of the second law of electrolysis is electroplating, which together with the first law of electrolysis has a significant number of applications in industry, as when used to protectively coat metals to avoid corrosion.
Applications
There are various important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. In addition to established electrochemical technologies (like deep cycle lead acid batteries) there is also a wide range of new emerging technologies such as fuel cells, large format lithium-ion batteries, electrochemical reactors and super-capacitors that are becoming increasingly commercial. Electrochemical or coulometric titrations were introduced for quantitative analysis of minute quantities in 1938 by the Hungarian chemists László Szebellédy and Zoltan Somogyi. Electrochemistry also has important applications in the food industry, like the assessment of food/package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, or the determination of free acidity in olive oil.
| Physical sciences | Chemistry: General | null |
9604 | https://en.wikipedia.org/wiki/Many-worlds%20interpretation | Many-worlds interpretation | The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in different "worlds". The evolution of reality as a whole in MWI is rigidly deterministic and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s.
In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics.
The many-worlds interpretation implies that there are many parallel, non-interacting worlds. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the EPR paradox and Schrödinger's cat, since every possible outcome of a quantum event exists in its own world.
Overview of the interpretation
The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems. This stands in contrast to the Copenhagen interpretation, in which a measurement is a "primitive" concept, not describable by unitary quantum mechanics; using the Copenhagen interpretation the universe is divided into a quantum and a classical domain, and the collapse postulate is central. In MWI there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds, each is an internally consistent and actualized alternative history or timeline.
The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected." Zurek emphasizes that his work does not depend on a particular interpretation.
The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse. MWI treats the other histories or worlds as real, since it regards the universal wave function as the "basic physical entity" or "the fundamental entity, obeying at all times a deterministic wave equation". The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real.
Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation. Everett argued that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world." Deutsch dismissed the idea that many-worlds is an "interpretation", saying that to call it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records."
Formulation
In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse.
Relative state
Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as the sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation one of the pair (or triple...) is the measured, object or observed system, and one other member is the measuring apparatus (which may include an observer) having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.
In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat.
In the example of a measurement of a continuous variable (e.g., position q), the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative state becomes a Dirac delta function centered on a particular value of q, and the corresponding observer relative state represents an observer having recorded that value of q. The states of the pairs of relative states are, post measurement, correlated with each other.
In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its relativistic quantum field theory analog, holds all the time, everywhere. An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches.
Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency.
Renamed many-worlds
Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term "world" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent within themselves.
Since many observation-like events have happened and are constantly happening, Everett's model implies that there are an enormous and growing number of simultaneously existing states or "worlds".
Properties
MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all "quantum paradoxes" such as the EPR paradox and von Neumann's "boundary problem", this provides a clearer and easier approach to their resolution.
Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology, he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology.
MWI is a realist, deterministic and local theory. It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory.
MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.
MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this.
Alternative to wavefunction collapse
As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves.
Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable.
Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states.
Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory.
Testability
In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability.
Probability and the Born rule
Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule.
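For reference, the Born rule at issue is the standard textbook postulate; the formulation below is supplied for context in standard notation, not quoted from this article:

```latex
% Born rule: for a normalized state |psi> expanded in the eigenbasis {|a_i>}
% of the measured observable, |psi> = \sum_i c_i |a_i>, the probability of
% obtaining outcome a_i is the squared amplitude:
P(a_i) = \left| \langle a_i \mid \psi \rangle \right|^2 = |c_i|^2
```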
Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have. His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful.
Frequentism
DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that in the limit of infinitely many measurements, no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect.
Decision theory
A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed.
Symmetries and invariance
In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave.
In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory.
Branch counting
In 2021, Simon Saunders produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule.
The preferred basis problem
As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem.
The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics.
This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics.
History
MWI originated in Everett's Princeton University PhD thesis "The Theory of the Universal Wave Function", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after publication in 1957.
Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable.
Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. Zeh also came to the same conclusions as Everett before reading his work, then built a new theory of quantum decoherence based on these ideas.
According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he "never wavered in his belief over his many-worlds theory". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around "real" to indicate a meaning within scientific practice.
Reception
MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory.
Support
One of MWI's strongest longtime advocates is David Deutsch. According to him, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". He also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin.
Equivocal
Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy.
Rejection
Some scientists consider some aspects of MWI to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them.
Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".
Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field".
Philosopher of science Robert P. Crease says that MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'".
Theoretical physicist Gerard 't Hooft also dismisses the idea: "I do not believe that we have to live with the many-worlds interpretation. Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real."
Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated.
Polls
A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".
Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."
In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory", Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause... when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"
A 2005 poll of fewer than 40 students and researchers, taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo, found "Many Worlds (and decoherence)" to be the least favored.
A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.
Speculative implications
DeWitt has said that Everett, Wheeler, and Graham "do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down." Tegmark affirmed that absurd or highly unlikely events are rare but inevitable under MWI: "Things inconsistent with the laws of physics will never happen—everything else will... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely." David Deutsch speculates in his book The Beginning of Infinity that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.
According to Ladyman and Ross, many seemingly physically plausible but unrealized possibilities, such as those discussed in other scientific fields, generally have no counterparts in other branches, because they are in fact incompatible with the universal wave function. According to Carroll, human decision-making, contrary to common misconceptions, is best thought of as a classical process, not a quantum one, because it works on the level of neurochemistry rather than fundamental particles. Human decisions do not cause the world to branch into equally realized outcomes; even for subjectively difficult decisions, the "weight" of realized outcomes is almost entirely concentrated in a single branch.
Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics that can purportedly distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. Most experts believe the experiment would not work in the real world, because the world with the surviving experimenter has a lower "measure" than the world before the experiment, making it less likely that the experimenter will experience their survival.
| Physical sciences | Quantum mechanics | Physics |
9613 | https://en.wikipedia.org/wiki/Euler%27s%20formula | Euler's formula | Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has
e^(ix) = cos x + i sin x,
where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case.
Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics".
When x = π, Euler's formula may be rewritten as e^(iπ) + 1 = 0 or e^(iπ) = −1, which is known as Euler's identity.
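Both statements are easy to check numerically; a minimal sketch using Python's standard cmath module (the test value x is arbitrary):

```python
import cmath
import math

x = 0.75  # arbitrary real test value
# Euler's formula: e^(ix) == cos x + i sin x
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12

# Euler's identity: e^(i*pi) + 1 == 0, up to floating-point rounding.
print(cmath.exp(1j * math.pi) + 1)  # ~1.22e-16j, i.e. numerically zero
```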
History
In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of √−1) as:
ix = ln(cos x + i sin x).
Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2πi.
Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum.
Johann Bernoulli had found that
1/(1 + x²) = (1/2) (1/(1 − ix) + 1/(1 + ix)).
And since
∫ dx/(1 + ax) = (1/a) ln(1 + ax) + C,
the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral.
Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values.
The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel.
Definitions of complex exponentiation
The exponential function e^x for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of e^z for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of e^x to the complex plane.
Differential equation definition
The exponential function f(z) = e^z is the unique differentiable function of a complex variable for which the derivative equals the function, df/dz = f, and for which f(0) = 1.
Power series definition
For complex z,
e^z = 1 + z + z²/2! + z³/3! + ⋯ (the sum of zⁿ/n! over all n ≥ 0).
Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines e^z for all complex z.
Limit definition
For complex z,
e^z = lim (1 + z/n)ⁿ as n → ∞.
Here, n is restricted to positive integers, so there is no question about what the power with exponent n means.
Proofs
Various proofs of the formula are possible.
Using differentiation
This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted).
Consider the function
f(θ) = (cos θ + i sin θ) · e^(−iθ)
for real θ. Differentiating gives, by the product rule,
f′(θ) = (−sin θ + i cos θ) · e^(−iθ) − i (cos θ + i sin θ) · e^(−iθ) = 0.
Thus, f is a constant. Since f(0) = 1, then f(θ) = 1 for all real θ, and thus
e^(iθ) = cos θ + i sin θ.
Using power series
Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i: i² = −1, i³ = −i, i⁴ = 1, and so on, repeating with period 4.
Using now the power-series definition from above, we see that for real values of x
e^(ix) = 1 + ix − x²/2! − ix³/3! + x⁴/4! + ix⁵/5! − ⋯
= (1 − x²/2! + x⁴/4! − ⋯) + i (x − x³/3! + x⁵/5! − ⋯)
= cos x + i sin x,
where in the last step we recognize the two terms as the Maclaurin series for cos x and sin x. The rearrangement of terms is justified because each series is absolutely convergent.
Using polar coordinates
Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x,
e^(ix) = r (cos θ + i sin θ).
No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of e^(ix) is i e^(ix). Therefore, differentiating both sides gives
i e^(ix) = (dr/dx)(cos θ + i sin θ) + r (−sin θ + i cos θ) dθ/dx.
Substituting r (cos θ + i sin θ) for e^(ix) and equating real and imaginary parts in this formula gives dr/dx = 0 and dθ/dx = 1. Thus, r is a constant, and θ is x + C for some constant C. The initial values r(0) = 1 and θ(0) = 0 come from e^(i·0) = 1, giving r = 1 and θ = x. This proves the formula
e^(ix) = cos x + i sin x.
Applications
Applications in complex number theory
Interpretation of the formula
This formula can be interpreted as saying that the function e^(iφ) is a unit complex number, i.e., it traces out the unit circle in the complex plane as φ ranges through the real numbers. Here φ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians.
The original proof is based on the Taylor series expansions of the exponential function e^z (where z is a complex number) and of sin x and cos x for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers x.
A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy, and its complex conjugate z̄ = x − iy, can be written as
z = x + iy = r (cos φ + i sin φ) = r e^(iφ)
z̄ = x − iy = r (cos φ − i sin φ) = r e^(−iφ)
where
x = Re z is the real part,
y = Im z is the imaginary part,
r = |z| = √(x² + y²) is the magnitude of z, and
φ = arg z = atan2(y, x) is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. Many texts write φ = tan⁻¹(y/x) instead of φ = atan2(y, x), but the first equation needs adjustment when x ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors (x, y) and (−x, −y) differ by π radians, but have the identical value of tan φ = y/x; the conversion is illustrated in the sketch below.
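This conversion maps directly onto Python's standard cmath module, whose polar function uses the two-argument atan2 just described. A small sketch, with an arbitrary sample point chosen with x < 0 to show why atan2 matters:

```python
import cmath
import math

z = complex(-3.0, 4.0)       # x = -3, y = 4: a case where atan2 matters
r, phi = cmath.polar(z)      # r = |z|, phi = atan2(y, x) in (-pi, pi]

assert math.isclose(r, abs(z))
assert math.isclose(phi, math.atan2(z.imag, z.real))

# Naive arctan(y/x) lands in the wrong quadrant for x < 0:
print(math.atan(z.imag / z.real))  # -0.927... (wrong quadrant)
print(phi)                         #  2.214... (correct)

# Reconstruct z from its polar form r * e^(i*phi):
assert cmath.isclose(cmath.rect(r, phi), z)
```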
Use of the formula to define the logarithm of complex numbers
Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation):
a = e^(ln a),
and that
e^a · e^b = e^(a+b),
both valid for any complex numbers a and b. Therefore, one can write:
z = |z| e^(iφ) = e^(ln |z|) · e^(iφ) = e^(ln |z| + iφ)
for any z ≠ 0. Taking the logarithm of both sides shows that
ln z = ln |z| + iφ,
and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued.
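A numerical sketch of this multi-valuedness using Python's cmath, which returns the principal value (argument in (−π, π]); every branch exponentiates back to z:

```python
import cmath
import math

z = complex(0.0, 2.0)        # an arbitrary nonzero complex number
principal = cmath.log(z)     # ln|z| + i*arg(z), arg in (-pi, pi]

for k in range(-2, 3):
    w = principal + 2j * math.pi * k       # every branch of log z
    assert cmath.isclose(cmath.exp(w), z)  # each exponentiates back to z
print(principal)
```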
Finally, the other exponential law
(e^a)^k = e^(ak),
which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula.
Relationship to trigonometry
Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function:
cos x = (e^(ix) + e^(−ix)) / 2
sin x = (e^(ix) − e^(−ix)) / (2i)
The two equations above can be derived by adding or subtracting Euler's formulas:
e^(ix) = cos x + i sin x
e^(−ix) = cos x − i sin x
and solving for either cosine or sine.
These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have:
cos(iy) = (e^(−y) + e^(y)) / 2 = cosh y
sin(iy) = (e^(−y) − e^(y)) / (2i) = i sinh y
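A quick numerical check of these exponential definitions at a complex argument, in Python (the test value y and the helper names are illustrative):

```python
import cmath
import math

def cos_exp(z):  # cos z = (e^(iz) + e^(-iz)) / 2
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

def sin_exp(z):  # sin z = (e^(iz) - e^(-iz)) / (2i)
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

y = 1.3
assert cmath.isclose(cos_exp(1j * y), complex(math.cosh(y), 0))  # cos(iy) = cosh y
assert cmath.isclose(sin_exp(1j * y), 1j * math.sinh(y))         # sin(iy) = i sinh y
```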
In addition
Complex exponentials can simplify trigonometry, because they are mathematically easier to manipulate than their sine and cosine components. One technique is simply to convert sines and cosines into equivalent expressions in terms of exponentials, sometimes called complex sinusoids. After the manipulations, the simplified result is still real-valued. For example:
cos x · cos y = (1/4) (e^(ix) + e^(−ix)) (e^(iy) + e^(−iy)) = (1/2) (cos(x − y) + cos(x + y)).
Another technique is to represent sines and cosines in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example:
cos nx = Re(e^(inx)) = Re(e^(i(n−1)x) · e^(ix))
= Re(e^(i(n−1)x) · (e^(ix) + e^(−ix) − e^(−ix)))
= Re(e^(i(n−1)x) · 2 cos x − e^(i(n−2)x))
= cos((n−1)x) · 2 cos x − cos((n−2)x).
This formula is used for recursive generation of cos nx for integer values of n and arbitrary x (in radians).
Considering cos x as a parameter in the equation above yields the recursive formula for the Chebyshev polynomials of the first kind.
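A sketch of the recursion just described, in Python; the function name and test values are illustrative:

```python
import math

def cos_nx(n, x):
    """cos(n*x) via the recursion cos(kx) = 2*cos(x)*cos((k-1)x) - cos((k-2)x)."""
    prev, cur = 1.0, math.cos(x)  # cos(0*x), cos(1*x)
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, 2.0 * math.cos(x) * cur - prev
    return cur

x = 0.4
assert math.isclose(cos_nx(7, x), math.cos(7 * x))
# Writing t = cos x turns the same recursion into the Chebyshev polynomials
# of the first kind: T0(t) = 1, T1(t) = t, Tk(t) = 2*t*T(k-1)(t) - T(k-2)(t).
```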
Topological interpretation
In the language of topology, Euler's formula states that the imaginary exponential function t ↦ e^(it) is a (surjective) morphism of topological groups from the real line ℝ to the unit circle S¹. In fact, this exhibits ℝ as a covering space of S¹. Similarly, Euler's identity says that the kernel of this map is τℤ, where τ = 2π. These observations may be combined and summarized in a commutative diagram.
Other applications
In differential equations, the function is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation.
In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor.
In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies:
exp(xr) = cos x + r sin x,
and the element e^(xr) is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space.
Other special cases
The special cases that evaluate to units illustrate rotation around the complex unit circle:
e^(i·0) = 1, e^(iπ/2) = i, e^(iπ) = −1, e^(i·3π/2) = −i.
The special case at x = τ (where τ = 2π, one turn) yields e^(iτ) = 1 + 0. This is also argued to link five fundamental constants (0, 1, e, i and τ) with three basic arithmetic operations, but, unlike Euler's identity, without rearranging the addends from the general case:
e^(iτ) = cos τ + i sin τ = 1 + 0.
An interpretation of the simplified form e^(iτ) = 1 is that rotating by a full turn is an identity function.
| Mathematics | Calculus and analysis | null |
9619 | https://en.wikipedia.org/wiki/Extremophile | Extremophile | An extremophile is an organism that is able to live (or in some cases thrive) in extreme environments, i.e., environments with conditions approaching or stretching the limits of what known life can adapt to, such as extreme temperature, pressure, radiation, salinity, or pH level.
Since the definition of an extreme environment is relative to an arbitrarily defined standard, often an anthropocentric one, these organisms can be considered ecologically dominant in the evolutionary history of the planet. Dating back to more than 40 million years ago, extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. The study of extremophiles has expanded human knowledge of the limits of life, and informs speculation about extraterrestrial life. Extremophiles are also of interest because of their potential for bioremediation of environments made hazardous to humans due to pollution or contamination.
Characteristics
In the 1980s and 1990s, biologists found that microbial life has great flexibility for surviving in extreme environments—niches that are acidic, extraordinarily hot, or with irregular air pressure for example—that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far beneath the ocean's surface.
According to astrophysicist Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Some bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica, and in the Marianas Trench, the deepest place in Earth's oceans. Expeditions of the International Ocean Discovery Program found microorganisms in sediment deep below the seafloor in the Nankai Trough subduction zone. Some microorganisms have been found thriving inside rocks far below the sea floor, beneath deep ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are." A key to extremophile adaptation is their amino acid composition, affecting their protein folding ability under particular conditions. Studying extreme environments on Earth can help researchers understand the limits of habitability on other worlds.
Tom Gheysens from Ghent University in Belgium and some of his colleagues have presented research findings that show spores from a species of Bacillus bacteria survived and were still viable after being heated to extreme temperatures.
Classifications
There are many classes of extremophiles that range all around the globe; each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are classified as polyextremophiles. For example, organisms living inside hot rocks deep under Earth's surface are thermophilic and piezophilic such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels.
Terms
Acidophile An organism with optimal growth at pH levels of 3.0 or below.
Alkaliphile An organism with optimal growth at pH levels of 9.0 or above.
Capnophile An organism with optimal growth conditions in high concentrations of carbon dioxide. An example would be Mannheimia succiniciproducens, a bacterium that inhabits a ruminant animal's digestive system.
Halophile An organism with optimal growth at a concentration of dissolved salts of 50 g/L (= 5% m/v) or above.
Hyperpiezophile An organism with optimal growth at hydrostatic pressures above 50 MPa (= 493 atm = 7,252 psi).
Hyperthermophile An organism with optimal growth at temperatures above 80 °C (176 °F).
Metallotolerant Capable of tolerating high levels of dissolved heavy metals in solution, such as copper, cadmium, arsenic, and zinc. Examples include Ferroplasma sp., Cupriavidus metallidurans and GFAJ-1.
Oligotroph An organism with optimal growth in nutritionally limited environments.
Osmophile An organism with optimal growth in environments with a high sugar concentration.
Piezophile An organism with optimal growth in hydrostatic pressures above 10 MPa (= 99 atm = 1,450 psi). Also referred to as barophile.
Polyextremophile A polyextremophile (faux Ancient Latin/Greek for 'affection for many extremes') is an organism that qualifies as an extremophile under more than one category.
Psychrophile/Cryophile An organism with optimal growth at temperatures of 15 °C (59 °F) or lower.
Radioresistant Organisms resistant to high levels of ionizing radiation, most commonly ultraviolet radiation. This category also includes organisms capable of resisting nuclear radiation.
Sulphophile An organism with optimal growth conditions in high concentrations of sulfur. An example would be Sulfurovum epsilonproteobacteria, sulfur-oxidizing bacteria that inhabit deep-water sulfur vents.
Thermophile An organism with optimal growth at temperatures above 45 °C (113 °F).
Xerophile An organism with optimal growth at water activity below 0.8.
In astrobiology
Astrobiology is the multidisciplinary field that investigates how life arises, distributes, and evolves in the universe. Astrobiology makes use of physics, chemistry, astronomy, solar physics, biology, molecular biology, ecology, planetary science, geography, and geology to investigate the possibility of life on other worlds and recognize biospheres that might be different from that on Earth. Astrobiologists are interested in extremophiles, as they allow them to map what is known about the limits of life on Earth onto potential extraterrestrial environments. For example, analogous deserts of Antarctica are exposed to harmful UV radiation, low temperature, high salt concentration and low mineral concentration. These conditions are similar to those on Mars. Therefore, finding viable microbes in the subsurface of Antarctica suggests that there may be microbes surviving in endolithic communities and living under the Martian surface. Research indicates it is unlikely that Martian microbes exist on the surface or at shallow depths, but they may be found at subsurface depths of around 100 meters.
Recent research carried out on extremophiles in Japan involved a variety of bacteria including Escherichia coli and Paracoccus denitrificans being subject to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g (i.e. 403,627 times the gravity experienced on Earth). P. denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications on the feasibility of panspermia.
On 26 April 2012, scientists reported that lichen survived a 34-day simulation of some Martian conditions and showed remarkable adaptation of its photosynthetic activity in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR).
On 29 April 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence".
On 19 May 2014, scientists announced that some microbes, like Tersicoccus phoenicis, may be resistant to methods usually used in spacecraft assembly clean rooms, giving rise to speculation that such microbes could have withstood space travel and may be present on the Curiosity rover, now on Mars.
On 20 August 2014, scientists confirmed the existence of microorganisms living half a mile below the ice of Antarctica.
In September 2015, scientists from the CNR-National Research Council of Italy reported that S. solfataricus survived Martian-like UV radiation at a wavelength considered lethal to most bacteria. This discovery is significant because it indicates that not only bacterial spores but also growing cells can resist strong UV radiation.
In June 2016, scientists from Brigham Young University reported that endospores of Bacillus subtilis were able to survive high-speed impacts of up to 299 ± 28 m/s, extreme shock, and extreme deceleration. They pointed out that this feature might allow endospores to survive and be transferred between planets by traveling within meteorites or by experiencing atmospheric disruption. Moreover, they suggested that the landing of spacecraft may also result in interplanetary spore transfer, given that spores can survive a high-velocity impact while being ejected from the spacecraft onto the planet's surface. This was the first study to report that bacteria can survive impacts at such velocities; however, the lethal impact speed is unknown, and further experiments exposing bacterial endospores to higher-velocity impacts are needed.
In August 2020, scientists reported that air-feeding bacteria discovered in 2017 in Antarctica are likely not limited to that continent, after finding the two genes previously linked to their "atmospheric chemosynthesis" in soil at two other, similar cold-desert sites. This provides further information on this carbon sink and strengthens the extremophile evidence supporting the potential existence of microbial life on alien planets.
The same month, scientists reported that bacteria from Earth, particularly Deinococcus radiodurans, were found to survive for three years in outer space, based on studies on the International Space Station. These findings support the notion of panspermia.
Bioremediation
Extremophiles can also be useful players in the bioremediation of contaminated sites as some species are capable of biodegradation under conditions too extreme for classic bioremediation candidate species. Anthropogenic activity causes the release of pollutants that may potentially settle in extreme environments as is the case with tailings and sediment released from deep-sea mining activity. While most bacteria would be crushed by the pressure in these environments, piezophiles can tolerate these depths and can metabolize pollutants of concern if they possess bioremediation potential.
Hydrocarbons
There are multiple potential destinations for hydrocarbons after an oil spill has settled, and currents routinely deposit them in extreme environments. Methane bubbles resulting from the Deepwater Horizon oil spill were found 1.1 kilometers below the water's surface and at concentrations as high as 183 μmol per kilogram. The combination of low temperatures and high pressures in this environment results in low microbial activity. However, bacteria that are present, including species of Pseudomonas, Aeromonas and Vibrio, were found to be capable of bioremediation, albeit at a tenth of the speed they would achieve at sea-level pressure. Polycyclic aromatic hydrocarbons increase in solubility and bioavailability with increasing temperature. Thermophilic Thermus and Bacillus species have demonstrated higher gene expression for the alkane mono-oxygenase alkB at temperatures exceeding 60 °C. The expression of this gene is a crucial precursor to the bioremediation process. Fungi that have been genetically modified with cold-adapted enzymes to tolerate differing pH levels and temperatures have been shown to be effective at remediating hydrocarbon contamination in freezing conditions in the Antarctic.
Metals
Acidithiobacillus ferrooxidans has been shown to be effective in remediating mercury in acidic soil due to its merA gene, which makes it mercury resistant. Industrial effluents contain high levels of metals that can be detrimental to both human and ecosystem health. In extremely hot environments, the extremophile Geobacillus thermodenitrificans has been shown to effectively manage the concentration of these metals within twelve hours of introduction. Some acidophilic microorganisms are effective at metal remediation in acidic environments due to proteins found in their periplasm, not present in any mesophilic organisms, which allow them to protect themselves from high proton concentrations. Rice paddies are highly oxidative environments that can produce high levels of lead or cadmium. Deinococcus radiodurans is resistant to the harsh conditions of these environments and is therefore a candidate species for limiting the extent of contamination by these metals.
Some bacteria are also known to use rare-earth elements in their biological processes. For example, Methylacidiphilum fumariolicum, Methylorubrum extorquens, and Methylobacterium radiotolerans are able to use lanthanides as cofactors to increase their methanol dehydrogenase activity.
Acid mine drainage
Acid mine drainage is a major environmental concern associated with many metal mines because the highly acidic water can mix with groundwater, streams, and lakes, turning the pH of these water sources from near neutral to below 4, approaching the acidity of battery acid or stomach acid. Exposure to the polluted water can greatly affect the health of plants, humans, and animals. A productive method of remediation, however, is to introduce the extremophile Thiobacillus ferrooxidans, which is useful for its bioleaching properties: it helps break down minerals in the waste water created by the mine and, in doing so, begins to neutralize the water's acidity. This is a way to reduce the environmental impact and help remediate the damage caused by acid mine drainage leaks.
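Because the pH scale is logarithmic, a drop from neutral to pH 4 is a larger change than it may appear. A minimal sketch of the arithmetic (the example pH values are illustrative, not measurements):

```python
# Hydrogen-ion concentration follows [H+] = 10**(-pH), so each one-unit drop
# in pH is a tenfold increase in acidity. Example values only.
def hydrogen_ion_molarity(ph: float) -> float:
    """Return [H+] in mol/L for a given pH."""
    return 10.0 ** (-ph)

neutral, drainage = 7.0, 4.0  # assumed example pH values
fold_increase = hydrogen_ion_molarity(drainage) / hydrogen_ion_molarity(neutral)
print(f"pH {neutral} -> pH {drainage}: {fold_increase:.0f}x more acidic")
# pH 7.0 -> pH 4.0: 1000x more acidic
```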
Oil-based, hazardous pollutants in Arctic regions
Psychrophilic microbes metabolize hydrocarbons, which assists in the remediation of hazardous, oil-based pollutants in the Arctic and Antarctic regions. These microbes are used in these regions because of their ability to function at extremely cold temperatures.
Radioactive materials
Any bacterium capable of inhabiting radioactive media can be classified as an extremophile, and radioresistant organisms are therefore critical in the bioremediation of radionuclides. Uranium is particularly challenging to contain when released into an environment and is very harmful to both human and ecosystem health. The NANOBINDERS project is equipping bacteria that can survive in uranium-rich environments with gene sequences that enable proteins to bind uranium in mining effluent, making it more convenient to collect and dispose of. Some examples are Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum.
Radiotrophic fungi, which use radiation as an energy source, have been found inside and around the Chernobyl Nuclear Power Plant.
Radioresistance has also been observed in certain species of macroscopic lifeforms. The lethal dose required to kill 50% of a tortoise population is 40,000 roentgens, compared with only 800 roentgens needed to kill 50% of a human population. In experiments exposing lepidopteran insects to gamma radiation, significant DNA damage was detected only at doses of 20 Gy and higher, in contrast with human cells, which showed similar damage at only 2 Gy.
Examples and recent findings
New sub-types of extremophiles are identified frequently and the sub-category list for extremophiles is always growing. For example, microbial life lives in the liquid asphalt lake, Pitch Lake. Research indicates that extremophiles inhabit the asphalt lake in populations ranging between 10^6 and 10^7 cells/gram. Likewise, until recently, boron tolerance was unknown, but a strong borophile was discovered in bacteria. With the recent isolation of Bacillus boroniphilus, borophiles came into discussion. Studying these borophiles may help illuminate the mechanisms of both boron toxicity and boron deficiency.
In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms that live deep below the surface and breathe sulfur in order to survive. These organisms are also remarkable for consuming rocks such as pyrite as their regular food source.
Biotechnology
The thermoalkaliphilic catalase, which initiates the breakdown of hydrogen peroxide into oxygen and water, was isolated from an organism, Thermus brockianus, found in Yellowstone National Park by Idaho National Laboratory researchers. The catalase operates over a temperature range from 30 °C to over 94 °C and a pH range from 6 to 10. This catalase is extremely stable compared to other catalases at high temperatures and pH. In a comparative study, the T. brockianus catalase exhibited a half-life of 15 days at 80 °C and pH 10, while a catalase derived from Aspergillus niger had a half-life of 15 seconds under the same conditions. The catalase will have applications for removal of hydrogen peroxide in industrial processes such as pulp and paper bleaching, textile bleaching, food pasteurization, and surface decontamination of food packaging.
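To see what that half-life gap means in practice, here is a minimal sketch assuming simple first-order (exponential) decay of enzyme activity; the half-lives are the figures quoted above, while the one-hour comparison time is an arbitrary illustration:

```python
# Residual enzyme activity under assumed first-order decay:
# A(t) = A0 * 0.5 ** (t / t_half).
def residual_activity(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of initial activity remaining after t_seconds."""
    return 0.5 ** (t_seconds / half_life_seconds)

hour = 3600.0
t_brockianus_half = 15 * 24 * 3600.0  # 15 days, in seconds
a_niger_half = 15.0                   # 15 seconds

print(f"T. brockianus after 1 h: {residual_activity(hour, t_brockianus_half):.4f}")
print(f"A. niger after 1 h:      {residual_activity(hour, a_niger_half):.2e}")
# T. brockianus retains ~99.8% of its activity; A. niger is effectively inactive.
```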
DNA modifying enzymes such as Taq DNA polymerase and some Bacillus enzymes used in clinical diagnostics and starch liquefaction are produced commercially by several biotechnology companies.
DNA transfer
Over 65 prokaryotic species are known to be naturally competent for genetic transformation, the ability to transfer DNA from one cell to another cell followed by integration of the donor DNA into the recipient cell's chromosome. Several extremophiles are able to carry out species-specific DNA transfer, as described below. However, it is not yet clear how common such a capability is among extremophiles.
The bacterium Deinococcus radiodurans is one of the most radioresistant organisms known. This bacterium can also survive cold, dehydration, vacuum and acid and is thus known as a polyextremophile. D. radiodurans is competent to perform genetic transformation. Recipient cells are able to repair DNA damage in donor transforming DNA that had been UV irradiated as efficiently as they repair cellular DNA when the cells themselves are irradiated. The extreme thermophilic bacterium Thermus thermophilus and other related Thermus species are also capable of genetic transformation.
Halobacterium volcanii, an extreme halophilic (saline tolerant) archaeon, is capable of natural genetic transformation. Cytoplasmic bridges are formed between cells that appear to be used for DNA transfer from one cell to another in either direction.
Sulfolobus solfataricus and Sulfolobus acidocaldarius are hyperthermophilic archaea. Exposure of these organisms to the DNA damaging agents UV irradiation, bleomycin or mitomycin C induces species-specific cellular aggregation. UV-induced cellular aggregation of S. acidocaldarius mediates chromosomal marker exchange with high frequency. Recombination rates exceed those of uninduced cultures by up to three orders of magnitude. Frols et al. and Ajon et al. hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to repair damaged DNA by means of homologous recombination. Van Wolferen et al. noted that this DNA exchange process may be crucial under DNA damaging conditions such as high temperatures. It has also been suggested that DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage (and see Transformation (genetics)).
Extracellular membrane vesicles (MVs) might be involved in DNA transfer between different hyperthermophilic archaeal species. It has been shown that both plasmids and viral genomes can be transferred via MVs. Notably, a horizontal plasmid transfer has been documented between hyperthermophilic Thermococcus and Methanocaldococcus species, respectively belonging to the orders Thermococcales and Methanococcales.
| Biology and health sciences | Ecology | null |
9630 | https://en.wikipedia.org/wiki/Ecology | Ecology | Ecology is the natural science of the relationships among living organisms and their environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level phenomena (e.g., cells) to planetary-scale phenomena (e.g., the biosphere). Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explains the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open to broader-scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer-scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity, and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services, and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
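One widely used way to index species diversity is the Shannon index, H' = −Σ p_i ln p_i, where p_i is the proportion of individuals belonging to species i. A minimal sketch (the index itself is standard; the community counts below are hypothetical):

```python
import math

def shannon_index(counts: list[int]) -> float:
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical communities with the same richness but different evenness.
even_community = [25, 25, 25, 25]   # H' = ln(4) ~ 1.386, maximal for 4 species
skewed_community = [85, 5, 5, 5]    # lower H': one species dominates
print(shannon_index(even_community), shannon_index(skewed_community))
```

The comparison illustrates why diversity indices weigh evenness as well as species richness: both communities contain four species, but the skewed one scores lower.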
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dt = bN − dN = (b − d)N = rN

where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = rN(t)[1 − αN(t)]

where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium, where (dN(t)/dt = 0), when the rates of increase and crowding are balanced, (r/α). A common, analogous model fixes the equilibrium, r/α, as K, which is known as the "carrying capacity."
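A minimal numerical sketch of the logistic model, written in the carrying-capacity form dN/dt = rN(1 − N/K) with simple Euler integration; the parameter values are hypothetical choices for illustration:

```python
def logistic_growth(n0: float, r: float, k: float, dt: float, steps: int) -> list[float]:
    """Euler integration of dN/dt = r*N*(1 - N/K), the carrying-capacity form."""
    trajectory = [n0]
    for _ in range(steps):
        n = trajectory[-1]
        trajectory.append(n + r * n * (1 - n / k) * dt)
    return trajectory

# Hypothetical parameters: 10 founders, intrinsic rate r = 0.5/yr, capacity K = 1000.
pop = logistic_growth(n0=10, r=0.5, k=1000, dt=0.1, steps=200)
print(f"after 20 years: {pop[-1]:.0f}")  # the trajectory approaches K = 1000
```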
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
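As an example of the matrix-algebra techniques mentioned above, demographic data are often organized into a projection (Leslie) matrix whose top row holds age-specific fecundities and whose sub-diagonal holds survival probabilities. A minimal sketch with hypothetical rates:

```python
# Project an age-structured population one census forward with a Leslie matrix.
# All fecundity and survival rates here are hypothetical.
leslie = [
    [0.0, 1.5, 1.0],   # fecundity of age classes 0, 1, 2
    [0.6, 0.0, 0.0],   # survival from age 0 to age 1
    [0.0, 0.4, 0.0],   # survival from age 1 to age 2
]
population = [100.0, 50.0, 20.0]  # individuals per age class

def project(matrix: list[list[float]], vec: list[float]) -> list[float]:
    """One time step: matrix-vector product."""
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in matrix]

for year in range(3):
    population = project(leslie, population)
    print(year + 1, [round(n, 1) for n in population])
```

Repeated projection of this kind is what underlies harvest-quota calculations: the long-run growth rate of the population is the dominant eigenvalue of the matrix.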
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
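The extinction-colonization balance described above is captured by the classic Levins model, dp/dt = c·p(1 − p) − e·p, where p is the fraction of patches occupied, c the colonization rate, and e the extinction rate. A minimal sketch with hypothetical rates:

```python
def levins(p0: float, c: float, e: float, dt: float, steps: int) -> float:
    """Euler integration of dp/dt = c*p*(1 - p) - e*p (fraction of occupied patches)."""
    p = p0
    for _ in range(steps):
        p += (c * p * (1 - p) - e * p) * dt
    return p

# Hypothetical rates: colonization c = 0.3, extinction e = 0.1 per unit time.
print(f"occupied fraction at equilibrium: {levins(0.05, 0.3, 0.1, 0.1, 2000):.3f}")
# The analytical equilibrium is 1 - e/c ~ 0.667; the metapopulation persists when c > e.
```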
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathways that move from a basal trophic species to a top consumer is called the food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.
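As a sketch of that idea, a food web can be represented as a directed graph whose edges point from the organism eaten to its consumer; the species and links below are hypothetical simplifications, not a real community dataset:

```python
# A toy food web as a directed graph: edges point from resource to consumer.
food_web = {
    "algae":       ["zooplankton", "snail"],
    "zooplankton": ["minnow"],
    "snail":       ["minnow"],
    "minnow":      ["bass"],
    "bass":        [],  # top consumer
}

def food_chains(web: dict[str, list[str]], start: str) -> list[list[str]]:
    """Enumerate linear feeding pathways (food chains) from a basal species."""
    if not web[start]:
        return [[start]]
    return [[start] + chain for nxt in web[start] for chain in food_chains(web, nxt)]

for chain in food_chains(food_web, "algae"):
    print(" -> ".join(chain))
```

Even this toy graph shows how the linear food chains of textbooks interleave into a web once a resource supports more than one consumer.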
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigations, e.g. into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer, stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time, and mapping them can eventually illustrate a "complete" web of life.
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek τροφή (trophē), meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics and other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature, as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, otherwise captured in the expression (attributed to Aristotle) 'the whole is greater than the sum of the parts'.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group; whereby, it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems, which builds upon but moves beyond the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services that are critically necessary and beneficial to human health (cognitive and physiological) and to economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged with the complexity of the natural world. Ecosystems relate importantly to human ecology because they are the ultimate foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is employed as a science of restoration: repairing disturbed sites through human intervention, managing natural resources, and informing environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed with industrial investment in restoring ecosystems and their processes at abandoned sites after disturbance. Natural resource managers, in forestry for example, employ ecologists to develop, adapt, and implement ecosystem-based methods in the planning, operation, and restoration phases of land use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented its Wetland Ordinance, improving the stability of its wetland environments by applying soil amendments that improve groundwater storage and flow and by trimming or removing vegetation that could harm water quality. Ecological science is used in methods of sustainable harvesting, in disease and fire-outbreak management, in fisheries stock management, in integrating land use with protected areas and communities, and in conservation across complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that sunlight hit the Earth's surface, and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve; in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hν → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated by and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as the allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressure place physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized proteins.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While fire in relation to ecology and plants has long been recognized, Charles Cooper brought attention to forest fires in relation to the ecology of forest fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and feed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, during the early-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas with increased rates of soil decomposition activity that raise methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles.
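As a rough illustration of how the budget figures quoted above can be combined, here is a short back-of-envelope calculation in Python; the numbers are those given in the text, and the one-gigatonne methane release is purely hypothetical:

```python
# Carbon-budget figures quoted in the text above (not an authoritative dataset).
ocean_c_gt = 40_000            # carbon held in the oceans (Gt), for context
land_c_gt = 2_070              # carbon in vegetation and soil (Gt)
fossil_emissions_gt_yr = 6.3   # fossil fuel emissions (Gt carbon per year)
ch4_factor_100yr = 23          # methane vs CO2 at absorbing long-wave
                               # radiation, 100-year time scale (from text)

# Annual fossil emissions as a share of the vegetation-and-soil carbon stock:
share = fossil_emissions_gt_yr / land_c_gt
print(f"Annual fossil emissions are about {share:.2%} of the land carbon stock")

# CO2-equivalent warming effect of a hypothetical 1 Gt methane release,
# using the 100-year factor given in the text:
ch4_release_gt = 1.0
print(f"1 Gt CH4 is roughly equivalent to {ch4_release_gt * ch4_factor_100yr:.0f} Gt CO2")
```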
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts with the modern understanding of ecological theory, where varieties are viewed as the real phenomena of interest, having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature, can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and behaviour, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel (left) and Eugenius Warming (right), two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
| Biology and health sciences | Science and medicine | null |
9632 | https://en.wikipedia.org/wiki/Ecosystem | Ecosystem | An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors, such as climate, the parent material that forms the soil, and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend and of which they may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
Definition
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
"Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.
Origin and development of the term
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope".
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.
Processes
External and internal factors
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, an ecosystem can be quite different if situated in a small depression on the landscape versus on an adjacent steep hillside.
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function.
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.
Primary production
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Energy flow
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
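The production quantities defined in the preceding paragraphs are linked by simple budget identities; the sketch below uses the conventional symbols R_a and R_h for autotrophic and heterotrophic respiration, which are standard notation rather than terms taken from this article:

```latex
% GPP: gross primary production; R_a: autotrophic (plant) respiration;
% R_h: heterotrophic respiration; R_e = R_a + R_h: ecosystem respiration.
\begin{align*}
  \mathrm{NPP} &= \mathrm{GPP} - R_a \\
  \mathrm{NEP} &= \mathrm{GPP} - R_e = \mathrm{NPP} - R_h
\end{align*}
% In the absence of disturbance, NEP (net ecosystem production) equals
% the net carbon accumulation in the ecosystem, as stated above.
```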
Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains; these webs display a number of common, non-random properties in the topology of their networks.
Decomposition
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.
The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.
Decomposition rates
Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.
Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.
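One common way such temperature and moisture controls are formalized is a Q10-style rate model; the sketch below is a hypothetical illustration, not a model drawn from this article, and all parameter values are made up:

```python
# A minimal sketch of temperature and moisture controls on decomposition.
# The Q10 formulation and the hump-shaped moisture modifier are common
# modelling conventions; parameters here are purely illustrative.

def decomposition_rate(temp_c: float, moisture: float,
                       base_rate: float = 1.0, q10: float = 2.0) -> float:
    """Relative decomposition rate.

    temp_c   -- soil temperature in degrees C
    moisture -- relative soil moisture in [0, 1] (0 = bone dry, 1 = saturated)
    """
    # Q10 temperature response: rate multiplies by q10 per 10 deg C rise,
    # matching "the higher the temperature, the faster the decomposition".
    temp_factor = q10 ** ((temp_c - 10.0) / 10.0)
    # Hump-shaped moisture response: low in very dry soils, low in
    # saturated (oxygen-poor) soils, highest at intermediate moisture.
    moisture_factor = 4.0 * moisture * (1.0 - moisture)
    return base_rate * temp_factor * moisture_factor

# Example: warm, moderately moist soil vs cold, waterlogged soil.
print(decomposition_rate(25.0, 0.5))    # relatively fast
print(decomposition_rate(5.0, 0.95))    # slow: cold and oxygen-poor
```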
Dynamics and resilience
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.
Nutrient cycling
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.
Macronutrients, which are required by all plants in large quantities, include the primary nutrients (most limiting, as they are used in the largest amounts): nitrogen, phosphorus, and potassium. Secondary major nutrients (less often limiting) include calcium, magnesium, and sulfur. Micronutrients, required by all plants in small quantities, include boron, chloride, copper, iron, manganese, molybdenum, and zinc. Finally, there are also beneficial nutrients that may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, and vanadium.
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.
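The soil nitrogen transformations named in this paragraph can be summarized as one compact pathway; the following sketch simply restates them in conventional chemical notation:

```latex
% Mineralization, nitrification and denitrification as a single pathway:
\[
  \text{organic N}
    \xrightarrow{\text{mineralization}} \mathrm{NH_4^+}
    \xrightarrow{\text{nitrification}} \mathrm{NO_2^-}
    \longrightarrow \mathrm{NO_3^-}
    \xrightarrow[\text{(N-rich, O}_2\text{-poor)}]{\text{denitrification}} \mathrm{N_2}
\]
% Nitric oxide (NO) and nitrous oxide (N2O) arise as by-products of
% nitrification, as noted in the text.
```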
Mycorrhizal fungi, which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.
Function and biodiversity
Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case, for example, for exotic species.
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.
An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.
Study approaches
Ecosystem ecology
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.
Classifications
Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.
Human interactions with ecosystems
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.
Ecosystem goods and services
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.
Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.
Degradation and decline
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
Management
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).
Restoration and sustainable development
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.
| Biology and health sciences | Biology | null |
9633 | https://en.wikipedia.org/wiki/E%20%28mathematical%20constant%29 | E (mathematical constant) | The number e is a mathematical constant approximately equal to 2.71828 that is the base of the natural logarithm and exponential function. It is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted γ. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest.
The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity, e^(iπ) + 1 = 0, and play important and recurring roles across mathematics. Like the constant π, e is irrational, meaning that it cannot be represented as a ratio of integers, and moreover it is transcendental, meaning that it is not a root of any non-zero polynomial with rational coefficients. To 30 decimal places, the value of e is:

2.718281828459045235360287471352
Definitions
The number e is the limit

$$e = \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n},$$

an expression that arises in the computation of compound interest.
It is the sum of the infinite series

$$e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \cdots.$$
It is the unique positive number a such that the graph of the function y = a^x has a slope of 1 at x = 0.
One has e = exp(1), where exp is the (natural) exponential function, the unique function that equals its own derivative and satisfies the equation exp(0) = 1. Since the exponential function is commonly denoted as x ↦ e^x, one has also e = e^1.
The logarithm of base b can be defined as the inverse function of the function x ↦ b^x. Since b = b^1, one has log_b b = 1. The equation e = e^1 implies therefore that e is the base of the natural logarithm.
The number e can also be characterized in terms of an integral:

$$\int_1^e \frac{1}{x}\,dx = 1.$$
For other characterizations, see below.
History
The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest.
The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest.
In his solution, the constant e occurs as the limit

$$\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n},$$

where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 for monthly compounding).
The first symbol used for this constant was the letter b, by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691.
Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.
Euler proved that e is the sum of the infinite series

$$e = \sum_{n=0}^{\infty} \frac{1}{n!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots,$$

where n! is the factorial of n. The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem.
Applications
Compound interest
Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest: an account starts with $1.00 and pays 100 percent interest per year; if the interest is credited once, at the end of the year, the value of the account will be $2.00, but what happens if the interest is computed and credited more frequently during the year?
If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25^4 = $2.44140625, and compounding monthly yields $1.00 × (1 + 1/12)^12 = $2.613035.... If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n)^n.
Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly (n = 52) yields $2.692596..., while compounding daily (n = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with continuous compounding, the account value will reach $2.718281828.... More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield e^(Rt) dollars with continuous compounding. Here, R is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, R = 5/100 = 0.05.
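A minimal numerical sketch of Bernoulli's compounding experiment (Python, standard library only; the interval counts are the ones quoted above):

    import math

    # Value of $1.00 after one year at 100% nominal annual interest,
    # credited n times: (1 + 1/n) ** n
    for n in (1, 2, 4, 12, 52, 365):
        print(f"n = {n:>3}: ${(1 + 1/n) ** n:.9f}")

    # The continuous-compounding limit is e itself.
    print(f"limit  : ${math.e:.9f}")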
Bernoulli trials
The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that the gambler will lose all n bets approaches 1/e. For n = 20, this is already approximately 1/2.789509....
This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning k times out of n trials is:

$$\binom{n}{k} \left(\frac{1}{n}\right)^{k} \left(1 - \frac{1}{n}\right)^{n-k}.$$

In particular, the probability of winning zero times (k = 0) is

$$\left(1 - \frac{1}{n}\right)^{n}.$$
The limit of the above expression, as n tends to infinity, is precisely 1/e.
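A quick numerical check of this limit (a sketch; the probabilities follow directly from the expression above):

    import math

    # Probability of losing all n bets on a one-in-n machine: (1 - 1/n) ** n
    for n in (20, 100, 10_000):
        print(n, (1 - 1 / n) ** n)
    print("1/e =", 1 / math.e)  # ~0.3678794...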
Exponential growth and decay
Exponential growth is a process that increases quantity over time at an ever-increasing rate. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. The law of exponential growth can be written in different but mathematically equivalent forms, by using a different base, for which the number e is a common and convenient choice:

$$x(t) = x_0 e^{kt} = x_0 e^{t/\tau}.$$

Here, x_0 denotes the initial value of the quantity x, k is the growth constant, and τ = 1/k is the time it takes the quantity to grow by a factor of e.
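A minimal sketch of this law (the function name and the sample values of x0 and k are ours, chosen only for illustration):

    import math

    def quantity(x0, k, t):
        # x(t) = x0 * e**(k*t); a negative k gives exponential decay.
        return x0 * math.exp(k * t)

    print(quantity(1.0, 1.0, 1.0))   # grows by a factor of e over each interval 1/k
    print(quantity(1.0, -0.5, 2.0))  # decay: e**(-1) ~ 0.3679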
Standard normal distribution
The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function

$$\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}.$$

The constraint of unit standard deviation (and thus also unit variance) results in the 1/2 in the exponent, and the constraint of unit total area under the curve results in the factor 1/√(2π). This function is symmetric around x = 0, where it attains its maximum value 1/√(2π), and has inflection points at x = ±1.
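A short sketch evaluating this density (the function name is ours):

    import math

    def std_normal_pdf(x):
        # (1 / sqrt(2*pi)) * e**(-x**2 / 2)
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    print(std_normal_pdf(0))                        # maximum: 1/sqrt(2*pi) ~ 0.3989
    print(std_normal_pdf(1) == std_normal_pdf(-1))  # True: symmetric about x = 0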
Derangements
Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem: n guests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that none of the hats gets put into the right box. This probability, denoted by p_n, is:

$$p_n = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + \frac{(-1)^{n}}{n!} = \sum_{k=0}^{n} \frac{(-1)^{k}}{k!}.$$
As n tends to infinity, p_n approaches 1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box is n!/e, rounded to the nearest integer, for every positive n.
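The agreement between the inclusion-exclusion count and the rounded n!/e can be checked directly (a sketch; the helper name is ours):

    import math

    def derangements(n):
        # n! * (1 - 1/1! + 1/2! - ... + (-1)**n / n!), an exact integer
        return sum((-1) ** k * (math.factorial(n) // math.factorial(k))
                   for k in range(n + 1))

    for n in range(1, 10):
        # the two columns agree for every positive n
        print(n, derangements(n), round(math.factorial(n) / math.e))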
Optimal planning problems
The maximum value of x^(1/x) occurs at x = e. Equivalently, for any value of the base b > 1, it is the case that the maximum value of x^(-1) log_b x occurs at x = e (Steiner's problem, discussed below).
This is useful in the problem of a stick of length L that is broken into n equal parts. The value of n that maximizes the product of the lengths is then either

n = ⌊L/e⌋

or

n = ⌈L/e⌉.
The quantity log_2(1/p) is also a measure of information gleaned from an event occurring with probability p (approximately 36.8% when p = 1/e), so that essentially the same optimal division appears in optimal planning problems like the secretary problem.
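A sketch of the stick-breaking rule (names ours), choosing between ⌊L/e⌋ and ⌈L/e⌉:

    import math

    def best_parts(L):
        # The n maximizing (L/n)**n is floor(L/e) or ceil(L/e).
        candidates = [n for n in (math.floor(L / math.e), math.ceil(L / math.e))
                      if n > 0]
        return max(candidates, key=lambda n: (L / n) ** n)

    for L in (5, 10, 20):
        n = best_parts(L)
        print(L, n, (L / n) ** n)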
Asymptotics
The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear:

$$n! \sim \sqrt{2\pi n}\, \left(\frac{n}{e}\right)^{n}.$$

As a consequence,

$$e = \lim_{n \to \infty} \frac{n}{\sqrt[n]{n!}}.$$
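The quality of Stirling's approximation is easy to inspect numerically (a sketch):

    import math

    # n! ~ sqrt(2*pi*n) * (n/e)**n
    for n in (5, 10, 20, 50):
        exact = math.factorial(n)
        approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
        print(n, approx / exact)  # ratio tends to 1 as n grows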
Properties
Calculus
The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms. A general exponential function y = a^x has a derivative, given by a limit:

$$\frac{d}{dx} a^{x} = \lim_{h \to 0} \frac{a^{x+h} - a^{x}}{h} = a^{x} \cdot \left(\lim_{h \to 0} \frac{a^{h} - 1}{h}\right).$$

The parenthesized limit on the right is independent of the variable x. Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity:

$$\frac{d}{dx} e^{x} = e^{x}.$$

Consequently, the exponential function with base e is particularly suited to doing calculus. Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler.
Another motivation comes from considering the derivative of the base-a logarithm (i.e., log_a x), for x > 0:

$$\frac{d}{dx} \log_a x = \lim_{h \to 0} \frac{\log_a(x+h) - \log_a x}{h} = \frac{1}{x} \log_a\!\left(\lim_{u \to 0} (1+u)^{1/u}\right),$$

where the substitution u = h/x was made. The base-a logarithm of e is 1, if a equals e. So symbolically,

$$\frac{d}{dx} \log_e x = \frac{1}{x}.$$
The logarithm with this special base is called the natural logarithm, and is usually denoted as ; it behaves well under differentiation since there is no undetermined limit to carry through the calculations.
Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function a^x equal to a^x, and solve for a. The other way is to set the derivative of the base-a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e.
The Taylor series for the exponential function can be deduced from the facts that the exponential function is its own derivative and that it equals 1 when evaluated at 0:

$$e^{x} = \sum_{n=0}^{\infty} \frac{x^{n}}{n!}.$$

Setting x = 1 recovers the definition of e as the sum of an infinite series.
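Because of the factorials in the denominators, the partial sums converge extremely fast; a sketch:

    import math

    total, term = 0.0, 1.0   # term holds 1/n!
    for n in range(18):
        total += term
        term /= n + 1
    print(total, math.e)     # agree to full double precision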
The natural logarithm function can be defined as the integral from 1 to x of 1/t, and the exponential function can then be defined as the inverse function of the natural logarithm. The number e is the value of the exponential function evaluated at x = 1, or equivalently, the number whose natural logarithm is 1. It follows that e is the unique positive real number such that

$$\int_1^e \frac{1}{t}\,dt = 1.$$
Because e^x is the unique function (up to multiplication by a constant K) that is equal to its own derivative,

$$\frac{d}{dx} e^{x} = e^{x},$$

it is therefore its own antiderivative as well:

$$\int e^{x}\,dx = e^{x} + C.$$

Equivalently, the family of functions

$$y(x) = K e^{x},$$

where K is any real or complex number, is the full solution to the differential equation

$$y' = y.$$
Inequalities
The number e is the unique real number such that

$$\left(1 + \frac{1}{x}\right)^{x} < e < \left(1 + \frac{1}{x}\right)^{x+1}$$

for all positive x.

Also, we have the inequality

$$e^{x} \ge x + 1$$

for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential for which the inequality a^x ≥ x + 1 holds for all x. This is a limiting case of Bernoulli's inequality.
Exponential-like functions
Steiner's problem asks to find the global maximum for the function

$$f(x) = x^{1/x}.$$

This maximum occurs precisely at x = e. (One can check that the derivative of ln f(x) is zero only for this value of x.)
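A numerical spot-check (sample points chosen arbitrarily around e):

    import math

    for x in (2.0, math.e, 3.0, 4.0):
        print(x, x ** (1 / x))   # the value at x = e, ~1.44467, is the largest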
Similarly, x = 1/e is where the global minimum occurs for the function

$$f(x) = x^{x},$$

defined for positive x.
The infinite tetration

$$x^{x^{x^{\cdot^{\cdot^{\cdot}}}}}$$

converges if and only if e^(-e) ≤ x ≤ e^(1/e) (that is, approximately between 0.0660 and 1.4447), shown by a theorem of Leonhard Euler.
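The convergence condition can be probed by iterating the finite towers (a sketch; the function name and iteration depth are ours):

    import math

    def tower(x, depth=1000):
        # Finite stages t -> x**t of the infinite tetration.
        t = x
        for _ in range(depth):
            t = x ** t
        return t

    print(tower(1.44))                   # converges, since 1.44 < e**(1/e) ~ 1.44467
    print(tower(math.exp(1 / math.e)))   # boundary case; the limit is e
    # tower(1.45) grows without bound (overflows), since 1.45 > e**(1/e)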
Number theory
The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. ( | Mathematics | Counting and numbers | null
9640 | https://en.wikipedia.org/wiki/Engine | Engine | An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy.
Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form; thus heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing.
Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion.
Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine).
Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.
Emission/Byproducts
All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen, resulting in small emissions of NOx. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, CO2, a greenhouse gas, is emitted. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine, not a heat engine.
Terminology
The word engine derives from Old French engin, from the Latin ingenium, the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the Industrial Revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses.
In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets.
When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb moto, which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion.
Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel.
A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam).
History
Antiquity
Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times.
According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors.
Medieval
Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour.
In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629.
In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe.
Industrial Revolution
The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation.
As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine.
The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir.
In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft.
Automobiles
The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the thermally more-efficient Diesel engine is used for trucks and buses. However, in recent years, turbocharged Diesel engines have become increasingly popular in automobiles, especially outside of the United States, even for quite small cars.
Horizontally-opposed pistons
In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as “flat” or “boxer” engines due to their shape and low profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles. Opposed four- and six-cylinder engines continue to be used as a power source in small, propeller-driven aircraft.
Advancement
The continued use of internal combustion engines in automobiles is partly due to the improvement of engine control systems, such as on-board computers providing engine management processes, and electronically controlled fuel injection. Forced air induction by turbocharging and supercharging have increased the power output of smaller displacement engines that are lighter in weight and more fuel-efficient at normal cruise power. Similar changes have been applied to smaller Diesel engines, giving them almost the same performance characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine-propelled cars in Europe. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines.
Increasing power
In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S. models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements.
Combustion efficiency
Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around .
Engine configuration
Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. Four cylinders and power ratings from 19 to 120 hp (14 to 90 kW) were the norm in a majority of the models. Several three-cylinder, two-stroke-cycle models were built while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other to create the W shape sharing the same crankshaft.
The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW, and can use up to 250 tonnes of fuel per day.
Types
An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs.
Heat engine
Combustion engine
Combustion engines are heat engines driven by the heat of a combustion process.
Internal combustion engine
The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons or turbine blades or a nozzle, and by moving it over a distance, generates mechanical work.
External combustion engine
An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion of an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine).
"Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; but are not then strictly classed as external combustion engines, but as external thermal engines.
The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas.
Air-breathing combustion engines
Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines.
A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly.
Examples
Typical air-breathing engines include:
Reciprocating engine
Steam engine
Gas turbine
Airbreathing jet engine
Turbo-propeller engine
Pulse detonation engine
Pulse jet
Ramjet
Scramjet
Liquid air cycle engine/Reaction Engines SABRE.
Environmental effects
The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution-producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and a large battery bank; these are starting to become a popular option because of growing environmental awareness.
Air quality
Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide: 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, traces of other compounds such as fuel additives and lubricants, also halogen and metallic compounds, and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them. Also, resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world is contributing to the global greenhouse effect – a primary concern regarding global warming.
Non-combusting heat engines
Some engines convert heat from noncombustive processes into mechanical work, for example a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, or a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine.
Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called "TA engines") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices.
Stirling engines can be another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha type Stirling engine, whereby gas flows, via a recuperator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the recuperator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder.
Non-thermal chemically powered motor
Non-thermal motors usually are powered by a chemical reaction, but are not heat engines. Examples include:
Molecular motor – motors found in living things
Synthetic molecular motor.
Electric motor
An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical.
Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application.
The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks.
To reduce the electric energy consumption from motors and their associated carbon footprints, various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency.
By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor.
Physically powered motor
Some motors are powered by potential or kinetic energy, for example some funiculars, gravity plane and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands.
Historic military siege engines, such as large catapults, trebuchets, and (to some extent) battering rams, were powered by potential energy.
Pneumatic motor
A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or a piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. Pneumatic motors have found widespread success in the hand-held tool industry and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option in the transportation industry.
Hydraulic motor
A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery.
Hybrid
Some motor units can have multiple sources of energy. For example, a plug-in hybrid electric vehicle's electric motor could source electricity from either a battery or from fossil fuels inputs via an internal combustion engine and a generator.
Performance
The following are used in the assessment of the performance of an engine.
Speed
Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is typically measured in revolutions per minute (rpm).
Thrust
Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it.
Torque
Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its distance from the shaft.
Power
Power is the measure of how fast work is done.
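For rotating machinery, the last two quantities are linked: shaft power equals torque multiplied by angular speed. A worked example with purely illustrative numbers (not data for any particular engine):

    import math

    torque_nm = 200.0               # newton-metres
    rpm = 3000.0
    omega = rpm * 2 * math.pi / 60  # angular speed in radians per second
    power_w = torque_nm * omega
    print(power_w / 1000)           # ~62.8 kW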
Efficiency
Efficiency is a proportion of useful energy output compared to total input.
Sound levels
Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air.
Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets.
Engines by use
Particularly notable kinds of engines include:
Aircraft engine
Automobile engine
Model engine
Motorcycle engine
Marine propulsion engines such as Outboard motor
Non-road engine is the term used to define engines that are not used by vehicles on roadways.
Railway locomotive engine
Spacecraft propulsion engines such as Rocket engine
Traction engine
| Technology | Tools and machinery | null |
9649 | https://en.wikipedia.org/wiki/Energy | Energy | Energy () is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy.
Forms
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
History
The word energy derives from the Ancient Greek ἐνέργεια (energeia, 'activity, operation'), which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
Units of measure
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string was equal to the internal energy gained by the water through friction with the paddle.
Scientific use
Classical mechanics
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance:

$$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s}.$$

This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
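A numerical sketch of such a line integral (the force field and path here are assumptions made up for illustration):

    def F(x, y):
        return (-y, x)  # hypothetical force field, in newtons

    def work(path, steps=10_000):
        # Midpoint-rule approximation of W = integral of F . ds
        # along path(t) for t in [0, 1].
        W = 0.0
        x0, y0 = path(0.0)
        for i in range(1, steps + 1):
            x1, y1 = path(i / steps)
            fx, fy = F((x0 + x1) / 2, (y0 + y1) / 2)
            W += fx * (x1 - x0) + fy * (y1 - y0)
            x0, y0 = x1, y1
        return W

    print(work(lambda t: (t, t)))  # along y = x the two terms cancel: W = 0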
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Chemistry
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT); that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
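A sketch of this population factor (the activation energy below is an arbitrary illustrative value, not a property of any particular reaction):

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def boltzmann_factor(E, T):
        # Probability factor e**(-E / (k*T)) from the Arrhenius equation.
        return math.exp(-E / (k_B * T))

    E = 0.5 * 1.602176634e-19        # assume ~0.5 eV activation energy, in joules
    print(boltzmann_factor(E, 300))  # ~4e-9 at room temperature
    print(boltzmann_factor(E, 350))  # ~6e-8: modest heating speeds the reaction ~16-fold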
Biology
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
C57H110O6 + 81½ O2 → 57 CO2 + 55 H2O
and some of the energy is used to convert ADP into ATP:

ADP + HPO4^(2−) → ATP + H2O
The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
Earth sciences
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
Cosmology
In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
Quantum mechanics
In quantum mechanics, energy is defined in terms of the energy operator
(Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is the Planck constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
Relativity
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
E0 = m0c^2,
where
m0 is the rest mass of the body,
c is the speed of light in vacuum,
E0 is the rest energy.
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
Transformation
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
Ep(initial) + Ek(initial) = Ep(final) + Ek(final)
The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = (1/2)mv^2 (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
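A small numerical check makes this concrete. The following minimal Python sketch (the mass, length, starting angle, and step sizes are illustrative assumptions, not from the source) integrates an ideal frictionless pendulum and confirms that Ep + Ek stays essentially constant:

```python
import math

# Ideal (frictionless) pendulum integrated with semi-implicit Euler.
g = 9.81                    # gravitational acceleration, m/s^2
m = 1.0                     # bob mass, kg (assumed example value)
length = 2.0                # rod length, m (assumed example value)
theta = math.radians(30)    # initial angle
omega = 0.0                 # initial angular velocity
dt = 1e-4                   # time step, s

def energies(theta, omega):
    h = length * (1 - math.cos(theta))       # height above the lowest point
    ep = m * g * h                           # potential energy, Ep = mgh
    ek = 0.5 * m * (length * omega) ** 2     # kinetic energy, Ek = (1/2)mv^2
    return ep, ek

e0 = sum(energies(theta, omega))
for _ in range(100_000):                     # ten seconds of simulated motion
    omega -= (g / length) * math.sin(theta) * dt
    theta += omega * dt

ep, ek = energies(theta, omega)
print(f"initial total energy: {e0:.6f} J")
print(f"final   total energy: {ep + ek:.6f} J")  # equal up to integration error
```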
Conservation of energy and mass in transformation
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905) quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c^2 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
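The figures in this paragraph follow directly from E = mc^2; a one-line check using standard constants:

```python
# Back-of-envelope check of the rest-energy figures above.
c = 2.998e8                 # speed of light, m/s
m = 1.0                     # rest mass, kg
E = m * c**2                # rest energy in joules: E = mc^2
megaton_tnt = 4.184e15      # joules per megaton of TNT (conventional value)
print(f"E = {E:.3e} J  ~= {E / megaton_tnt:.1f} megatons of TNT")
# -> E = 8.988e+16 J  ~= 21.5 megatons of TNT
```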
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
Reversible and non-reversible transformations
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
Conservation of energy
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
ΔE Δt ≥ ħ/2
(where ħ is the reduced Planck constant),
which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
Energy transfer
Closed systems
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:
ΔE = W + Q,
where ΔE is the amount of energy transferred, W represents the work done on or by the system, and Q represents the heat flow into or out of the system. As a simplification, the heat term, Q, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,
ΔE = W
This simplified equation is the one used to define the joule, for example.
Open systems
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E_matter, one may write
ΔE = W + Q + E_matter
Thermodynamics
Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
First law of thermodynamics
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
dU = T dS − P dV,
where the first term on the right, T dS, is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side, −P dV, is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
ΔU = Q + W,
where Q is the heat supplied to the system and W is the work applied to the system.
Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
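The even kinetic/potential split can be verified numerically. The sketch below (spring constant, mass, and amplitude are assumed example values) time-averages both energies of the exact solution x(t) = A cos(ωt) over one period:

```python
import math

# Time-average kinetic and potential energy of a harmonic oscillator
# x(t) = A*cos(w*t) over one full period.
m, k, A = 1.0, 4.0, 0.5              # mass, spring constant, amplitude (assumed)
w = math.sqrt(k / m)                 # angular frequency
T = 2 * math.pi / w                  # period
N = 100_000                          # number of samples over one period
ke_avg = pe_avg = 0.0
for i in range(N):
    t = i * T / N
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    ke_avg += 0.5 * m * v * v / N    # kinetic energy sample, averaged
    pe_avg += 0.5 * k * x * x / N    # potential energy sample, averaged

print(ke_avg, pe_avg)  # both ~ (1/4)*k*A^2: the energy splits evenly on average
```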
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
Expected value
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality.
The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.
The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as E or 𝔼.
History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.
He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.
In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise "De ratiociniis in ludo aleæ" on probability theory in 1657 (see Huygens (1657)), just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability.
In the foreword to his treatise, Huygens wrote:
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.
Etymology
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:
More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:
Notations
The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique.
When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as (upright), (italic), or (in blackboard bold), while a variety of bracket notations (such as , , and ) are all used.
Another popular notation is μ_X. ⟨X⟩, ⟨X⟩_av, and X̄ are commonly used in physics. M(X) is used in Russian-language literature.
Definition
As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language.
Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]_i = E[X_i]. Similarly, one may define the expected value of a random matrix X with components X_ij by E[X]_ij = E[X_ij].
Random variables with finitely many outcomes
Consider a random variable X with a finite list x_1, ..., x_k of possible outcomes, each of which (respectively) has probability p_1, ..., p_k of occurring. The expectation of X is defined as
E[X] = x_1 p_1 + x_2 p_2 + ... + x_k p_k.
Since the probabilities must satisfy p_1 + ... + p_k = 1, it is natural to interpret E[X] as a weighted average of the x_i values, with weights given by their probabilities p_i.
In the special case that all possible outcomes are equiprobable (that is, p_1 = ... = p_k = 1/k), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.
Examples
Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of X is
E[X] = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 3.5.
If one rolls the die n times and computes the average (arithmetic mean) of the results, then as n grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
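Both the exact weighted average and the law-of-large-numbers behaviour can be reproduced in a few lines of Python (a minimal sketch; the sample size is an arbitrary choice):

```python
import random

# Exact expectation of a fair die: equal weights of 1/6 on each face.
values = [1, 2, 3, 4, 5, 6]
expectation = sum(v * (1 / 6) for v in values)   # E[X] = 3.5

# Simulation: the sample mean of many rolls converges toward E[X].
rolls = [random.choice(values) for _ in range(1_000_000)]
sample_mean = sum(rolls) / len(rolls)
print(expectation, sample_mean)                  # 3.5 and something near 3.5
```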
The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
E[gain from $1 bet] = −$1 · (37/38) + $35 · (1/38) = −$1/19.
That is, the expected value to be won from a $1 bet is −$1/19. Thus, in 190 bets, the net loss will probably be about $10.
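The roulette figure is the same weighted-average computation, now with unequal weights; a direct check:

```python
# Expected profit of a $1 "straight up" bet in American roulette (38 pockets).
p_win = 1 / 38
expected_profit = 35 * p_win + (-1) * (1 - p_win)
print(expected_profit)         # -1/19 ~= -$0.0526 per $1 bet
print(190 * expected_profit)   # ~ -$10 expected over 190 bets
```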
Random variables with countably infinitely many outcomes
Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that
E[X] = Σ_{i=1}^{∞} x_i p_i,
where x_1, x_2, ... are the possible outcomes of the random variable X and p_1, p_2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.
However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely.
For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation.
Examples
Suppose x_i = i and p_i = c / (i · 2^i) for i = 1, 2, 3, ..., where c = 1/ln 2 is the scaling factor which makes the probabilities sum to 1. Then we have
E[X] = Σ_i x_i p_i = Σ_i c/2^i = c ≈ 1.44.
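Partial sums confirm the value numerically (a sketch assuming the series just described, with p_i = c/(i·2^i) and c = 1/ln 2):

```python
import math

# Partial sums for the countable-outcome example above.
c = 1 / math.log(2)
prob_sum = sum(c / (i * 2**i) for i in range(1, 60))         # -> 1.0
expectation = sum(i * c / (i * 2**i) for i in range(1, 60))  # -> c ~= 1.4427
print(prob_sum, expectation)
```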
Random variables with density
Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral
E[X] = ∫_{−∞}^{∞} x f(x) dx.
A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors.
Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x^2 + π^2)^{−1}. It is straightforward to compute in this case that
∫_{−a}^{b} x f(x) dx = (1/2) ln[(b^2 + π^2) / (a^2 + π^2)].
The limit of this expression as a → ∞ and b → ∞ does not exist: if the limits are taken so that a = b, then the limit is zero, while if the constraint 2a = b is taken, then the limit is ln 2.
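The order dependence is easy to observe numerically. The sketch below (a simple midpoint-rule integration; step counts and cutoffs are arbitrary choices) evaluates the truncated integral for a = b and for b = 2a:

```python
import math

# Truncated "expectation" integrals for the Cauchy density f(x) = 1/(x^2 + pi^2).
def partial_integral(a, b, n=200_000):
    # Midpoint rule for the integral of x*f(x) over [-a, b].
    h = (b + a) / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        total += x / (x * x + math.pi ** 2) * h
    return total

for scale in (10.0, 100.0, 1000.0):
    print(partial_integral(scale, scale),      # a = b: tends to 0
          partial_integral(scale, 2 * scale))  # b = 2a: tends to ln 2 ~= 0.693
```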
To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X.
Arbitrary real-valued random variables
All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral
E[X] = ∫_Ω X dP.
Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied:
there is a nonnegative measurable function f on the real line such that P(X ∈ A) = ∫_A f(x) dx for any Borel set A, in which the integral is Lebesgue.
the cumulative distribution function of X is absolutely continuous.
for any Borel set A of real numbers with Lebesgue measure equal to zero, the probability of X being valued in A is also equal to zero
for any positive number ε there is a positive number δ such that: if A is a Borel set with Lebesgue measure less than δ, then the probability of X being valued in A is less than ε.
These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that
E[X] = ∫_{−∞}^{∞} x f(x) dx
for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable.
The expected value of any real-valued random variable X can also be defined on the graph of its cumulative distribution function F by a nearby equality of areas. In fact, E[X] = μ with μ a real number if and only if the two surfaces in the x–y-plane, described by
x ≤ μ, 0 ≤ y ≤ F(x)  and  x ≥ μ, F(x) ≤ y ≤ 1,
respectively, have the same finite area, i.e. if
∫_{−∞}^{μ} F(x) dx = ∫_{μ}^{∞} (1 − F(x)) dx
and both improper Riemann integrals converge. Finally, this is equivalent to the representation
E[X] = ∫_{0}^{∞} (1 − F(x)) dx − ∫_{−∞}^{0} F(x) dx,
also with convergent integrals.
Infinite expected values
Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes x_i = 2^i, with associated probabilities p_i = 2^{−i}, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has
E[X] = Σ_{i=1}^{∞} x_i p_i = 2·(1/2) + 4·(1/4) + 8·(1/8) + ... = 1 + 1 + 1 + ... = ∞.
It is natural to say that the expected value equals +∞.
There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X⁺ = max(X, 0) and X⁻ = −min(X, 0). These are nonnegative random variables, and it can be directly checked that X = X⁺ − X⁻. Since E[X⁺] and E[X⁻] are both then defined as either nonnegative numbers or +∞, it is then natural to define:
E[X] = E[X⁺] − E[X⁻].
According to this definition, E[X] exists and is finite if and only if E[X⁺] and E[X⁻] are both finite. Due to the formula |X| = X⁺ + X⁻, this is the case if and only if E[|X|] is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations.
In the case of the St. Petersburg paradox, one has X⁻ = 0 and so E[X] = E[X⁺] = +∞, as desired.
Suppose the random variable X takes values 1, −2, 3, −4, ... with respective probabilities c/1^2, c/2^2, c/3^2, c/4^2, ..., where c = 6/π^2 is a normalizing constant ensuring the probabilities sum to 1. Then it follows that X⁺ takes value 2k−1 with probability c/(2k−1)^2 for each positive integer k, and takes value 0 with remaining probability. Similarly, X⁻ takes value 2k with probability c/(2k)^2 for each positive integer k and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X⁺] = ∞ and E[X⁻] = ∞ (see Harmonic series). Hence, in this case the expectation of X is undefined.
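Partial sums exhibit the harmonic-series divergence of both parts (a sketch assuming the values and probabilities just described):

```python
import math

# Partial sums of E[X+] and E[X-] for the example above: both grow without
# bound like a harmonic series, so E[X] = E[X+] - E[X-] is undefined.
c = 6 / math.pi**2
for n in (10**2, 10**4, 10**6):
    e_plus = sum((2*k - 1) * c / (2*k - 1)**2 for k in range(1, n))
    e_minus = sum((2*k) * c / (2*k)**2 for k in range(1, n))
    print(n, round(e_plus, 3), round(e_minus, 3))   # both keep increasing
```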
Similarly, the Cauchy distribution, as discussed above, has undefined expectation.
Expected values of common distributions
The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.
Properties
The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like X ≥ 0 is true almost surely, when the probability measure attributes zero-mass to the complementary event {X < 0}.
Non-negativity: If X ≥ 0 (a.s.), then E[X] ≥ 0.
Linearity of expectation: The expected value operator (or expectation operator) E is linear in the sense that, for any random variables X and Y and a constant a,
E[X + Y] = E[X] + E[Y] and E[aX] = a E[X],
whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for N random variables X_i and constants a_i (1 ≤ i ≤ N), we have
E[Σ_{i=1}^{N} a_i X_i] = Σ_{i=1}^{N} a_i E[X_i].
If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space.
Monotonicity: If X ≤ Y (a.s.), and both E[X] and E[Y] exist, then E[X] ≤ E[Y]. Proof follows from the linearity and the non-negativity property for Z = Y − X, since Z ≥ 0 (a.s.).
Non-degeneracy: If E[|X|] = 0, then X = 0 (a.s.).
If X = Y (a.s.), then E[X] = E[Y]. In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
If X = c (a.s.) for some real number c, then E[X] = c. In particular, for a random variable X with well-defined expectation, E[E[X]] = E[X]. A well-defined expectation implies that there is one number, or rather, one constant that defines the expected value. It follows that the expectation of this constant is just the original expected value.
As a consequence of the formula |X| = X⁺ + X⁻ as discussed above, together with the triangle inequality, it follows that for any random variable X with well-defined expectation, one has |E[X]| ≤ E[|X|].
Let 1_A denote the indicator function of an event A; then E[1_A] is given by the probability of A. This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above.
Formulas in terms of CDF: If F(x) is the cumulative distribution function of a random variable X, then
E[X] = ∫_{−∞}^{∞} x dF(x),
where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue–Stieltjes. As a consequence of integration by parts as applied to this representation of E[X], it can be proved that
E[X] = ∫_{0}^{∞} (1 − F(x)) dx − ∫_{−∞}^{0} F(x) dx,
with the integrals taken in the sense of Lebesgue. As a special case, for any random variable X valued in the nonnegative integers {0, 1, 2, 3, ...}, one has
E[X] = Σ_{n=0}^{∞} P(X > n),
where P denotes the underlying probability measure.
Non-multiplicativity: In general, the expected value is not multiplicative, i.e. E[XY] is not necessarily equal to E[X]·E[Y]. If X and Y are independent, then one can show that E[XY] = E[X] E[Y]. If the random variables are dependent, then generally E[XY] ≠ E[X] E[Y], although in special cases of dependency the equality may hold.
Law of the unconscious statistician: The expected value of a measurable function g(X) of X, given that X has a probability density function f(x), is given by the inner product of f and g:
E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.
This formula also holds in the multidimensional case, when g is a function of several random variables, and f is their joint density.
Inequalities
Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that
P(X ≥ a) ≤ E[X]/a.
If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable (X − E[X])^2 to obtain Chebyshev's inequality
P(|X − E[X]| ≥ a) ≤ Var[X]/a^2,
where Var is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.
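The dice figures quoted above can be checked directly:

```python
# Chebyshev's inequality for a fair die: E[X] = 3.5, Var[X] = 35/12.
mean = 3.5
var = sum((x - mean) ** 2 for x in range(1, 7)) / 6   # 35/12 ~= 2.917
# "Rolling between 1 and 6" means |X - 3.5| < 2.5, so Chebyshev bounds
# the complementary event: P(|X - 3.5| >= 2.5) <= Var / 2.5^2.
bound = var / 2.5**2                                  # ~0.467
print(1 - bound)   # >= 0.533: at least ~53%, versus the true probability 1.0
```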
The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory.
Jensen's inequality: Let f be a convex function and X a random variable with finite expectation. Then
f(E[X]) ≤ E[f(X)].
Part of the assertion is that the negative part of f(X) has finite expectation, so that the right-hand side is well-defined (possibly infinite). Convexity of f can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that f(x) = |x|^{t/s} for positive numbers s < t, one obtains the Lyapunov inequality
(E[|X|^s])^{1/s} ≤ (E[|X|^t])^{1/t}.
This can also be proved by the Hölder inequality. In measure theory, this is particularly notable for proving the inclusion L^t ⊆ L^s of L^p spaces, in the special case of probability spaces.
Hölder's inequality: if p and q are numbers satisfying 1/p + 1/q = 1, then
E[|XY|] ≤ (E[|X|^p])^{1/p} (E[|Y|^q])^{1/q}
for any random variables X and Y. The special case of p = q = 2 is called the Cauchy–Schwarz inequality, and is particularly well-known.
Minkowski inequality: given any number p ≥ 1, for any random variables X and Y with E[|X|^p] and E[|Y|^p] both finite, it follows that E[|X + Y|^p] is also finite and
(E[|X + Y|^p])^{1/p} ≤ (E[|X|^p])^{1/p} + (E[|Y|^p])^{1/p}.
The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces.
Expectations under convergence of random variables
In general, it is not the case that E[X_n] → E[X] even if X_n → X pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let U be a random variable distributed uniformly on [0, 1]. For n ≥ 1, define a sequence of random variables
X_n = n · 1{U ∈ (0, 1/n)},
with 1{A} being the indicator function of the event A. Then, it follows that X_n → 0 pointwise. But, E[X_n] = n · P(U ∈ (0, 1/n)) = 1 for each n. Hence,
lim_{n→∞} E[X_n] = 1 ≠ 0 = E[lim_{n→∞} X_n].
Analogously, for a general sequence of random variables {Y_n}, the expected value operator is not σ-additive, i.e.
E[Σ_{n=0}^{∞} Y_n] ≠ Σ_{n=0}^{∞} E[Y_n].
An example is easily obtained by setting Y_0 = X_1 and Y_n = X_{n+1} − X_n for n ≥ 1, where X_n is as in the previous example.
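A quick simulation of the sequence X_n described above shows why the interchange fails: every E[X_n] is near 1, even though each sample path is eventually 0 (a sketch; the seed and trial counts are arbitrary choices):

```python
import random

# X_n = n if U < 1/n, else 0, with U uniform on [0, 1].
# Each X_n has E[X_n] = 1, yet X_n -> 0 pointwise for every fixed U > 0.
random.seed(0)
trials = 1_000_000
for n in (10, 1000, 100_000):
    total = sum(n if random.random() < 1 / n else 0 for _ in range(trials))
    print(n, total / trials)   # the sample mean stays near 1 for every n
```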
A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.
Monotone convergence theorem: Let {X_n} be a sequence of random variables, with 0 ≤ X_n ≤ X_{n+1} (a.s.) for each n. Furthermore, let X_n → X pointwise. Then, the monotone convergence theorem states that
lim_{n→∞} E[X_n] = E[X].
Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let {X_i} be non-negative random variables. It follows from the monotone convergence theorem that
E[Σ_{i=0}^{∞} X_i] = Σ_{i=0}^{∞} E[X_i].
Fatou's lemma: Let {X_n ≥ 0} be a sequence of non-negative random variables. Fatou's lemma states that
E[lim inf_n X_n] ≤ lim inf_n E[X_n].
Corollary. Let X_n ≥ 0 with E[X_n] ≤ C for all n. If X_n → X (a.s.), then E[X] ≤ C. Proof is by observing that X = lim inf_n X_n (a.s.) and applying Fatou's lemma.
Dominated convergence theorem: Let {X_n} be a sequence of random variables. If X_n → X pointwise (a.s.) and |X_n| ≤ Y (a.s.) for some random variable Y with E[Y] < ∞, then, according to the dominated convergence theorem, E[X] exists and is finite,
lim_{n→∞} E[X_n] = E[X],
and lim_{n→∞} E[|X_n − X|] = 0.
Uniform integrability: In some cases, the equality lim_n E[X_n] = E[lim_n X_n] holds when the sequence {X_n} is uniformly integrable.
Relationship with characteristic function
The probability density function f_X of a scalar random variable X is related to its characteristic function φ_X by the inversion formula:
f_X(x) = (1/2π) ∫_ℝ e^{−itx} φ_X(t) dt.
For the expected value of g(X) (where g: ℝ → ℝ is a Borel function), we can use this inversion formula to obtain
E[g(X)] = (1/2π) ∫_ℝ g(x) [ ∫_ℝ e^{−itx} φ_X(t) dt ] dx.
If E[g(X)] is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,
E[g(X)] = (1/2π) ∫_ℝ G(t) φ_X(t) dt,
where
G(t) = ∫_ℝ g(x) e^{−itx} dx
is the Fourier transform of g(x). The expression for E[g(X)] also follows directly from the Plancherel theorem.
Uses and applications
The expectation of a random variable plays an important role in a variety of contexts.
In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.
For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X ∈ A) = E[1_A(X)], where 1_A(x) is the indicator function of the set A.
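As a minimal illustration of this Monte Carlo idea (the distribution and the event here are arbitrary assumptions, not from the source):

```python
import random

# Estimate P(X in A) = E[1_A(X)] by averaging the indicator over samples.
# Here X ~ Uniform(0, 1) and A = [0, 0.3], so the true probability is 0.3.
random.seed(1)
n = 1_000_000
hits = sum(1 for _ in range(n) if random.random() <= 0.3)
print(hits / n)   # close to 0.3
```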
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
Var(X) = E[X^2] − (E[X])^2.
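A direct check of this formula on a fair die:

```python
# Computational formula for the variance, checked on a fair six-sided die.
values = range(1, 7)
e_x = sum(v / 6 for v in values)        # E[X]   = 3.5
e_x2 = sum(v * v / 6 for v in values)   # E[X^2] = 91/6 ~= 15.167
print(e_x2 - e_x**2)                    # Var(X) = 35/12 ~= 2.917
```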
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator Â operating on a quantum state vector |ψ⟩ is written as ⟨Â⟩ = ⟨ψ|Â|ψ⟩. The uncertainty in Â can be calculated by the formula (ΔA)^2 = ⟨Â^2⟩ − ⟨Â⟩^2.
Electric light
An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic which secures the lamp in the socket of a light fixture, which is often called a "lamp" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount.
The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor.
The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles.
History
Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. In 1799–1800, Alessandro Volta created the voltaic pile, the first electric battery. Current from these batteries could heat copper wire to incandescence. Vasily Vladimirovich Petrov developed the first persistent electric arc in 1802, and English chemist Humphry Davy gave a practical demonstration of an arc light in 1806.
It took more than a century of continuous and incremental improvement, including numerous designs, patents, and resulting intellectual property disputes, to get from these early experiments to commercially produced incandescent light bulbs in the 1920s.
In 1840, Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating one of the world's first electric light bulbs. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although it was an efficient design, the cost of the platinum made it impractical for commercial use.
William Greener, an English inventor, made significant contributions to early electric lighting with his lamp in 1846 (patent specification 11076), laying the groundwork for future innovations such as those by Thomas Edison.
The late 1870s and 1880s were marked by intense competition and innovation, with inventors like Joseph Swan in the UK and Thomas Edison in the US independently developing functional incandescent lamps. Swan's bulbs, based on designs by William Staite, were successful, but the filaments were too thick. Edison worked to create bulbs with thinner filaments, leading to a better design. The rivalry between Swan and Edison eventually led to a merger, forming the Edison and Swan Electric Light Company. By the early twentieth century these had completely replaced arc lamps.
The turn of the century saw further improvements in bulb longevity and efficiency, notably with the introduction of the tungsten filament by William D. Coolidge, who applied for a patent in 1912. This innovation became a standard for incandescent bulbs for many years.
In 1910, Georges Claude introduced the first neon light, paving the way for neon signs which would become ubiquitous in advertising.
In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric's Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, "A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public."
The first practical LED arrived in 1962.
U.S. transition to LED bulbs
In the United States, incandescent light bulbs including halogen bulbs stopped being sold as of August 1, 2023, because they do not meet minimum lumens per watt performance metrics established by the U.S. Department of Energy. Compact fluorescent bulbs are also banned despite their lumens per watt performance because of their toxic mercury that can be released into the home if broken and widespread problems with proper disposal of mercury-containing bulbs.
Types
Incandescent
In its modern form, the incandescent light bulb consists of a coiled filament of tungsten sealed in a globular glass chamber, either a vacuum or full of an inert gas such as argon. When an electric current is connected, the tungsten is heated to around 2,000 to 3,300 K and glows, emitting light that approximates a continuous spectrum.
Incandescent bulbs are highly inefficient, in that just 2–5% of the energy consumed is emitted as visible, usable light. The remaining 95% is lost as heat. In warmer climates, the emitted heat must then be removed, putting additional pressure on ventilation or air conditioning systems. In colder weather, the heat byproduct has some value, and has been successfully harnessed for warming in devices such as heat lamps. Incandescent bulbs are nonetheless being phased out in favor of technologies like CFLs and LED bulbs in many countries due to their low energy efficiency. The European Commission estimated in 2012 that a complete ban on incandescent bulbs would contribute 5 to 10 billion euros to the economy and save 15 million metric tonnes of carbon dioxide emissions.
Halogen
Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire.
Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. Also, they have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout their life.
Fluorescent
Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The insides of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them.
LED
The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. Since the 2000s, efficacy and output have risen to the point where LEDs are now used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life. Some lasers have been adapted as an alternative to LEDs to provide highly focused illumination.
Carbon arc
Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight.
Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large building and street lighting until it was superseded in the early 20th century by the incandescent light. Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II.
Discharge
A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include neon, argon, xenon, sodium, metal halides, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term "arc lamp" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp.
Some lamp types contain a small amount of neon, which permits striking at normal running voltage with no external ignition circuitry. Low-pressure sodium lamps operate this way. The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults.
The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered out, unlike light with broadband or continuous spectra.
Characteristics
Form factor
Many lamp units, or light bulbs, are specified in standardized shape codes and socket names. Incandescent bulbs and their retrofit replacements are often specified as "A19/A60 E26/E27", a common size for those kinds of light bulbs. In this example, the "A" parameters describe the bulb size and shape within the A-series light bulb while the "E" parameters describe the Edison screw base size and thread characteristics.
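These codes can be unpacked mechanically: in American shape codes such as A19 the number is the bulb's maximum diameter in eighths of an inch, while in Edison screw designations such as E26 it is the thread's major diameter in millimetres. The small Python helper below is an illustrative sketch of that convention.

```python
# Decoding simple lamp form-factor codes (illustrative helper).
def decode(code: str) -> str:
    family, size = code[0].upper(), int(code[1:])
    if family == "A":  # A-series shape: size in eighths of an inch
        return f"A-series bulb, {size}/8 in = {size / 8 * 25.4:.0f} mm diameter"
    if family == "E":  # Edison screw base: size in millimetres
        return f"Edison screw base, {size} mm thread diameter"
    return f"unrecognized code: {code}"

print(decode("A19"))  # A-series bulb, 19/8 in = 60 mm diameter
print(decode("E26"))  # Edison screw base, 26 mm thread diameter
```

The 60 mm result is why A19 and A60 name the same bulb in imperial and metric terms.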
Comparison parameters
Common comparison parameters include:
Luminous flux (in lumens)
Energy consumption (in watts)
Luminous efficacy (in lumens per watt)
Color temperature (in kelvins)
Less common parameters include color rendering index (CRI).
Life expectancy
Life expectancy for many types of lamp is defined as the number of hours of operation at which 50% of them fail, that is, the median life of the lamps. Production tolerances as low as 1% can create a variance of 25% in lamp life, so in general some lamps will fail well before the rated life expectancy, and some will last much longer. For LEDs, lamp life is defined as the operation time at which 50% of lamps have experienced a 70% decrease in light output. In the 1920s, the Phoebus cartel formed in an attempt to reduce the life of electric light bulbs, an early example of planned obsolescence.
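The median definition is easy to state computationally: the rated life of a batch is the point by which half of its lamps have failed. The sketch below uses invented lifetimes to illustrate this and the spread around the rating.

```python
# Median ("50% failure") life of a batch of lamps; sample data invented.
from statistics import median

lifetimes_h = [620, 745, 810, 905, 980, 1015, 1100, 1240, 1370, 1600]
rated_life = median(lifetimes_h)
print(f"rated (median) life: {rated_life:.0f} h")  # 998 h
early = sum(1 for t in lifetimes_h if t < rated_life)
print(f"{early} of {len(lifetimes_h)} lamps failed before the rated life")
```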
Some types of lamp are also sensitive to switching cycles. Rooms with frequent switching, such as bathrooms, can expect much shorter lamp life than what is printed on the box. Compact fluorescent lamps are particularly sensitive to switching cycles.
Uses
The total amount of artificial light (especially from street light) is sufficient for cities to be easily visible at night from the air, and from space. External lighting grew at a rate of 3–6 percent per year over the latter half of the 20th century and is the major source of light pollution, which burdens astronomers and others; 80% of the world's population lives in areas with night-time light pollution. Light pollution has been shown to have a negative effect on some wildlife.
Electric lamps can be used as heat sources, for example in incubators, as infrared lamps in fast-food restaurants, and in toys such as the Kenner Easy-Bake Oven.
Lamps can also be used for light therapy to deal with issues such as vitamin D deficiency, skin conditions such as acne and dermatitis, skin cancers, and seasonal affective disorder. Lamps emitting a specific frequency of blue light are also used to treat neonatal jaundice; the treatment, initially undertaken only in hospitals, can now be conducted at home.
Electric lamps can also be used as grow lights to aid plant growth, especially in indoor hydroponics and for aquatic plants; recent research has investigated the most effective types of light for plant growth.
Due to their nonlinear resistance characteristics, tungsten filament lamps have long been used as fast-acting thermistors in electronic circuits (see the sketch after this list). Popular uses have included:
Stabilization of sine wave oscillators
Protection of tweeters in loudspeaker enclosures; excess current that would damage the tweeter instead heats the lamp, whose rising resistance limits the current.
Automatic volume control in telephones
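A tungsten filament works in these roles because its resistance rises steeply as it heats, opposing increases in current. A rough model is sketched below; the exponent of about 1.2 is a common approximation for tungsten's resistivity-temperature behaviour, and the cold resistance is invented.

```python
# Rough model of tungsten filament resistance versus temperature.
def tungsten_resistance(r_cold_ohm: float, t_kelvin: float,
                        t_ambient_k: float = 293.0) -> float:
    """Approximate resistance, scaling roughly as T**1.2 for tungsten."""
    return r_cold_ohm * (t_kelvin / t_ambient_k) ** 1.2

r20 = 30.0  # cold (room-temperature) resistance of a hypothetical lamp
for t in (293, 1000, 2000, 2700):
    print(f"{t:>5} K: {tungsten_resistance(r20, t):6.0f} ohm")
# about a 14x rise from room temperature to operating temperature, which
# is what lets the lamp act as a self-adjusting series resistance
```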
Cultural symbolism
In Western culture, a lightbulb, in particular the appearance of an illuminated lightbulb above a person's head, signifies sudden inspiration.
A stylized depiction of a light bulb features as the logo of the Turkish AK Party.
| Technology | Electronics | null |
9663 | https://en.wikipedia.org/wiki/Electronics | Electronics | Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. It is a subfield of physics and electrical engineering which uses active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog signals to digital signals.
Electronic devices have hugely influenced the development of many aspects of modern society, such as telecommunications, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which in response to global demand continually produces ever-more sophisticated electronic devices and circuits. The semiconductor industry is one of the largest and most profitable sectors in the global economy, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017.
History and development
The identification of the electron in 1897 by Sir Joseph John Thomson, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by John Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages, such as radio signals from a radio antenna, practicable.
Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.
The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947.
However, vacuum tubes continued to play a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s.
Since then, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.
In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
The MOSFET was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment.
As the complexity of circuits grew, problems arose. One problem was the size of the circuit: a complex circuit such as a computer depended on speed, and if the components were large, the wires interconnecting them had to be long, so electric signals took time to travel through the circuit, slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available.
Subfields
Analog electronics
Audio electronics
Avionics
Bioelectronics
Circuit design
Digital electronics
Electronic components
Embedded systems
Integrated circuits
Microelectronics
Nanoelectronics
Optoelectronics
Power electronics
Printed circuit boards
Semiconductor devices
Sensors
Telecommunications
Devices and components
An electronic component is any component in an electronic system, either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components include capacitors, inductors, and resistors, while active components include semiconductor devices such as transistors and thyristors, which control current flow at the electron level.
Types of circuits
Electronic circuit functions can be divided into two function groups: analog and digital. A particular device may consist of circuitry that has either type, or a mix of the two. Analog circuits are becoming less common, as many of their functions are being digitized.
Analog circuits
Analog circuits use a continuous range of voltage or current for signal processing, as opposed to the discrete levels used in digital circuits. In the early years, analog circuits were used throughout electronic devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced.
As semiconductor technology developed, many of the functions of analog circuits were taken over by digital circuits, and modern circuits that are entirely analog are less common; their functions have been replaced by a hybrid approach that, for instance, uses analog circuits at the front end of a device to receive an analog signal, and then applies digital processing, using microprocessor techniques, thereafter.
Sometimes it may be difficult to classify some circuits that have elements of both linear and non-linear operation. An example is the voltage comparator which receives a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output.
Analog circuits are still widely used for signal amplification, such as in the entertainment industry, and conditioning signals from analog sensors, such as in industrial measurement and control.
Digital circuits
Digital circuits are electric circuits based on discrete voltage levels. Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits, employing millions of such gates.
Digital circuits use a binary system with two voltage levels labelled "0" and "1" to indicate logical status. Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary.
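The two-level convention can be illustrated in a few lines of code. The sketch below maps an analog input voltage to logic "0" or "1" using two thresholds (hysteresis, as in the Schmitt triggers listed below); the 1.5 V and 3.5 V thresholds are arbitrary examples for a 5 V system.

```python
# Mapping an analog voltage to a binary logic level with hysteresis.
def logic_level(v: float, prev: int,
                v_low: float = 1.5, v_high: float = 3.5) -> int:
    if v >= v_high:
        return 1      # solidly "High"
    if v <= v_low:
        return 0      # solidly "Low"
    return prev       # ambiguous region: hold the previous state

state = 0
for v in [0.2, 1.7, 3.6, 3.0, 1.4, 2.5]:
    state = logic_level(v, state)
    print(f"{v:.1f} V -> logic {state}")
```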
Ternary (three-state) logic has been studied, and some prototype computers have been made, but it has not gained any significant practical acceptance. Computers and digital signal processors are almost universally constructed with digital circuits, using transistors such as MOSFETs in electronic logic gates to generate binary states.
Logic gates
Adders
Flip-flops
Counters
Registers
Multiplexers
Schmitt triggers
Highly integrated devices:
Memory chip
Microprocessors
Microcontrollers
Application-specific integrated circuit (ASIC)
Digital signal processor (DSP)
Field-programmable gate array (FPGA)
Field-programmable analog array (FPAA)
System on chip (SOC)
Design
Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.
Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.
Computer-aided design
Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.
Negative qualities
Thermal management
Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.
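A common back-of-envelope model (assumed here, not taken from the text) treats the conduction path as a chain of thermal resistances, so the junction temperature is the ambient temperature plus the dissipated power times the total junction-to-ambient resistance:

```python
# Junction temperature from a thermal-resistance chain (example values).
def junction_temp_c(p_watts: float, t_ambient_c: float,
                    r_jc: float, r_cs: float, r_sa: float) -> float:
    """r_jc, r_cs, r_sa: junction-to-case, case-to-sink and sink-to-air
    thermal resistances in degrees Celsius per watt."""
    return t_ambient_c + p_watts * (r_jc + r_cs + r_sa)

# 20 W dissipated at 25 degC ambient with an illustrative heat sink:
print(junction_temp_c(20, 25, r_jc=1.5, r_cs=0.5, r_sa=2.0))  # 105.0
```

A larger heat sink (smaller r_sa) or forced airflow lowers the result, which is the quantitative content of the cooling options above.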
Noise
Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties.
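The thermally generated component mentioned above is Johnson-Nyquist noise, whose RMS voltage follows v = sqrt(4*k*T*R*bandwidth); the snippet below evaluates it for an example resistor and bandwidth.

```python
# Thermal (Johnson-Nyquist) noise voltage of a resistor.
from math import sqrt

K_BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_vrms(r_ohms: float, t_kelvin: float, bw_hz: float) -> float:
    return sqrt(4 * K_BOLTZMANN * t_kelvin * r_ohms * bw_hz)

# 10 kilo-ohm resistor at room temperature over a 20 kHz audio bandwidth:
print(f"{thermal_noise_vrms(10e3, 300, 20e3) * 1e6:.2f} uV rms")  # ~1.82
# Halving the absolute temperature cuts the noise voltage by sqrt(2).
```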
Packaging methods
Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined to go to European markets.
Electrical components are generally mounted in the following ways:
Through-hole (sometimes referred to as 'Pin-Through-Hole')
Surface mount
Chassis mount
Rack mount
LGA/BGA/PGA socket
Industry
The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which had annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly.
However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor, and increasing technological sophistication, became widely available there.
Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology.
By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China.
Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.
| Technology | Electronics | null |
9672 | https://en.wikipedia.org/wiki/Entscheidungsproblem | Entscheidungsproblem | In mathematics and computer science, the Entscheidungsproblem (German for "decision problem") is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. It asks for an algorithm that considers an inputted statement and answers "yes" or "no" according to whether it is universally valid, i.e., valid in every structure. Such an algorithm was proven to be impossible by Alonzo Church and Alan Turing in 1936.
Completeness theorem
By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced using logical rules and axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable using the rules of logic.
In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.
History
The origin of the Entscheidungsproblem goes back to Gottfried Leibniz, who in the seventeenth century, after having constructed a successful mechanical calculating machine, dreamt of building a machine that could manipulate symbols in order to determine the truth values of mathematical statements. He realized that the first step would have to be a clean formal language, and much of his subsequent work was directed toward that goal. In 1928, David Hilbert and Wilhelm Ackermann posed the question in the form outlined above.
In continuation of his "program", Hilbert posed three questions at an international conference in 1928, the third of which became known as "Hilbert's Entscheidungsproblem". In 1929, Moses Schönfinkel published one paper on special cases of the decision problem, which was prepared by Paul Bernays.
As late as 1930, Hilbert believed that there would be no such thing as an unsolvable problem.
Negative answer
Before the question could be answered, the notion of "algorithm" had to be formally defined. This was done by Alonzo Church in 1935 with the concept of "effective calculability" based on his λ-calculus, and by Alan Turing the next year with his concept of Turing machines. Turing immediately recognized that these are equivalent models of computation.
A negative answer to the Entscheidungsproblem was then given by Alonzo Church in 1935–36 (Church's theorem) and independently shortly thereafter by Alan Turing in 1936 (Turing's proof). Church proved that there is no computable function which decides, for two given λ-calculus expressions, whether they are equivalent or not. He relied heavily on earlier work by Stephen Kleene. Turing reduced the question of the existence of an 'algorithm' or 'general method' able to solve the Entscheidungsproblem to the question of the existence of a 'general method' which decides whether any given Turing machine halts or not (the halting problem). If 'algorithm' is understood as meaning a method that can be represented as a Turing machine, and with the answer to the latter question negative (in general), the question about the existence of an algorithm for the Entscheidungsproblem also must be negative (in general). In his 1936 paper, Turing says: "Corresponding to each computing machine 'M' we construct a formula 'Un(M)' and we show that, if there is a general method for determining whether 'Un(M)' is provable, then there is a general method for determining whether 'M' ever prints 0".
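The core of Turing's argument can be paraphrased as a short self-referential program. In the Python sketch below, halts is a hypothetical total function assumed, for the sake of contradiction, to decide halting; only the contradiction it produces is real.

```python
# Sketch of the diagonal argument behind the negative answer.
def halts(program, argument) -> bool:
    """Assumed decider for the halting problem (cannot actually exist)."""
    raise NotImplementedError("provably impossible in general")

def diagonal(program):
    if halts(program, program):
        while True:       # loop forever if `program(program)` would halt
            pass
    return                # halt if `program(program)` would loop

# diagonal(diagonal) halts if and only if halts(diagonal, diagonal)
# returns False -- contradicting whatever answer `halts` gives, so no
# such total `halts` can exist.
```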
The work of both Church and Turing was heavily influenced by Kurt Gödel's earlier work on his incompleteness theorem, especially by the method of assigning numbers (a Gödel numbering) to logical formulas in order to reduce logic to arithmetic.
The Entscheidungsproblem is related to Hilbert's tenth problem, which asks for an algorithm to decide whether Diophantine equations have a solution. The non-existence of such an algorithm, established by the work of Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam, with the final piece of the proof in 1970, also implies a negative answer to the Entscheidungsproblem.
Generalizations
Using the deduction theorem, the Entscheidungsproblem encompasses the more general problem of deciding whether a given first-order sentence is entailed by a given finite set of sentences, but validity in first-order theories with infinitely many axioms cannot be directly reduced to the Entscheidungsproblem. Such more general decision problems are of practical interest. Some first-order theories are algorithmically decidable; examples of this include Presburger arithmetic, real closed fields, and static type systems of many programming languages. On the other hand, the first-order theory of the natural numbers with addition and multiplication expressed by Peano's axioms cannot be decided with an algorithm.
Fragments
By default, the citations in this section are from Pratt-Hartmann (2023).
The classical Entscheidungsproblem asks whether a given first-order formula is true in all models; the finitary problem asks whether it is true in all finite models. Trakhtenbrot's theorem shows that this, too, is undecidable.
Some notation: Sat(Φ) means the problem of deciding whether there exists a model for a set Φ of logical formulas; FinSat(Φ) is the same problem, but for finite models. The Sat-problem for a logical fragment is called decidable if there exists a program that can decide, for each finite set Φ of logical formulas in the fragment, whether Sat(Φ) holds or not.
There is a hierarchy of decidabilities. On the top are the undecidable problems. Below it are the decidable problems. Furthermore, the decidable problems can be divided into a complexity hierarchy.
Aristotelian and relational
Aristotelian logic considers 4 kinds of sentences: "All p are q", "All p are not q", "Some p is q", "Some p is not q". These can be formalized as a fragment of first-order logic consisting of the forms ∀x(p(x) → q(x)), ∀x(p(x) → ¬q(x)), ∃x(p(x) ∧ q(x)) and ∃x(p(x) ∧ ¬q(x)), where p and q are atomic predicates. Given a finite set of Aristotelian logic formulas, it is NLOGSPACE-complete to decide its Sat; this also holds for a slight extension (Theorem 2.7). Relational logic extends Aristotelian logic by allowing a relational predicate. For example, "Everybody loves somebody" can be written as ∀x ∃y love(x, y). In general there are 8 kinds of sentences, and it is NLOGSPACE-complete to decide their Sat (Theorem 2.15). Relational logic can be extended to 32 kinds of sentences by allowing a wider range of sentence forms, but this extension is EXPTIME-complete (Theorem 2.24).
Arity
The first-order logic fragment in which the only variable names are x and y is NEXPTIME-complete (Theorem 3.18). With three variable names, it is RE-complete to decide its FinSat, and co-RE-complete to decide its Sat (Theorem 3.15), thus undecidable.
The monadic predicate calculus is the fragment where each formula contains only 1-ary predicates and no function symbols. Its Sat is NEXPTIME-complete (Theorem 3.22).
Quantifier prefix
Any first-order formula has a prenex normal form. For each possible quantifier prefix to the prenex normal form, we have a fragment of first-order logic. For example, the Bernays–Schönfinkel class is the class of first-order formulas with quantifier prefix ∃*∀*, equality symbols, and no function symbols.
For example, Turing's 1936 paper (p. 263) observed that since the halting problem for each Turing machine is equivalent to a first-order logical formula with a particular quantifier prefix, the Sat problem for the prefix class containing those formulas is undecidable.
The precise boundaries are known, sharply:
For the smallest undecidable prefix classes, the Sat problems are co-RE-complete and the FinSat problems are RE-complete (Theorem 5.2).
The same holds for the analogous classes (Theorem 5.3).
The Gödel class, with prefix ∃*∀∀∃* and no equality, is decidable, proved independently by Gödel, Schütte, and Kalmár.
With equality allowed, the ∃*∀∀∃* class is undecidable.
For any prefix in this family, both Sat and FinSat are NEXPTIME-complete (Theorem 5.1).
This implies that the Bernays–Schönfinkel class ∃*∀* is decidable, a result first published by Bernays and Schönfinkel.
For any prefix in the corresponding family, Sat is EXPTIME-complete (Section 5.4.1).
For any prefix in the corresponding family, Sat is NEXPTIME-complete (Section 5.4.2).
This implies that the Ackermann class ∃*∀∃* is decidable, a result first published by Ackermann.
For any prefix in the remaining family, Sat and FinSat are PSPACE-complete (Section 5.4.3).
Börger et al. (2001) describes the level of computational complexity for every possible fragment with every possible combination of quantifier prefix, functional arity, predicate arity, and equality/no-equality.
Practical decision procedures
Having practical decision procedures for classes of logical formulas is of considerable interest for program verification and circuit verification. Pure Boolean logical formulas are usually decided using SAT-solving techniques based on the DPLL algorithm.
For more general decision problems of first-order theories, conjunctive formulas over linear real or rational arithmetic can be decided using the simplex algorithm, formulas in linear integer arithmetic (Presburger arithmetic) can be decided using Cooper's algorithm or William Pugh's Omega test. Formulas with negations, conjunctions and disjunctions combine the difficulties of satisfiability testing with that of decision of conjunctions; they are generally decided nowadays using SMT-solving techniques, which combine SAT-solving with decision procedures for conjunctions and propagation techniques. Real polynomial arithmetic, also known as the theory of real closed fields, is decidable; this is the Tarski–Seidenberg theorem, which has been implemented in computers by using the cylindrical algebraic decomposition.
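As a concrete illustration of the SMT approach (not part of the original text), the sketch below uses the Python bindings of the Z3 solver, assuming the third-party z3-solver package is installed, to decide a small conjunctive formula over linear integer arithmetic.

```python
# Deciding a conjunction of linear integer constraints with an SMT solver.
from z3 import Ints, Solver, sat  # requires the `z3-solver` package

x, y = Ints("x y")
s = Solver()
s.add(x + 2 * y == 7, x > 0, y > 0)  # Presburger-style constraints

if s.check() == sat:
    print(s.model())   # prints one satisfying assignment, e.g. x=1, y=3
else:
    print("unsatisfiable")
```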
| Mathematics | Computability theory | null |
9675 | https://en.wikipedia.org/wiki/Ester | Ester | In chemistry, an ester is a compound derived from an acid (organic or inorganic) in which the hydrogen atom (H) of at least one acidic hydroxyl group (−OH) of that acid is replaced by an organyl group (R). These compounds contain a distinctive functional group. Analogues derived from oxygen replaced by other chalcogens belong to the ester category as well. According to some authors, organyl derivatives of acidic hydrogen of other acids are esters as well (e.g. amides), but not according to the IUPAC.
Glycerides are fatty acid esters of glycerol; they are important in biology, being one of the main classes of lipids and comprising the bulk of animal fats and vegetable oils. Lactones are cyclic carboxylic esters; naturally occurring lactones are mainly 5- and 6-membered ring lactones. Lactones contribute to the aroma of fruits, butter, cheese, vegetables like celery and other foods.
Esters can be formed from oxoacids (e.g. esters of acetic acid, carbonic acid, sulfuric acid, phosphoric acid, nitric acid, xanthic acid), but also from acids that do not contain oxygen (e.g. esters of thiocyanic acid and trithiocarbonic acid). An example of an ester formation is the substitution reaction between a carboxylic acid (RCOOH) and an alcohol (R′OH), forming an ester (RCOOR′) and water, where R stands for any group (typically hydrogen or organyl) and R′ stands for an organyl group.
Organyl esters of carboxylic acids typically have a pleasant smell; those of low molecular weight are commonly used as fragrances and are found in essential oils and pheromones. They perform as high-grade solvents for a broad array of plastics, plasticizers, resins, and lacquers, and are one of the largest classes of synthetic lubricants on the commercial market. Polyesters are important plastics, with monomers linked by ester moieties. Esters of phosphoric acid form the backbone of DNA molecules. Esters of nitric acid, such as nitroglycerin, are known for their explosive properties.
There are compounds in which an acidic hydrogen of the acids mentioned in this article is not replaced by an organyl, but by some other group. According to some authors, those compounds are esters as well, especially when the first carbon atom of the organyl group replacing acidic hydrogen is replaced by another atom from the group 14 elements (Si, Ge, Sn, Pb); for example, according to them, trimethylstannyl acetate (or trimethyltin acetate) is a trimethylstannyl ester of acetic acid, dibutyltin dilaurate is a dibutylstannylene ester of lauric acid, and the Phillips catalyst is a trimethoxysilyl ester of chromic acid (H2CrO4).
Nomenclature
Etymology
The word ester was coined in 1848 by the German chemist Leopold Gmelin, probably as a contraction of the German Essigäther, "acetic ether".
IUPAC nomenclature
The names of esters that are formed from an alcohol and an acid are derived from the parent alcohol and the parent acid, where the latter may be organic or inorganic. Esters derived from the simplest carboxylic acids are commonly named according to the more traditional, so-called "trivial names", e.g. formate, acetate, propionate, and butyrate, as opposed to the IUPAC nomenclature methanoate, ethanoate, propanoate, and butanoate. Esters derived from more complex carboxylic acids are, on the other hand, more frequently named using the systematic IUPAC name, based on the name for the acid followed by the suffix -oate. For example, the ester hexyl octanoate, also known under the trivial name hexyl caprylate, has the formula CH3(CH2)6CO2(CH2)5CH3.
The chemical formulas of organic esters formed from carboxylic acids and alcohols usually take the form RCO2R′ or RCOOR′, where R and R′ are the organyl parts of the carboxylic acid and the alcohol, respectively, and R can be a hydrogen in the case of esters of formic acid. For example, butyl acetate (systematically butyl ethanoate), derived from butanol and acetic acid (systematically ethanoic acid), would be written CH3CO2C4H9. Alternative presentations are common, including BuOAc and CH3COOC4H9.
Cyclic esters are called lactones, regardless of whether they are derived from an organic or inorganic acid. One example of an organic lactone is γ-valerolactone.
Orthoesters
An uncommon class of esters are the orthoesters. Among them are the esters of orthocarboxylic acids. Those esters have the formula RC(OR′)3, where R stands for any group (organic or inorganic) and R′ stands for an organyl group. For example, triethyl orthoformate (HC(OC2H5)3) is derived, in terms of its name (but not its synthesis), from esterification of orthoformic acid (HC(OH)3) with ethanol.
Esters of inorganic acids
Esters can also be derived from inorganic acids.
Perchloric acid forms perchlorate esters, e.g., methyl perchlorate (CH3OClO3)
Sulfuric acid forms sulfate esters, e.g., dimethyl sulfate ((CH3O)2SO2) and methyl bisulfate (CH3OSO3H)
Nitric acid forms nitrate esters, e.g. methyl nitrate (CH3ONO2) and nitroglycerin (C3H5(ONO2)3)
Phosphoric acid forms phosphate esters, e.g. triphenyl phosphate ((C6H5O)3PO) and methyl dihydrogen phosphate (CH3OPO(OH)2)
Pyrophosphoric (diphosphoric) acid forms pyrophosphate esters, e.g. tetraethyl pyrophosphate, ADP, dADP, ADPR, cADPR, CDP, dCDP, GDP, dGDP, UDP, dTDP, MEcPP, HMBPP, DMAPP, IPP, GPP, FPP, GGPP, ThDP, FAD, NAD, NADP.
Triphosphoric acid forms triphosphate esters, e.g. ATP, dATP, CTP, dCTP, GTP, dGTP, UTP, dTTP, ITP, XTP, ThTP, AThTP.
Carbonic acid forms carbonate esters, e.g. dimethyl carbonate ((CH3O)2CO) and the 5-membered cyclic ethylene carbonate ((CH2O)2CO) (if one classifies carbonic acid as an inorganic compound)
Trithiocarbonic acid forms trithiocarbonate esters, e.g. dimethyl trithiocarbonate ((CH3S)2CS) (if one classifies trithiocarbonic acid as an inorganic compound)
Chloroformic acid forms chloroformate esters, e.g. methyl chloroformate (CH3OC(O)Cl) (if one classifies chloroformic acid as an inorganic compound)
Boric acid forms borate esters, e.g. trimethyl borate (B(OCH3)3)
Chromic acid forms di-tert-butyl chromate ([(CH3)3CO]2CrO2)
Inorganic acids that exist as tautomers form two or more types of esters.
Thiosulfuric acid forms two types of thiosulfate esters, e.g. O,O-dimethyl thiosulfate and O,S-dimethyl thiosulfate
Thiocyanic acid forms thiocyanate esters, e.g. methyl thiocyanate (CH3SCN) (if one classifies thiocyanic acid as an inorganic compound), but forms isothiocyanate "esters" as well, e.g. methyl isothiocyanate (CH3NCS), although organyl isothiocyanates are not classified as esters by the IUPAC
Phosphorous acid forms two types of esters: phosphite esters, e.g. triethyl phosphite (P(OC2H5)3), and phosphonate esters, e.g. diethyl phosphonate ((C2H5O)2P(O)H)
Some inorganic acids that are unstable or elusive form stable esters.
Sulfurous acid, which is unstable, forms stable dimethyl sulfite ((CH3O)2SO)
Dicarbonic acid, which is unstable, forms stable dimethyl dicarbonate (CH3OC(O)OC(O)OCH3)
In principle, a part of metal and metalloid alkoxides, of which many hundreds are known, could be classified as esters of the corresponding acids (e.g. aluminium triethoxide (Al(OC2H5)3) could be classified as an ester of aluminic acid, which is aluminium hydroxide; tetraethyl orthosilicate (Si(OC2H5)4) could be classified as an ester of orthosilicic acid; and titanium ethoxide (Ti(OC2H5)4) could be classified as an ester of orthotitanic acid).
Structure and bonding
Esters derived from carboxylic acids and alcohols contain a carbonyl group C=O; the carbonyl carbon is trigonal, giving rise to C–C–O and O–C–O angles of about 120°. Unlike amides, carboxylic acid esters are structurally flexible functional groups because rotation about the C–O–C bonds has a low barrier. Their flexibility and low polarity are manifested in their physical properties; they tend to be less rigid (lower melting point) and more volatile (lower boiling point) than the corresponding amides. The pKa of the alpha-hydrogens on esters of carboxylic acids is around 25 (an alpha-hydrogen is a hydrogen bound to the carbon adjacent to the carbonyl group (C=O) of carboxylate esters).
Many carboxylic acid esters have the potential for conformational isomerism, but they tend to adopt an S-cis (or Z) conformation rather than the S-trans (or E) alternative, due to a combination of hyperconjugation and dipole minimization effects. The preference for the Z conformation is influenced by the nature of the substituents and solvent, if present. Lactones with small rings are restricted to the s-trans (i.e. E) conformation due to their cyclic structure.
Physical properties and characterization
Esters derived from carboxylic acids and alcohols are more polar than ethers but less polar than alcohols. They participate in hydrogen bonds as hydrogen-bond acceptors, but cannot act as hydrogen-bond donors, unlike their parent alcohols. This ability to participate in hydrogen bonding confers some water-solubility. Because of their lack of hydrogen-bond-donating ability, esters do not self-associate. Consequently, esters are more volatile than carboxylic acids of similar molecular weight.
Characterization and analysis
Esters are generally identified by gas chromatography, taking advantage of their volatility. IR spectra for esters feature an intense sharp band in the range 1730–1750 cm−1 assigned to νC=O. This peak changes depending on the functional groups attached to the carbonyl. For example, a benzene ring or double bond in conjunction with the carbonyl will bring the wavenumber down about 30 cm−1.
Applications and occurrence
Esters are widespread in nature and are widely used in industry. In nature, fats are, in general, triesters derived from glycerol and fatty acids. Esters are responsible for the aroma of many fruits, including apples, durians, pears, bananas, pineapples, and strawberries. Several billion kilograms of polyesters are produced industrially annually, important products being polyethylene terephthalate, acrylate esters, and cellulose acetate.
Preparation
Esterification is the general name for a chemical reaction in which two reactants (typically an alcohol and an acid) form an ester as the reaction product. Esters are common in organic chemistry and biological materials, and often have a pleasant characteristic, fruity odor. This leads to their extensive use in the fragrance and flavor industry. Ester bonds are also found in many polymers.
Esterification of carboxylic acids with alcohols
The classic synthesis is the Fischer esterification, which involves treating a carboxylic acid with an alcohol in the presence of a dehydrating agent:
RCO2H + R′OH ⇌ RCO2R′ + H2O
The equilibrium constant for such reactions is about 5 for typical esters, e.g., ethyl acetate. The reaction is slow in the absence of a catalyst. Sulfuric acid is a typical catalyst for this reaction. Many other acids are also used, such as polymeric sulfonic acids. Since esterification is highly reversible, the yield of the ester can be improved using Le Chatelier's principle (a numeric illustration follows the list below):
Using the alcohol in large excess (i.e., as a solvent).
Using a dehydrating agent: sulfuric acid not only catalyzes the reaction but sequesters water (a reaction product). Other drying agents such as molecular sieves are also effective.
Removal of water by physical means such as distillation as a low-boiling azeotrope with toluene, in conjunction with a Dean-Stark apparatus.
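The effect of excess alcohol can be made quantitative. Taking the equilibrium constant K of about 5 quoted above and a fixed volume, the sketch below solves K = [ester][water]/([acid][alcohol]) for the equilibrium extent by bisection; the amounts are illustrative.

```python
# Equilibrium ester yield versus alcohol excess for K ~ 5 (illustrative).
def ester_yield(k: float, acid0: float, alcohol0: float) -> float:
    """Fraction of the acid converted to ester at equilibrium."""
    lo, hi = 0.0, min(acid0, alcohol0)
    for _ in range(100):                 # bisection on the extent x
        x = (lo + hi) / 2
        q = (x * x) / ((acid0 - x) * (alcohol0 - x))
        lo, hi = (x, hi) if q < k else (lo, x)
    return x / acid0

for excess in (1, 2, 5, 10):             # mol of alcohol per mol of acid
    print(f"{excess:>2} eq alcohol: {ester_yield(5.0, 1.0, excess):.0%}")
# about 69%, 87%, 95% and 98%: excess reactant pushes the yield up
```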
Reagents are known that drive the dehydration of mixtures of alcohols and carboxylic acids. One example is the Steglich esterification, which is a method of forming esters under mild conditions. The method is popular in peptide synthesis, where the substrates are sensitive to harsh conditions like high heat. DCC (dicyclohexylcarbodiimide) is used to activate the carboxylic acid to further reaction. 4-Dimethylaminopyridine (DMAP) is used as an acyl-transfer catalyst.
Another method for the dehydration of mixtures of alcohols and carboxylic acids is the Mitsunobu reaction.
Carboxylic acids can be esterified using diazomethane:
RCO2H + CH2N2 → RCO2CH3 + N2
Using diazomethane, mixtures of carboxylic acids can be converted to their methyl esters in near quantitative yields, e.g., for analysis by gas chromatography. The method is useful in specialized organic synthetic operations but is considered too hazardous and expensive for large-scale applications.
Esterification of carboxylic acids with epoxides
Carboxylic acids are esterified by treatment with epoxides, giving β-hydroxyesters; with ethylene oxide, for example:
RCO2H + C2H4O → RCO2CH2CH2OH
This reaction is employed in the production of vinyl ester resin from acrylic acid.
Alcoholysis of acyl chlorides and acid anhydrides
Alcohols react with acyl chlorides and acid anhydrides to give esters:
RCOCl + R′OH → RCO2R′ + HCl
(RCO)2O + R′OH → RCO2R′ + RCO2H
The reactions are irreversible, simplifying work-up. Since acyl chlorides and acid anhydrides also react with water, anhydrous conditions are preferred. The analogous acylations of amines to give amides are less sensitive because amines are stronger nucleophiles and react more rapidly than does water. This method is employed only for laboratory-scale procedures, as it is expensive.
Alkylation of carboxylic acids and their salts
Trimethyloxonium tetrafluoroborate can be used for esterification of carboxylic acids under conditions where acid-catalyzed reactions are infeasible:
RCO2H + (CH3)3O+ → RCO2CH3 + (CH3)2O + H+
Although rarely employed for esterifications, carboxylate salts (often generated in situ) react with electrophilic alkylating agents, such as alkyl halides, to give esters. Anion availability can inhibit this reaction, which correspondingly benefits from phase transfer catalysts or such highly polar aprotic solvents as DMF. An additional iodide salt may, via the Finkelstein reaction, catalyze the reaction of a recalcitrant alkyl halide. Alternatively, salts of a coordinating metal, such as silver, may improve the reaction rate by easing halide elimination.
Transesterification
Transesterification, which involves changing one ester into another one, is widely practiced:
RCO2R′ + CH3OH → RCO2CH3 + R′OH
Like hydrolysis, transesterification is catalysed by acids and bases. The reaction is widely used for degrading triglycerides, e.g. in the production of fatty acid esters and alcohols. Poly(ethylene terephthalate) is produced by the transesterification of dimethyl terephthalate and ethylene glycol:
C6H4(CO2CH3)2 + 2 HOCH2CH2OH → C6H4(CO2CH2CH2OH)2 + 2 CH3OH
A subset of transesterification is the alcoholysis of diketene. This reaction affords 2-ketoesters.
Carbonylation
Alkenes undergo carboalkoxylation in the presence of metal carbonyl catalysts. Esters of propanoic acid are produced commercially by this method:
C2H4 + CO + CH3OH → CH3CH2CO2CH3
A preparation of methyl propionate is one illustrative example.
The carbonylation of methanol yields methyl formate, which is the main commercial source of formic acid. The reaction is catalyzed by sodium methoxide:
CH3OH + CO → HCO2CH3
Addition of carboxylic acids to alkenes and alkynes
In hydroesterification, alkenes and alkynes insert into the O–H bond of carboxylic acids. Vinyl acetate is produced industrially by the addition of acetic acid to acetylene in the presence of zinc acetate catalysts:
CH3CO2H + HC≡CH → CH3CO2CH=CH2
Vinyl acetate can also be produced by palladium-catalyzed reaction of ethylene, acetic acid, and oxygen:
2 CH3CO2H + 2 C2H4 + O2 → 2 CH3CO2CH=CH2 + 2 H2O
Silicotungstic acid is used to manufacture ethyl acetate by the alkylation of acetic acid by ethylene:
C2H4 + CH3CO2H → CH3CO2C2H5
From aldehydes
The Tishchenko reaction involves disproportionation of an aldehyde in the presence of an anhydrous base to give an ester. Catalysts are aluminium alkoxides or sodium alkoxides. Benzaldehyde reacts with sodium benzyloxide (generated from sodium and benzyl alcohol) to generate benzyl benzoate. The method is used in the production of ethyl acetate from acetaldehyde.
Other methods
Favorskii rearrangement of α-haloketones in presence of base
Baeyer–Villiger oxidation of ketones with peroxides
Pinner reaction of nitriles with an alcohol
Nucleophilic abstraction of a metal–acyl complex
Hydrolysis of orthoesters in aqueous acid
Cellulolysis via esterification
Ozonolysis of alkenes using a work up in the presence of hydrochloric acid and various alcohols.
Anodic oxidation of methyl ketones leading to methyl esters.
Interesterification exchanges the fatty acid groups of different esters.
Reactions
Esters are less reactive than acid halides and anhydrides. As with more reactive acyl derivatives, they can react with ammonia and primary and secondary amines to give amides, although this type of reaction is not often used, since acid halides give better yields.
Transesterification
Esters can be converted to other esters in a process known as transesterification. Transesterification can be either acid- or base-catalyzed, and involves the reaction of an ester with an alcohol. Unfortunately, because the leaving group is also an alcohol, the forward and reverse reactions will often occur at similar rates. Using a large excess of the reactant alcohol or removing the leaving group alcohol (e.g. via distillation) will drive the forward reaction towards completion, in accordance with Le Chatelier's principle.
Hydrolysis and saponification
Acid-catalyzed hydrolysis of esters is also an equilibrium process – essentially the reverse of the Fischer esterification reaction. Because an alcohol (which acts as the leaving group) and water (which acts as the nucleophile) have similar pKa values, the forward and reverse reactions compete with each other. As in transesterification, using a large excess of reactant (water) or removing one of the products (the alcohol) can promote the forward reaction.
Basic hydrolysis of esters, known as saponification, is not an equilibrium process; a full equivalent of base is consumed in the reaction, which produces one equivalent of alcohol and one equivalent of a carboxylate salt. The saponification of esters of fatty acids is an industrially important process, used in the production of soap.
Esterification is a reversible reaction. Esters undergo hydrolysis under acidic and basic conditions. Under acidic conditions, the reaction is the reverse reaction of the Fischer esterification. Under basic conditions, hydroxide acts as a nucleophile, while an alkoxide is the leaving group. This reaction, saponification, is the basis of soap making.
The alkoxide group may also be displaced by stronger nucleophiles such as ammonia or primary or secondary amines to give amides (ammonolysis reaction):
RCO2R′ + NH3 → RC(O)NH2 + R′OH
This reaction is not usually reversible. Hydrazines and hydroxylamine can be used in place of amines. Esters can be converted to isocyanates through intermediate hydroxamic acids in the Lossen rearrangement.
Sources of carbon nucleophiles, e.g., Grignard reagents and organolithium compounds, add readily to the carbonyl.
Reduction
Compared to ketones and aldehydes, esters are relatively resistant to reduction. The introduction of catalytic hydrogenation in the early part of the 20th century was a breakthrough; esters of fatty acids are hydrogenated to fatty alcohols.
A typical catalyst is copper chromite. Prior to the development of catalytic hydrogenation, esters were reduced on a large scale using the Bouveault–Blanc reduction. This method, which is largely obsolete, uses sodium in the presence of proton sources.
Especially for fine chemical syntheses, lithium aluminium hydride is used to reduce esters to two primary alcohols. The related reagent sodium borohydride is slow in this reaction. DIBAH reduces esters to aldehydes.
Direct reduction to give the corresponding ether is difficult as the intermediate hemiacetal tends to decompose to give an alcohol and an aldehyde (which is rapidly reduced to give a second alcohol). The reaction can be achieved using triethylsilane with a variety of Lewis acids.
Claisen condensation and related reactions
Esters can undergo a variety of reactions with carbon nucleophiles. They react with an excess of a Grignard reagent to give tertiary alcohols. Esters also react readily with enolates. In the Claisen condensation, an enolate of one ester (1) will attack the carbonyl group of another ester (2) to give tetrahedral intermediate 3. The intermediate collapses, forcing out an alkoxide (R'O−) and producing β-keto ester 4.
Crossed Claisen condensations, in which the enolate and nucleophile are different esters, are also possible. An intramolecular Claisen condensation is called a Dieckmann condensation or Dieckmann cyclization, since it can be used to form rings. Esters can also undergo condensations with ketone and aldehyde enolates to give β-dicarbonyl compounds. A specific example of this is the Baker–Venkataraman rearrangement, in which an aromatic ortho-acyloxy ketone undergoes an intramolecular nucleophilic acyl substitution and subsequent rearrangement to form an aromatic β-diketone. The Chan rearrangement is another example of a rearrangement resulting from an intramolecular nucleophilic acyl substitution reaction.
Other ester reactivities
Esters react with nucleophiles at the carbonyl carbon. The carbonyl is weakly electrophilic but is attacked by strong nucleophiles (amines, alkoxides, hydride sources, organolithium compounds, etc.). The C–H bonds adjacent to the carbonyl are weakly acidic but undergo deprotonation with strong bases. This process is the one that usually initiates condensation reactions. The carbonyl oxygen in esters is weakly basic, less so than the carbonyl oxygen in amides due to resonance donation of an electron pair from nitrogen in amides, but forms adducts.
As for aldehydes, the hydrogen atoms on the carbon adjacent ("α to") the carboxyl group in esters are sufficiently acidic to undergo deprotonation, which in turn leads to a variety of useful reactions. Deprotonation requires relatively strong bases, such as alkoxides. Deprotonation gives a nucleophilic enolate, which can further react, e.g., the Claisen condensation and its intramolecular equivalent, the Dieckmann condensation. This conversion is exploited in the malonic ester synthesis, wherein the diester of malonic acid reacts with an electrophile (e.g., alkyl halide), and is subsequently decarboxylated. Another variation is the Fráter–Seebach alkylation.
Other reactions
Esters can be directly converted to nitriles.
Methyl esters are often susceptible to decarboxylation in the Krapcho decarboxylation.
Phenyl esters react to hydroxyarylketones in the Fries rearrangement.
Specific esters are functionalized with an α-hydroxyl group in the Chan rearrangement.
Esters with β-hydrogen atoms can be converted to alkenes in ester pyrolysis.
Pairs of esters are coupled to give α-hydroxyketones in the acyloin condensation.
Protecting groups
As a class, esters serve as protecting groups for carboxylic acids. Protecting a carboxylic acid is useful in peptide synthesis, to prevent self-reactions of the bifunctional amino acids. Methyl and ethyl esters are commonly available for many amino acids; the t-butyl ester tends to be more expensive. However, t-butyl esters are particularly useful because, under strongly acidic conditions, the t-butyl esters undergo elimination to give the carboxylic acid and isobutylene, simplifying work-up.
List of ester odorants
Many esters have distinctive fruit-like odors, and many occur naturally in the essential oils of plants. This has also led to their common use in artificial flavorings and fragrances which aim to mimic those odors.
| Physical sciences | Carbon–oxygen bond | null |
9678 | https://en.wikipedia.org/wiki/Exponential%20function | Exponential function | In mathematics, the exponential function is the unique real function which maps zero to one and has a derivative equal to its value. The exponential of a variable x is denoted exp x or e^x, with the two notations used interchangeably. It is called exponential because its argument can be seen as an exponent to which a constant number e, the base, is raised. There are several other definitions of the exponential function, which are all equivalent although being of very different nature.
The exponential function converts sums to products: it maps the additive identity 0 to the multiplicative identity 1, and the exponential of a sum is equal to the product of separate exponentials, exp(x + y) = exp(x) · exp(y). Its inverse function, the natural logarithm, ln or log, converts products to sums: ln(x · y) = ln x + ln y.
Other functions of the general form f(x) = b^x, with a positive base b, are also commonly called exponential functions, and share the property of converting addition to multiplication, b^(x+y) = b^x · b^y. Where these two meanings might be confused, the exponential function of base e is occasionally called the natural exponential function, matching the name natural logarithm. The generalization of the standard exponent notation b^x to arbitrary real numbers as exponents is usually formally defined in terms of the exponential and natural logarithm functions, as b^x = e^(x ln b). The "natural" base e is the unique base satisfying the criterion that the exponential function's derivative equals its value, (e^x)′ = e^x, which simplifies definitions and eliminates extraneous constants when using exponential functions in calculus.
Quantities which change over time in proportion to their value, for example the balance of a bank account bearing compound interest, the size of a bacterial population, the temperature of an object relative to its environment, or the amount of a radioactive substance, can be modeled using functions of the form f(t) = a·e^(kt), also sometimes called exponential functions; these quantities undergo exponential growth if k is positive or exponential decay if k is negative.
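Both behaviours follow from the same two-parameter model. The short sketch below evaluates f(t) = a·e^(kt) with invented parameters: a balance growing at a 5% continuous annual rate, and a substance with a 10-year half-life (k = −ln 2 / 10).

```python
# Exponential growth and decay, f(t) = a * exp(k * t); parameters invented.
from math import exp, log

def model(a: float, k: float, t: float) -> float:
    return a * exp(k * t)

print(model(1000.0, 0.05, 10.0))        # balance after 10 years: ~1648.72
print(model(1.0, -log(2) / 10, 20.0))   # after two half-lives: 0.25
```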
The exponential function can be generalized to accept a complex number as its argument. This reveals a relation between the multiplication of complex numbers and rotation in the Euclidean plane, Euler's formula e^(iθ) = cos θ + i sin θ: the exponential of an imaginary number iθ is a point on the complex unit circle at angle θ from the real axis. The identities of trigonometry can thus be translated into identities involving exponentials of imaginary quantities. The complex function z ↦ e^z is a conformal map from an infinite strip of the complex plane (which periodically repeats in the imaginary direction) onto the whole complex plane except for 0.
The exponential function can be even further generalized to accept other types of arguments, such as matrices and elements of Lie algebras.
Graph
The graph of y = e^x is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis, but becomes arbitrarily close to it for large negative x; thus, the x-axis is a horizontal asymptote. The equation (d/dx) e^x = e^x means that the slope of the tangent to the graph at each point is equal to its height (its y-coordinate) at that point.
Definitions and fundamental properties
There are several different definitions of the exponential function, which are all equivalent, although of very different nature.
One of the simplest definitions is: The exponential function is the unique differentiable function that equals its derivative, and takes the value $1$ for the value $0$ of its variable.
This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function.
Uniqueness: If $f(x)$ and $g(x)$ are two functions satisfying the above definition, then the derivative of $f/g$ is zero everywhere by the quotient rule. It follows that $f/g$ is constant, and this constant is $1$ since $f(0) = g(0) = 1$.
The exponential function is the inverse function of the natural logarithm. The inverse function theorem implies that the natural logarithm has an inverse function, that satisfies the above definition. This is a first proof of existence. Therefore, one has
$\ln(\exp x) = x$ and $\exp(\ln y) = y$
for every real number $x$ and every positive real number $y$.
The exponential function is the sum of a power series:
$\exp x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots,$
where $n!$ is the factorial of $n$ (the product of the first $n$ positive integers). This series is absolutely convergent for every $x$ per the ratio test. So, the derivative of the sum can be computed by term-by-term derivation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and shows, as a byproduct, that the exponential function is defined for every $x$, and is everywhere the sum of its Maclaurin series.
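A short Python sketch of this series definition follows; the function name and the cutoff of 20 terms are arbitrary choices made for the example.

    import math

    def exp_series(x, terms=20):
        # Partial sum of sum_{n>=0} x**n / n!, built without computing factorials.
        total, term = 0.0, 1.0            # term starts at x**0 / 0! = 1
        for n in range(terms):
            total += term
            term *= x / (n + 1)           # turns x**n/n! into x**(n+1)/(n+1)!
        return total

    print(exp_series(1.0), math.exp(1.0))   # both print ~2.718281828459045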
The exponential satisfies the functional equation:
$\exp(x + y) = \exp x \cdot \exp y.$
This results from the uniqueness and the fact that the function $x \mapsto \exp(x + y)/\exp y$ satisfies the above definition. It can be proved that a function that satisfies this functional equation is the exponential function if its derivative at $0$ is $1$ and the function is either continuous or monotonic.
Positiveness: For every $x$, one has $\exp x \ne 0$, since the functional equation implies $\exp x \cdot \exp(-x) = 1$. It follows that the exponential function is positive: since $\exp 0 = 1$, if one had $\exp x < 0$ for some $x$, the intermediate value theorem would imply the existence of some $y$ such that $\exp y = 0$. It follows also that the exponential function is monotonically increasing.
Extension of exponentiation to positive real bases: Let $b$ be a positive real number. The exponential function and the natural logarithm being the inverse of each other, one has $b = \exp(\ln b)$. If $n$ is an integer, the functional equation of the logarithm implies
$b^n = \exp(n \ln b).$
Since the right-most expression is defined if $n$ is replaced by any real number, this allows defining $b^x$ for every positive real number $b$ and every real number $x$:
$b^x = \exp(x \ln b).$
In particular, if $b$ is Euler's number $e = \exp 1$, one has $\ln e = 1$ (inverse function) and thus $e^x = \exp x$. This shows the equivalence of the two notations for the exponential function.
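As a sketch of this definition in code (the helper name power is hypothetical, chosen for the example), one can compute arbitrary real powers of a positive base using only exp and ln:

    import math

    def power(b, x):
        # b**x defined as exp(x * ln b), valid for any positive base b.
        return math.exp(x * math.log(b))

    print(power(2.0, 10))    # ~1024.0
    print(power(2.0, 0.5))   # ~1.4142135..., the square root of 2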
The exponential function is the limit
$\exp x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n,$
where $n$ takes only integer values (otherwise, the exponentiation would require the exponential function to be defined). By continuity of the logarithm, this can be proved by taking logarithms and proving
$x = \lim_{n \to \infty} n \ln\left(1 + \frac{x}{n}\right),$
for example with Taylor's theorem.
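The convergence of this limit can be observed numerically; in the following Python sketch the sample values of n are arbitrary.

    # Convergence of (1 + x/n)**n to exp(x) = e for x = 1 as the integer n grows.
    x = 1.0
    for n in [10, 100, 10_000, 1_000_000]:
        print(n, (1 + x / n) ** n)   # 2.5937..., 2.7048..., 2.7181..., 2.7182804...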
General exponential functions
The term "exponential function" is used sometimes for referring to any function whose argument appears in an exponent, such as and However, this name is commonly used for differentiable functions satisfying one of the following equivalent conditions:
There exist some constants $b > 0$ and $c \ne 0$ such that $f(x) = c\,b^x$ for every value of $x$.
There exist some constants $k$ and $c \ne 0$ such that $f(x) = c\,e^{kx}$ for every value of $x$.
For every $d$, the value of $\frac{f(x+d)}{f(x)}$ is independent of $x$; that is, $\frac{f(x+d)}{f(x)} = \frac{f(y+d)}{f(y)}$ for all $x$, $y$, and $d$. In words: pairs of arguments with the same difference are mapped into pairs of values with the same ratio. (G. Harnett, Quora, 2020: What is the base of an exponential function?)
"A (general) exponential function changes by the same factor over equal increments of the input. The factor of change over a unit increment is called the base."Mathebibel "Werden bei einer Exponentialfunktion zur basis die -Werte jeweils um einen festen Zahlenwert vergrössert, so werden die Funktionswerte mit einem konstanten Faktor vervielfacht."
The value of $\frac{f'(x)}{f(x)}$ is independent of $x$. This constant value is sometimes called the rate constant of $f$ and denoted $k$; it equals the constant $k$ of the second equivalent condition. (G.F. Simmons, Differential Equations with Applications and Historical Notes.) | Mathematics | Basics | null
9696 | https://en.wikipedia.org/wiki/Erosion | Erosion | Erosion is the action of surface processes (such as water flow or wind) that removes soil, rock, or dissolved material from one location on the Earth's crust and then transports it to another location where it is deposited. Erosion is distinct from weathering which involves no movement. Removal of rock or soil as clastic sediment is referred to as physical or mechanical erosion; this contrasts with chemical erosion, where soil or rock material is removed from an area by dissolution. Eroded sediment or solutes may be transported just a few millimetres, or for thousands of kilometres.
Agents of erosion include rainfall; bedrock wear in rivers; coastal erosion by the sea and waves; glacial plucking, abrasion, and scour; areal flooding; wind abrasion; groundwater processes; and mass movement processes in steep landscapes like landslides and debris flows. The rates at which such processes act control how fast a surface is eroded. Typically, physical erosion proceeds the fastest on steeply sloping surfaces, and rates may also be sensitive to some climatically controlled properties including amounts of water supplied (e.g., by rain), storminess, wind speed, wave fetch, or atmospheric temperature (especially for some ice-related processes). Feedbacks are also possible between rates of erosion and the amount of eroded material that is already carried by, for example, a river or glacier. The transport of eroded materials from their original location is followed by deposition, which is arrival and emplacement of material at a new location.
While erosion is a natural process, human activities have increased the rate at which soil erosion occurs globally by 10–40 times. At agricultural sites in the Appalachian Mountains, intensive farming practices have caused erosion at up to 100 times the natural rate of erosion in the region. Excessive (or accelerated) erosion causes both "on-site" and "off-site" problems. On-site impacts include decreases in agricultural productivity and (on natural landscapes) ecological collapse, both because of loss of the nutrient-rich upper soil layers. In some cases, this leads to desertification. Off-site effects include sedimentation of waterways and eutrophication of water bodies, as well as sediment-related damage to roads and houses. Water and wind erosion are the two primary causes of land degradation; combined, they are responsible for about 84% of the global extent of degraded land, making excessive erosion one of the most significant environmental problems worldwide.
Intensive agriculture, deforestation, roads, anthropogenic climate change and urban sprawl are amongst the most significant human activities in regard to their effect on stimulating erosion. However, there are many prevention and remediation practices that can curtail or limit erosion of vulnerable soils.
Physical processes
Rainfall and surface runoff
Rainfall, and the surface runoff which may result from rainfall, produces four main types of soil erosion: splash erosion, sheet erosion, rill erosion, and gully erosion. Splash erosion is generally seen as the first and least severe stage in the soil erosion process, which is followed by sheet erosion, then rill erosion and finally gully erosion (the most severe of the four).
In splash erosion, the impact of a falling raindrop creates a small crater in the soil, ejecting soil particles. The ejected particles can travel a surprisingly large distance, both vertically and horizontally, even on level ground.
If the soil is saturated, or if the rainfall rate is greater than the rate at which water can infiltrate into the soil, surface runoff occurs. If the runoff has sufficient flow energy, it will transport loosened soil particles (sediment) down the slope. Sheet erosion is the transport of loosened soil particles by overland flow.
Rill erosion refers to the development of small, ephemeral concentrated flow paths which function as both sediment source and sediment delivery systems for erosion on hillslopes. Generally, where water erosion rates on disturbed upland areas are greatest, rills are active. Flow depths in rills are typically of the order of a few centimetres (about an inch) or less and along-channel slopes may be quite steep. This means that rills exhibit hydraulic physics very different from water flowing through the deeper, wider channels of streams and rivers.
Gully erosion occurs when runoff water accumulates and rapidly flows in narrow channels during or immediately after heavy rains or melting snow, removing soil to a considerable depth. A gully is distinguished from a rill based on a critical cross-sectional area of at least one square foot, i.e. the size of a channel that can no longer be erased via normal tillage operations.
Extreme gully erosion can progress to formation of badlands. These form under conditions of high relief on easily eroded bedrock in climates favorable to erosion. Conditions or disturbances that limit the growth of protective vegetation (rhexistasy) are a key element of badland formation.
Rivers and streams
Valley or stream erosion occurs with continued water flow along a linear feature. The erosion is both downward, deepening the valley, and headward, extending the valley into the hillside, creating head cuts and steep banks. In the earliest stage of stream erosion, the erosive activity is dominantly vertical, the valleys have a typical V-shaped cross-section and the stream gradient is relatively steep. When some base level is reached, the erosive activity switches to lateral erosion, which widens the valley floor and creates a narrow floodplain. The stream gradient becomes nearly flat, and lateral deposition of sediments becomes important as the stream meanders across the valley floor. In all stages of stream erosion, by far the most erosion occurs during times of flood when more and faster-moving water is available to carry a larger sediment load. In such processes, it is not the water alone that erodes: suspended abrasive particles, pebbles, and boulders can also act erosively as they traverse a surface, in a process known as traction.
Bank erosion is the wearing away of the banks of a stream or river. This is distinguished from changes on the bed of the watercourse, which is referred to as scour. Erosion and changes in the form of river banks may be measured by inserting metal rods into the bank and marking the position of the bank surface along the rods at different times.
Thermal erosion is the result of melting and weakening of permafrost due to moving water. It can occur both along rivers and at the coast. Rapid river channel migration observed in the Lena River of Siberia is due to thermal erosion, as these portions of the banks are composed of permafrost-cemented non-cohesive materials. Much of this erosion occurs as the weakened banks fail in large slumps. Thermal erosion also affects the Arctic coast, where wave action and near-shore temperatures combine to undercut permafrost bluffs along the shoreline and cause them to fail. Substantial average annual erosion rates were recorded along a segment of the Beaufort Sea shoreline from 1955 to 2002.
Most river erosion happens nearer to the mouth of a river. On a river bend, the water moves more slowly along the inner bank, so sediment is deposited there; along the outer bank, the water moves faster, so that side tends to erode.
Rapid erosion by a large river can remove enough sediments to produce a river anticline, as isostatic rebound raises rock beds unburdened by erosion of overlying beds.
Coastal erosion
Shoreline erosion, which occurs on both exposed and sheltered coasts, primarily occurs through the action of currents and waves but sea level (tidal) change can also play a role.
Hydraulic action takes place when the air in a joint is suddenly compressed by a wave closing the entrance of the joint. This then cracks it. Wave pounding is when the sheer energy of the wave hitting the cliff or rock breaks pieces off. Abrasion or corrasion is caused by waves launching sea load at the cliff. It is the most effective and rapid form of shoreline erosion (not to be confused with corrosion). Corrosion is the dissolving of rock by carbonic acid in sea water. Limestone cliffs are particularly vulnerable to this kind of erosion. Attrition is where particles/sea load carried by the waves are worn down as they hit each other and the cliffs. This then makes the material easier to wash away. The material ends up as shingle and sand. Another significant source of erosion, particularly on carbonate coastlines, is boring, scraping and grinding of organisms, a process termed bioerosion.
Sediment is transported along the coast in the direction of the prevailing current (longshore drift). When the upcurrent supply of sediment is less than the amount being carried away, erosion occurs. When the upcurrent amount of sediment is greater, sand or gravel banks will tend to form as a result of deposition. These banks may slowly migrate along the coast in the direction of the longshore drift, alternately protecting and exposing parts of the coastline. Where there is a bend in the coastline, quite often a buildup of eroded material occurs forming a long narrow bank (a spit). Armoured beaches and submerged offshore sandbanks may also protect parts of a coastline from erosion. Over the years, as the shoals gradually shift, the erosion may be redirected to attack different parts of the shore.
Erosion of a coastal surface, followed by a fall in sea level, can produce a distinctive landform called a raised beach.
Chemical erosion
Chemical erosion is the loss of matter in a landscape in the form of solutes. Chemical erosion is usually calculated from the solutes found in streams. Anders Rapp pioneered the study of chemical erosion in his work about Kärkevagge published in 1960.
Formation of sinkholes and other features of karst topography is an example of extreme chemical erosion.
Glaciers
Glaciers erode predominantly by three different processes: abrasion/scouring, plucking, and ice thrusting. In an abrasion process, debris in the basal ice scrapes along the bed, polishing and gouging the underlying rocks, similar to sandpaper on wood. Scientists have shown that, in addition to the role temperature plays in valley-deepening, other glaciological processes, such as erosion, also control cross-valley variations. In a homogeneous bedrock erosion pattern, a curved channel cross-section is created beneath the ice. Though the glacier continues to incise vertically, the shape of the channel beneath the ice eventually remains constant, reaching a U-shaped parabolic steady-state shape, as now seen in glaciated valleys. Scientists also provide a numerical estimate of the time required for the ultimate formation of a steady-shaped U-shaped valley: approximately 100,000 years. In a weak bedrock (containing material more erodible than the surrounding rocks) erosion pattern, by contrast, the amount of overdeepening is limited because ice velocities and erosion rates are reduced.
Glaciers can also cause pieces of bedrock to crack off in the process of plucking. In ice thrusting, the glacier freezes to its bed, then as it surges forward, it moves large sheets of frozen sediment at the base along with the glacier. This method produced some of the many thousands of lake basins that dot the edge of the Canadian Shield. Differences in the height of mountain ranges are not only the result of tectonic forces, such as rock uplift, but also of local climate variations. Scientists use global analysis of topography to show that glacial erosion controls the maximum height of mountains, as the relief between mountain peaks and the snow line is generally confined to altitudes less than 1500 m. The erosion caused by glaciers worldwide erodes mountains so effectively that the term glacial buzzsaw has become widely used; it describes the limiting effect of glaciers on the height of mountain ranges. As mountains grow higher, they generally allow for more glacial activity (especially in the accumulation zone above the glacial equilibrium line altitude), which causes increased rates of erosion of the mountain, decreasing mass faster than isostatic rebound can add to the mountain. This provides a good example of a negative feedback loop. Ongoing research is showing that while glaciers tend to decrease mountain size, in some areas glaciers can actually reduce the rate of erosion, acting as a glacial armor. Ice can not only erode mountains but also protect them from erosion. Depending on glacier regime, even steep alpine lands can be preserved through time with the help of ice. Scientists have supported this theory by sampling eight summits of northwestern Svalbard using the cosmogenic nuclides 10Be and 26Al, showing that northwestern Svalbard transformed from a glacier-erosion state under relatively mild glacial-maxima temperatures to a glacier-armor state occupied by cold-based, protective ice during much colder glacial-maxima temperatures as the Quaternary ice age progressed.
These processes, combined with erosion and transport by the water network beneath the glacier, leave behind glacial landforms such as moraines, drumlins, ground moraine (till), glaciokarst, kames, kame deltas, moulins, and glacial erratics in their wake, typically at the terminus or during glacier retreat.
The best-developed glacial valley morphology appears to be restricted to landscapes with low rock uplift rates (less than or equal to 2 mm per year) and high relief, leading to long turnover times. Where rock uplift rates exceed 2 mm per year, glacial valley morphology has generally been significantly modified in postglacial time. The interplay of glacial erosion and tectonic forcing governs the morphologic impact of glaciations on active orogens, both by influencing their height and by altering the patterns of erosion during subsequent glacial periods via a link between rock uplift and valley cross-sectional shape.
Floods
At extremely high flows, kolks, or vortices, are formed by large volumes of rapidly rushing water. Kolks cause extreme local erosion, plucking bedrock and creating pothole-type geographical features called rock-cut basins. Examples can be seen in the flood regions that resulted from glacial Lake Missoula, which created the channeled scablands in the Columbia Basin region of eastern Washington.
Wind erosion
Wind erosion is a major geomorphological force, especially in arid and semi-arid regions. It is also a major source of land degradation, evaporation, desertification, harmful airborne dust, and crop damage—especially after being increased far above natural rates by human activities such as deforestation, urbanization, and agriculture.
Wind erosion is of two primary varieties: deflation, where the wind picks up and carries away loose particles; and abrasion, where surfaces are worn down as they are struck by airborne particles carried by wind. Deflation is divided into three categories: (1) surface creep, where larger, heavier particles slide or roll along the ground; (2) saltation, where particles are lifted a short height into the air, and bounce and saltate across the surface of the soil; and (3) suspension, where very small and light particles are lifted into the air by the wind, and are often carried for long distances. Saltation is responsible for the majority (50–70%) of wind erosion, followed by suspension (30–40%), and then surface creep (5–25%).
Wind erosion is much more severe in arid areas and during times of drought. For example, in the Great Plains, it is estimated that soil loss due to wind erosion can be as much as 6100 times greater in drought years than in wet years.
Mass wasting
Mass wasting or mass movement is the downward and outward movement of rock and sediments on a sloped surface, mainly due to the force of gravity.
Mass wasting is an important part of the erosional process and is often the first stage in the breakdown and transport of weathered materials in mountainous areas. It moves material from higher elevations to lower elevations, where other eroding agents such as streams and glaciers can then pick up the material and move it to even lower elevations. Mass-wasting processes are occurring continuously on all slopes; some act very slowly, while others occur very suddenly, often with disastrous results. Any perceptible down-slope movement of rock or sediment is often referred to in general terms as a landslide. However, landslides can be classified in a much more detailed way that reflects the mechanisms responsible for the movement and the velocity at which the movement occurs. One of the visible topographical manifestations of a very slow form of such activity is a scree slope.
Slumping happens on steep hillsides, occurring along distinct fracture zones, often within materials like clay that, once released, may move quite rapidly downhill. Slumps often show a spoon-shaped depression, in which the material has begun to slide downhill. In some cases, the slump is caused by water beneath the slope weakening it. In many cases it is simply the result of poor engineering along highways, where it is a regular occurrence.
Surface creep is the slow movement of soil and rock debris by gravity, which is usually not perceptible except through extended observation. However, the term can also describe the rolling of dislodged soil particles by wind along the soil surface.
Submarine sediment gravity flows
On the continental slope, erosion of the ocean floor to create channels and submarine canyons can result from the rapid downslope flow of sediment gravity flows, bodies of sediment-laden water that move rapidly downslope as turbidity currents. Where erosion by turbidity currents creates oversteepened slopes it can also trigger underwater landslides and debris flows. Turbidity currents can erode channels and canyons into substrates ranging from recently deposited unconsolidated sediments to hard crystalline bedrock. Almost all continental slopes and deep ocean basins display such channels and canyons resulting from sediment gravity flows and submarine canyons act as conduits for the transfer of sediment from the continents and shallow marine environments to the deep sea. Turbidites, which are the sedimentary deposits resulting from turbidity currents, comprise some of the thickest and largest sedimentary sequences on Earth, indicating that the associated erosional processes must also have played a prominent role in Earth's history.
Factors affecting erosion rates
Climate
The amount and intensity of precipitation is the main climatic factor governing soil erosion by water. The relationship is particularly strong if heavy rainfall occurs at times when, or in locations where, the soil's surface is not well protected by vegetation. This might be during periods when agricultural activities leave the soil bare, or in semi-arid regions where vegetation is naturally sparse. Wind erosion requires strong winds, particularly during times of drought when vegetation is sparse and soil is dry (and so is more erodible). Other climatic factors such as average temperature and temperature range may also affect erosion, via their effects on vegetation and soil properties. In general, given similar vegetation and ecosystems, areas with more precipitation (especially high-intensity rainfall), more wind, or more storms are expected to have more erosion.
In some areas of the world (e.g. the mid-western US), rainfall intensity is the primary determinant of erosivity, with higher intensity rainfall generally resulting in more soil erosion by water. The size and velocity of rain drops are also important factors. Larger and higher-velocity rain drops have greater kinetic energy, and thus their impact will displace soil particles by larger distances than smaller, slower-moving rain drops.
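To make the kinetic-energy comparison concrete, here is a small Python sketch; the drop diameters and fall speeds are rough illustrative values, not measurements cited in this article.

    import math

    RHO_WATER = 1000.0                      # kg per cubic metre

    def drop_kinetic_energy(diameter_m, speed_ms):
        # E = (1/2) m v^2 for a spherical drop of water.
        volume = math.pi / 6 * diameter_m ** 3
        mass = RHO_WATER * volume
        return 0.5 * mass * speed_ms ** 2   # joules

    print(drop_kinetic_energy(0.001, 4.0))  # drizzle-sized drop: ~4e-6 J
    print(drop_kinetic_energy(0.005, 9.0))  # large drop: ~2.7e-3 J, hundreds of times more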
In other regions of the world (e.g. western Europe), runoff and erosion result from relatively low intensities of stratiform rainfall falling onto previously saturated soil. In such situations, rainfall amount rather than intensity is the main factor determining the severity of soil erosion by water. According to climate change projections, erosivity will increase significantly in Europe, and soil erosion may increase by 13–22.5% by 2050.
In Taiwan, where typhoon frequency increased significantly in the 21st century, a strong link has been drawn between the increase in storm frequency and an increase in sediment load in rivers and reservoirs, highlighting the impact climate change can have on erosion.
Vegetative cover
Vegetation acts as an interface between the atmosphere and the soil. It increases the permeability of the soil to rainwater, thus decreasing runoff. It shelters the soil from winds, which results in decreased wind erosion, as well as advantageous changes in microclimate. The roots of the plants bind the soil together, and interweave with other roots, forming a more solid mass that is less susceptible to both water and wind erosion. The removal of vegetation increases the rate of surface erosion.
Topography
The topography of the land determines the velocity at which surface runoff will flow, which in turn determines the erosivity of the runoff. Longer, steeper slopes (especially those without adequate vegetative cover) are more susceptible to very high rates of erosion during heavy rains than shorter, less steep slopes. Steeper terrain is also more prone to mudslides, landslides, and other forms of gravitational erosion processes.
Tectonics
Tectonic processes control rates and distributions of erosion at the Earth's surface. If the tectonic action causes part of the Earth's surface (e.g., a mountain range) to be raised or lowered relative to surrounding areas, this must necessarily change the gradient of the land surface. Because erosion rates are almost always sensitive to the local slope (see above), this will change the rates of erosion in the uplifted area. Active tectonics also brings fresh, unweathered rock towards the surface, where it is exposed to the action of erosion.
However, erosion can also affect tectonic processes. The removal by erosion of large amounts of rock from a particular region, and its deposition elsewhere, can result in a lightening of the load on the lower crust and mantle. Because tectonic processes are driven by gradients in the stress field developed in the crust, this unloading can in turn cause tectonic or isostatic uplift in the region. In some cases, it has been hypothesised that these twin feedbacks can act to localize and enhance zones of very rapid exhumation of deep crustal rocks beneath places on the Earth's surface with extremely high erosion rates, for example, beneath the extremely steep terrain of Nanga Parbat in the western Himalayas. Such a place has been called a "tectonic aneurysm".
Development
Human land development, in forms including agricultural and urban development, is considered a significant factor in erosion and sediment transport, which aggravates food insecurity. In Taiwan, increases in sediment load in the northern, central, and southern regions of the island can be tracked with the timeline of development for each region throughout the 20th century. The intentional removal of soil and rock by humans is a form of erosion that has been named lisasion.
Erosion at various scales
Mountain ranges
Mountain ranges take millions of years to erode to the degree that they effectively cease to exist. Scholars Pitman and Golovchenko estimate that it takes probably more than 450 million years to erode a mountain mass similar to the Himalaya into an almost-flat peneplain if there are no significant sea-level changes. Erosion of mountain massifs can create a pattern of equally high summits called summit accordance. It has been argued that extension during post-orogenic collapse is a more effective mechanism for lowering the height of orogenic mountains than erosion.
Examples of heavily eroded mountain ranges include the Timanides of Northern Russia. Erosion of this orogen has produced sediments that are now found in the East European Platform, including the Cambrian Sablya Formation near Lake Ladoga. Studies of these sediments indicate that it is likely that the erosion of the orogen began in the Cambrian and then intensified in the Ordovician.
Soils
If the erosion rate exceeds the rate of soil formation, erosion destroys the soil. Lower rates of erosion can prevent the formation of soil features that take time to develop. For example, Inceptisols develop on eroded landscapes that, if stable, would have supported the formation of more developed Alfisols.
While erosion of soils is a natural process, human activities have increased the rate at which erosion occurs globally by 10–40 times. Excessive (or accelerated) erosion causes both "on-site" and "off-site" problems. On-site impacts include decreases in agricultural productivity and (on natural landscapes) ecological collapse, both because of loss of the nutrient-rich upper soil layers. In some cases, the eventual result is desertification. Off-site effects include sedimentation of waterways and eutrophication of water bodies, as well as sediment-related damage to roads and houses. Water and wind erosion are the two primary causes of land degradation; combined, they are responsible for about 84% of the global extent of degraded land, making excessive erosion one of the most significant environmental problems.
Often in the United States, farmers cultivating highly erodible land must comply with a conservation plan to be eligible for agricultural assistance.
Consequences of human-made soil erosion
| Physical sciences | Earth science | null |
9697 | https://en.wikipedia.org/wiki/Euclidean%20space | Euclidean space | Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, which are called Euclidean n-spaces when one wants to specify their dimension. For n equal to one or two, they are commonly called respectively Euclidean lines and Euclidean planes. The qualifier "Euclidean" is used to distinguish Euclidean spaces from other spaces that were later considered in physics and modern mathematics.
Ancient Greek geometers introduced Euclidean space for modeling the physical space. Their work was collected by the ancient Greek mathematician Euclid in his Elements, with the great innovation of proving all properties of the space as theorems, by starting from a few fundamental properties, called postulates, which either were considered as evident (for example, there is exactly one straight line passing through two points), or seemed impossible to prove (parallel postulate).
After the introduction at the end of the 19th century of non-Euclidean geometries, the old postulates were re-formalized to define Euclidean spaces through axiomatic theory. Another definition of Euclidean spaces by means of vector spaces and linear algebra has been shown to be equivalent to the axiomatic definition. It is this definition that is more commonly used in modern mathematics, and detailed in this article. In all definitions, Euclidean spaces consist of points, which are defined only by the properties that they must have for forming a Euclidean space.
There is essentially only one Euclidean space of each dimension; that is, all Euclidean spaces of a given dimension are isomorphic. Therefore, it is usually possible to work with a specific Euclidean space, denoted $\mathbf{E}^n$ or $\mathbb{E}^n$, which can be represented using Cartesian coordinates as the real $n$-space $\mathbb{R}^n$ equipped with the standard dot product.
Definition
History of the definition
Euclidean space was introduced by the ancient Greeks as an abstraction of our physical space. Their great innovation, appearing in Euclid's Elements, was to build and prove all geometry by starting from a few very basic properties, which are abstracted from the physical world and cannot be mathematically proved because of the lack of more basic tools. These properties are called postulates, or axioms in modern language. This way of defining Euclidean space is still in use under the name of synthetic geometry.
In 1637, René Descartes introduced Cartesian coordinates, and showed that these allow reducing geometric problems to algebraic computations with numbers. This reduction of geometry to algebra was a major change in point of view, as, until then, the real numbers were defined in terms of lengths and distances.
Euclidean geometry was not applied in spaces of dimension more than three until the 19th century. Ludwig Schläfli generalized Euclidean geometry to spaces of dimension $n$, using both synthetic and algebraic methods, and discovered all of the regular polytopes (higher-dimensional analogues of the Platonic solids) that exist in Euclidean spaces of any dimension.
Despite the wide use of Descartes' approach, which was called analytic geometry, the definition of Euclidean space remained unchanged until the end of 19th century. The introduction of abstract vector spaces allowed their use in defining Euclidean spaces with a purely algebraic definition. This new definition has been shown to be equivalent to the classical definition in terms of geometric axioms. It is this algebraic definition that is now most often used for introducing Euclidean spaces.
Motivation of the modern definition
One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angles. For example, there are two fundamental operations (referred to as motions) on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation around a fixed point in the plane, in which all points in the plane turn around that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (usually considered as subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections (see below).
In order to make all of this mathematically precise, the theory must clearly define what is a Euclidean space, and the related notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, measurement instruments, and so on. A purely mathematical definition of Euclidean space also ignores questions of units of length and other physical dimensions: the distance in a "mathematical" space is a number, not something expressed in inches or metres.
The standard way to mathematically define a Euclidean space, as carried out in the remainder of this article, is as a set of points on which a real vector space acts — the space of translations which is equipped with an inner product. The action of translations makes the space an affine space, and this allows defining lines, planes, subspaces, dimension, and parallelism. The inner product allows defining distance and angles.
The set of $n$-tuples of real numbers equipped with the dot product is a Euclidean space of dimension $n$. Conversely, the choice of a point called the origin and an orthonormal basis of the space of translations is equivalent to defining an isomorphism between a Euclidean space of dimension $n$ and $\mathbb{R}^n$ viewed as a Euclidean space.
It follows that everything that can be said about a Euclidean space can also be said about $\mathbb{R}^n$. Therefore, many authors, especially at elementary level, call $\mathbb{R}^n$ the standard Euclidean space of dimension $n$, or simply the Euclidean space of dimension $n$.
A reason for introducing such an abstract definition of Euclidean spaces, and for working with $\mathbf{E}^n$ instead of $\mathbb{R}^n$, is that it is often preferable to work in a coordinate-free and origin-free manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is no standard origin nor any standard basis in the physical world.
Technical definition
A Euclidean vector space is a finite-dimensional inner product space over the real numbers.
A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces.
If $E$ is a Euclidean space, its associated vector space (Euclidean vector space) is often denoted $\overrightarrow{E}$. The dimension of a Euclidean space is the dimension of its associated vector space.
The elements of $E$ are called points, and are commonly denoted by capital letters. The elements of $\overrightarrow{E}$ are called Euclidean vectors or free vectors. They are also called translations, although, properly speaking, a translation is the geometric transformation resulting from the action of a Euclidean vector on the Euclidean space.
The action of a translation $v$ on a point $P$ provides a point that is denoted $P + v$. This action satisfies
$P + (v + w) = (P + v) + w.$
Note: The second $+$ in the left-hand side is a vector addition; each other $+$ denotes an action of a vector on a point. This notation is not ambiguous, as, to distinguish between the two meanings of $+$, it suffices to look at the nature of its left argument.
The fact that the action is free and transitive means that, for every pair of points $(P, Q)$, there is exactly one displacement vector $v$ such that $Q = P + v$. This vector $v$ is denoted $Q - P$ or $\overrightarrow{PQ}$.
As previously explained, some of the basic properties of Euclidean spaces result from the structure of affine space. They are described in § Affine structure and its subsections. The properties resulting from the inner product are explained in § Metric structure and its subsections.
Prototypical examples
For any vector space, the addition acts freely and transitively on the vector space itself. Thus a Euclidean vector space can be viewed as a Euclidean space that has itself as the associated vector space.
A typical case of Euclidean vector space is $\mathbb{R}^n$ viewed as a vector space equipped with the dot product as an inner product. The importance of this particular example of Euclidean space lies in the fact that every Euclidean space is isomorphic to it. More precisely, given a Euclidean space $E$ of dimension $n$, the choice of a point, called an origin, and an orthonormal basis of $\overrightarrow{E}$ defines an isomorphism of Euclidean spaces from $E$ to $\mathbb{R}^n$.
As every Euclidean space of dimension $n$ is isomorphic to it, the Euclidean space $\mathbb{R}^n$ is sometimes called the standard Euclidean space of dimension $n$.
Affine structure
Some basic properties of Euclidean spaces depend only on the fact that a Euclidean space is an affine space. They are called affine properties and include the concepts of lines, subspaces, and parallelism, which are detailed in next subsections.
Subspaces
Let $E$ be a Euclidean space and $\overrightarrow{E}$ its associated vector space.
A flat, Euclidean subspace or affine subspace of $E$ is a subset $F$ of $E$ such that
$\overrightarrow{F} = \{\overrightarrow{PQ} \mid P \in F, Q \in F\},$
the associated vector space of $F$, is a linear subspace (vector subspace) of $\overrightarrow{E}$. A Euclidean subspace $F$ is a Euclidean space with $\overrightarrow{F}$ as the associated vector space. This linear subspace $\overrightarrow{F}$ is also called the direction of $F$.
If $P$ is a point of $F$ then
$F = \{P + v \mid v \in \overrightarrow{F}\}.$
Conversely, if $P$ is a point of $E$ and $\overrightarrow{V}$ is a linear subspace of $\overrightarrow{E}$, then
$P + \overrightarrow{V} = \{P + v \mid v \in \overrightarrow{V}\}$
is a Euclidean subspace of direction $\overrightarrow{V}$. (The associated vector space of this subspace is $\overrightarrow{V}$.)
A Euclidean vector space $\overrightarrow{E}$ (that is, a Euclidean space that is equal to $\overrightarrow{E}$) has two sorts of subspaces: its Euclidean subspaces and its linear subspaces. Linear subspaces are Euclidean subspaces and a Euclidean subspace is a linear subspace if and only if it contains the zero vector.
Lines and segments
In a Euclidean space, a line is a Euclidean subspace of dimension one. Since a vector space of dimension one is spanned by any nonzero vector, a line is a set of the form
$\{P + \lambda \overrightarrow{PQ} \mid \lambda \in \mathbb{R}\},$
where $P$ and $Q$ are two distinct points of the Euclidean space as a part of the line.
It follows that there is exactly one line that passes through (contains) two distinct points. This implies that two distinct lines intersect in at most one point.
A more symmetric representation of the line passing through $P$ and $Q$ is
$\{O + (1 - \lambda)\overrightarrow{OP} + \lambda \overrightarrow{OQ} \mid \lambda \in \mathbb{R}\},$
where $O$ is an arbitrary point (not necessarily on the line).
In a Euclidean vector space, the zero vector is usually chosen for $O$; this allows simplifying the preceding formula into
$\{(1 - \lambda) P + \lambda Q \mid \lambda \in \mathbb{R}\}.$
A standard convention allows using this formula in every Euclidean space.
The line segment, or simply segment, joining the points $P$ and $Q$ is the subset of points such that $0 \le \lambda \le 1$ in the preceding formulas. It is denoted $PQ$ or $QP$; that is
$PQ = QP = \{P + \lambda \overrightarrow{PQ} \mid 0 \le \lambda \le 1\}.$
Parallelism
Two subspaces $S$ and $T$ of the same dimension in a Euclidean space are parallel if they have the same direction (i.e., the same associated vector space). Equivalently, they are parallel if there is a translation vector $v$ that maps one to the other:
$T = S + v.$
Given a point $P$ and a subspace $S$, there exists exactly one subspace that contains $P$ and is parallel to $S$, which is $P + \overrightarrow{S}$. In the case where $S$ is a line (subspace of dimension one), this property is Playfair's axiom.
It follows that in a Euclidean plane, two lines either meet in one point or are parallel.
The concept of parallel subspaces has been extended to subspaces of different dimensions: two subspaces are parallel if the direction of one of them is contained in the direction of the other.
Metric structure
The vector space $\overrightarrow{E}$ associated to a Euclidean space $E$ is an inner product space. This implies a symmetric bilinear form
$(x, y) \mapsto \langle x, y \rangle$
that is positive definite (that is, $\langle x, x \rangle$ is always positive for $x \ne 0$).
The inner product of a Euclidean space is often called dot product and denoted $x \cdot y$. This is especially the case when a Cartesian coordinate system has been chosen, as, in this case, the inner product of two vectors is the dot product of their coordinate vectors. For this reason, and for historical reasons, the dot notation is more commonly used than the bracket notation for the inner product of Euclidean spaces. This article will follow this usage; that is, $\langle x, y \rangle$ will be denoted $x \cdot y$ in the remainder of this article.
The Euclidean norm of a vector $x$ is
$\|x\| = \sqrt{x \cdot x}.$
The inner product and the norm allow expressing and proving metric and topological properties of Euclidean geometry. The next subsections describe the most fundamental ones. In these subsections, $E$ denotes an arbitrary Euclidean space, and $\overrightarrow{E}$ denotes its vector space of translations.
Distance and length
The distance (more precisely the Euclidean distance) between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is
$d(P, Q) = \left\|\overrightarrow{PQ}\right\|.$
The length of a segment $PQ$ is the distance $d(P, Q)$ between its endpoints $P$ and $Q$. It is often denoted $|PQ|$.
The distance is a metric, as it is positive definite, symmetric, and satisfies the triangle inequality
$d(P, Q) \le d(P, R) + d(R, Q).$
Moreover, the equality is true if and only if the point $R$ belongs to the segment $PQ$. This inequality means that the length of any edge of a triangle is smaller than the sum of the lengths of the other edges. This is the origin of the term triangle inequality.
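These properties are easy to check numerically; the following Python sketch (the helper dist is hypothetical, written for the example) verifies the triangle inequality and its equality case for a point on the segment.

    import math

    def dist(p, q):
        # Euclidean distance between points given as coordinate tuples.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    P, Q, R = (0.0, 0.0), (3.0, 4.0), (1.0, 3.0)
    print(dist(P, Q) <= dist(P, R) + dist(R, Q))   # True: the triangle inequality

    M = (1.5, 2.0)                                 # the midpoint of the segment PQ
    print(math.isclose(dist(P, Q), dist(P, M) + dist(M, Q)))   # True: equality case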
With the Euclidean distance, every Euclidean space is a complete metric space.
Orthogonality
Two nonzero vectors $u$ and $v$ of $\overrightarrow{E}$ (the associated vector space of a Euclidean space $E$) are perpendicular or orthogonal if their inner product is zero:
$u \cdot v = 0.$
Two linear subspaces of $\overrightarrow{E}$ are orthogonal if every nonzero vector of the first one is perpendicular to every nonzero vector of the second one. This implies that the intersection of the linear subspaces is reduced to the zero vector.
Two lines, and more generally two Euclidean subspaces (a line can be considered as one Euclidean subspace), are orthogonal if their directions (the associated vector spaces of the Euclidean subspaces) are orthogonal. Two orthogonal lines that intersect are said to be perpendicular.
Two segments $AB$ and $AC$ that share a common endpoint $A$ are perpendicular or form a right angle if the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$ are orthogonal.
If $AB$ and $AC$ form a right angle, one has
$|BC|^2 = |AB|^2 + |AC|^2.$
This is the Pythagorean theorem. Its proof is easy in this context, as, expressing this in terms of the inner product, one has, using bilinearity and symmetry of the inner product:
$|BC|^2 = \overrightarrow{BC} \cdot \overrightarrow{BC} = \left(\overrightarrow{BA} + \overrightarrow{AC}\right) \cdot \left(\overrightarrow{BA} + \overrightarrow{AC}\right) = \overrightarrow{BA} \cdot \overrightarrow{BA} + \overrightarrow{AC} \cdot \overrightarrow{AC} + 2\,\overrightarrow{BA} \cdot \overrightarrow{AC} = |AB|^2 + |AC|^2.$
Here, $\overrightarrow{BA} \cdot \overrightarrow{AC} = 0$ is used since these two vectors are orthogonal.
Angle
The (non-oriented) angle $\theta$ between two nonzero vectors $x$ and $y$ in $\overrightarrow{E}$ is
$\theta = \arccos\left(\frac{x \cdot y}{\|x\|\,\|y\|}\right),$
where $\arccos$ is the principal value of the arccosine function. By the Cauchy–Schwarz inequality, the argument of the arccosine is in the interval $[-1, 1]$. Therefore $\theta$ is real, and $0 \le \theta \le \pi$ (or $0 \le \theta \le 180°$ if angles are measured in degrees).
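This formula translates directly into code; in the following Python sketch, the clamping of the arccosine argument is a numerical safeguard added for the example, not part of the mathematical definition.

    import math

    def angle(x, y):
        # Non-oriented angle between nonzero vectors via the arccosine formula;
        # the argument is clamped to [-1, 1] to guard against round-off.
        dot = sum(a * b for a, b in zip(x, y))
        norm_x = math.sqrt(sum(a * a for a in x))
        norm_y = math.sqrt(sum(b * b for b in y))
        return math.acos(max(-1.0, min(1.0, dot / (norm_x * norm_y))))

    print(angle((1.0, 0.0), (0.0, 1.0)))   # pi/2 ~ 1.5707963...
    print(angle((1.0, 0.0), (1.0, 1.0)))   # pi/4 ~ 0.7853981...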
Angles are not useful in a Euclidean line, as they can be only 0 or $\pi$.
In an oriented Euclidean plane, one can define the oriented angle of two vectors. The oriented angle of two vectors $x$ and $y$ is then the opposite of the oriented angle of $y$ and $x$. In this case, the angle of two vectors can have any value modulo an integer multiple of $2\pi$. In particular, a reflex angle $\pi < \theta < 2\pi$ equals the negative angle $-\pi < \theta - 2\pi < 0$.
The angle of two vectors does not change if they are multiplied by positive numbers. More precisely, if $x$ and $y$ are two vectors, and $\lambda$ and $\mu$ are positive real numbers, then $\operatorname{angle}(\lambda x, \mu y) = \operatorname{angle}(x, y).$
If $A$, $B$, and $C$ are three points in a Euclidean space, the angle of the segments $AB$ and $AC$ is the angle of the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$. As the multiplication of vectors by positive numbers does not change the angle, the angle of two half-lines with initial point $A$ can be defined: it is the angle of the segments $AB$ and $AC$, where $B$ and $C$ are arbitrary points, one on each half-line. Although this is less used, one can define similarly the angle of segments or half-lines that do not share an initial point.
The angle of two lines is defined as follows. If $\theta$ is the angle of two segments, one on each line, the angle of any two other segments, one on each line, is either $\theta$ or $\pi - \theta$. One of these angles is in the interval $[0, \pi/2]$, and the other is in $[\pi/2, \pi]$. The non-oriented angle of the two lines is the one in the interval $[0, \pi/2]$. In an oriented Euclidean plane, the oriented angle of two lines belongs to the interval $[-\pi/2, \pi/2]$.
Cartesian coordinates
Every Euclidean vector space has an orthonormal basis (in fact, infinitely many in dimension higher than one, and two in dimension one), that is a basis $(e_1, \dots, e_n)$ of unit vectors ($\|e_i\| = 1$) that are pairwise orthogonal ($e_i \cdot e_j = 0$ for $i \ne j$). More precisely, given any basis $(b_1, \dots, b_n)$, the Gram–Schmidt process computes an orthonormal basis such that, for every $i$, the linear spans of $(e_1, \dots, e_i)$ and $(b_1, \dots, b_i)$ are equal.
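A minimal Python sketch of the Gram–Schmidt process as just described, written for small examples and with no attention to numerical stability:

    def gram_schmidt(basis):
        # Orthonormalize linearly independent vectors (sequences of floats);
        # the span of the first i outputs equals the span of the first i inputs.
        ortho = []
        for b in basis:
            v = list(b)
            for e in ortho:
                coeff = sum(vi * ei for vi, ei in zip(v, e))   # component of v along e
                v = [vi - coeff * ei for vi, ei in zip(v, e)]
            norm = sum(vi * vi for vi in v) ** 0.5
            ortho.append([vi / norm for vi in v])
        return ortho

    print(gram_schmidt([(1.0, 1.0), (1.0, 0.0)]))
    # [[0.707..., 0.707...], [0.707..., -0.707...]]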
Given a Euclidean space $E$, a Cartesian frame is a set of data consisting of an orthonormal basis of $\overrightarrow{E}$ and a point of $E$, called the origin and often denoted $O$. A Cartesian frame $(O, e_1, \dots, e_n)$ allows defining Cartesian coordinates for both $E$ and $\overrightarrow{E}$ in the following way.
The Cartesian coordinates of a vector $v$ of $\overrightarrow{E}$ are the coefficients of $v$ on the orthonormal basis $(e_1, \dots, e_n)$. For example, the Cartesian coordinates of a vector $v$ on an orthonormal basis $(e_1, e_2, e_3)$ (that may be named $(x, y, z)$ as a convention) in a 3-dimensional Euclidean space are $(\alpha_1, \alpha_2, \alpha_3)$ if $v = \alpha_1 e_1 + \alpha_2 e_2 + \alpha_3 e_3$. As the basis is orthonormal, the $i$-th coefficient $\alpha_i$ is equal to the dot product $v \cdot e_i$.
The Cartesian coordinates of a point $P$ of $E$ are the Cartesian coordinates of the vector $\overrightarrow{OP}$.
Other coordinates
As a Euclidean space is an affine space, one can consider an affine frame on it, which is the same as a Euclidean frame, except that the basis is not required to be orthonormal. This defines affine coordinates, sometimes called skew coordinates for emphasizing that the basis vectors are not pairwise orthogonal.
An affine basis of a Euclidean space of dimension $n$ is a set of $n + 1$ points that are not contained in a hyperplane. An affine basis defines barycentric coordinates for every point.
Many other coordinate systems can be defined on a Euclidean space $E$ of dimension $n$, in the following way. Let $f$ be a homeomorphism (or, more often, a diffeomorphism) from a dense open subset of $E$ to an open subset of $\mathbb{R}^n$. The coordinates of a point $x$ of $E$ are the components of $f(x)$. The polar coordinate system (dimension 2) and the spherical and cylindrical coordinate systems (dimension 3) are defined this way.
For points that are outside the domain of $f$, coordinates may sometimes be defined as the limit of coordinates of neighbouring points, but these coordinates may not be uniquely defined, and may not be continuous in the neighborhood of the point. For example, for the spherical coordinate system, the longitude is not defined at the pole, and on the antimeridian, the longitude passes discontinuously from –180° to +180°.
This way of defining coordinates extends easily to other mathematical structures, and in particular to manifolds.
Isometries
An isometry between two metric spaces is a bijection preserving the distance, that is
$d(f(x), f(y)) = d(x, y).$
In the case of a Euclidean vector space, an isometry that maps the origin to the origin preserves the norm
$\|f(x)\| = \|x\|,$
since the norm of a vector is its distance from the zero vector. It preserves also the inner product
$f(x) \cdot f(y) = x \cdot y,$
since
$x \cdot y = \tfrac{1}{2}\left(\|x + y\|^2 - \|x\|^2 - \|y\|^2\right).$
An isometry of Euclidean vector spaces is a linear isomorphism.
An isometry of Euclidean spaces defines an isometry of the associated Euclidean vector spaces. This implies that two isometric Euclidean spaces have the same dimension. Conversely, if $E$ and $F$ are Euclidean spaces, $O \in E$, $O' \in F$, and $\varphi \colon \overrightarrow{E} \to \overrightarrow{F}$ is an isometry, then the map $f \colon E \to F$ defined by
$f(P) = O' + \varphi\left(\overrightarrow{OP}\right)$
is an isometry of Euclidean spaces.
It follows from the preceding results that an isometry of Euclidean spaces maps lines to lines, and, more generally, Euclidean subspaces to Euclidean subspaces of the same dimension, and that the restrictions of the isometry to these subspaces are isometries of these subspaces.
Isometry with prototypical examples
If $E$ is a Euclidean space, its associated vector space $\overrightarrow{E}$ can be considered as a Euclidean space. Every point $O \in E$ defines an isometry of Euclidean spaces
$P \mapsto \overrightarrow{OP},$
which maps $O$ to the zero vector and has the identity as associated linear map. The inverse isometry is the map
$v \mapsto O + v.$
A Euclidean frame $(O, e_1, \dots, e_n)$ allows defining the map
$E \to \mathbb{R}^n, \quad P \mapsto \left(e_1 \cdot \overrightarrow{OP}, \dots, e_n \cdot \overrightarrow{OP}\right),$
which is an isometry of Euclidean spaces. The inverse isometry is
$(x_1, \dots, x_n) \mapsto O + x_1 e_1 + \cdots + x_n e_n.$
This means that, up to an isomorphism, there is exactly one Euclidean space of a given dimension.
This justifies that many authors talk of $\mathbb{R}^n$ as the Euclidean space of dimension $n$.
Euclidean group
An isometry from a Euclidean space onto itself is called a Euclidean isometry, Euclidean transformation or rigid transformation. The rigid transformations of a Euclidean space form a group (under composition), called the Euclidean group and often denoted $E(n)$ or $\operatorname{ISO}(n)$.
The simplest Euclidean transformations are translations
$P \mapsto P + v.$
They are in bijective correspondence with vectors. This is a reason for calling the vector space associated to a Euclidean space its space of translations. The translations form a normal subgroup of the Euclidean group.
A Euclidean isometry $f$ of a Euclidean space $E$ defines a linear isometry $\overrightarrow{f}$ of the associated vector space (by linear isometry, it is meant an isometry that is also a linear map) in the following way: denoting by $Q - P$ the vector $\overrightarrow{PQ}$, if $O$ is an arbitrary point of $E$, one has
$\overrightarrow{f}\left(\overrightarrow{OP}\right) = f(P) - f(O).$
It is straightforward to prove that this is a linear map that does not depend on the choice of $O$.
The map $f \mapsto \overrightarrow{f}$ is a group homomorphism from the Euclidean group onto the group of linear isometries, called the orthogonal group. The kernel of this homomorphism is the translation group, showing that it is a normal subgroup of the Euclidean group.
The isometries that fix a given point $P$ form the stabilizer subgroup of the Euclidean group with respect to $P$. The restriction to this stabilizer of the above group homomorphism is an isomorphism. So the isometries that fix a given point form a group isomorphic to the orthogonal group.
Let $P$ be a point, $f$ an isometry, and $t$ the translation that maps $P$ to $f(P)$. The isometry $g = t^{-1} \circ f$ fixes $P$. So $f = t \circ g$, and the Euclidean group is the semidirect product of the translation group and the orthogonal group.
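This decomposition can be illustrated in the plane; the following Python sketch builds a rigid motion as a rotation followed by a translation. The helper rigid_motion is written for this example only.

    import math

    def rigid_motion(theta, v):
        # A plane rigid motion f = t o g: rotation g by theta about the origin,
        # followed by the translation t by the vector v.
        c, s = math.cos(theta), math.sin(theta)
        def f(p):
            x, y = p
            return (c * x - s * y + v[0], s * x + c * y + v[1])
        return f

    f = rigid_motion(math.pi / 2, (1.0, 0.0))   # quarter turn, then shift right
    print(f((1.0, 0.0)))   # ~(1.0, 1.0): rotated to (0, 1), then translated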
The special orthogonal group is the normal subgroup of the orthogonal group that preserves handedness. It is a subgroup of index two of the orthogonal group. Its inverse image by the group homomorphism is a normal subgroup of index two of the Euclidean group, which is called the special Euclidean group or the displacement group. Its elements are called rigid motions or displacements.
Rigid motions include the identity, translations, rotations (the rigid motions that fix at least a point), and also screw motions.
Typical examples of rigid transformations that are not rigid motions are reflections, which are rigid transformations that fix a hyperplane and are not the identity. They are also the transformations consisting of changing the sign of one coordinate over some Euclidean frame.
As the special Euclidean group is a subgroup of index two of the Euclidean group, given a reflection , every rigid transformation that is not a rigid motion is the product of and a rigid motion. A glide reflection is an example of a rigid transformation that is not a rigid motion or a reflection.
All groups that have been considered in this section are Lie groups and algebraic groups.
Topology
The Euclidean distance makes a Euclidean space a metric space, and thus a topological space. This topology is called the Euclidean topology. In the case of $\mathbb{R}^n$, this topology is also the product topology.
The open sets are the subsets that contain an open ball around each of their points. In other words, open balls form a base of the topology.
The topological dimension of a Euclidean space equals its dimension. This implies that Euclidean spaces of different dimensions are not homeomorphic. Moreover, the theorem of invariance of domain asserts that a subset of a Euclidean space is open (for the subspace topology) if and only if it is homeomorphic to an open subset of a Euclidean space of the same dimension.
Euclidean spaces are complete and locally compact. That is, a closed subset of a Euclidean space is compact if it is bounded (that is, contained in a ball). In particular, closed balls are compact.
Axiomatic definitions
The definition of Euclidean spaces that has been described in this article differs fundamentally from Euclid's. In reality, Euclid did not formally define the space, because it was thought of as a description of the physical world that exists independently of the human mind. The need for a formal definition appeared only at the end of the 19th century, with the introduction of non-Euclidean geometries.
Two different approaches have been used. Felix Klein suggested defining geometries through their symmetries. The presentation of Euclidean spaces given in this article essentially derives from his Erlangen program, with the emphasis given to the groups of translations and isometries.
On the other hand, David Hilbert proposed a set of axioms, inspired by Euclid's postulates. They belong to synthetic geometry, as they do not involve any definition of real numbers. Later G. D. Birkhoff and Alfred Tarski proposed simpler sets of axioms, which use real numbers (see Birkhoff's axioms and Tarski's axioms).
In Geometric Algebra, Emil Artin has proved that all these definitions of a Euclidean space are equivalent. It is rather easy to prove that all definitions of Euclidean spaces satisfy Hilbert's axioms, and that those involving real numbers (including the above given definition) are equivalent. The difficult part of Artin's proof is the following. In Hilbert's axioms, congruence is an equivalence relation on segments. One can thus define the length of a segment as its equivalence class. One must thus prove that this length satisfies properties that characterize nonnegative real numbers. Artin proved this with axioms equivalent to those of Hilbert.
Usage
Since the ancient Greeks, Euclidean space has been used for modeling shapes in the physical world. It is thus used in many sciences, such as physics, mechanics, and astronomy. It is also widely used in all technical areas that are concerned with shapes, figures, location and position, such as architecture, geodesy, topography, navigation, industrial design, or technical drawing.
Spaces of dimension higher than three occur in several modern theories of physics; see Higher dimension. They also occur in configuration spaces of physical systems.
Besides Euclidean geometry, Euclidean spaces are also widely used in other areas of mathematics. Tangent spaces of differentiable manifolds are Euclidean vector spaces. More generally, a manifold is a space that is locally approximated by Euclidean spaces. Most non-Euclidean geometries can be modeled by a manifold, and embedded in a Euclidean space of higher dimension. For example, an elliptic space can be modeled by an ellipsoid. It is common to represent in a Euclidean space mathematical objects that are a priori not of a geometrical nature. An example among many is the usual representation of graphs.
Other geometric spaces
Since the introduction, at the end of 19th century, of non-Euclidean geometries, many sorts of spaces have been considered, about which one can do geometric reasoning in the same way as with Euclidean spaces. In general, they share some properties with Euclidean spaces, but may also have properties that could appear as rather strange. Some of these spaces use Euclidean geometry for their definition, or can be modeled as subspaces of a Euclidean space of higher dimension. When such a space is defined by geometrical axioms, embedding the space in a Euclidean space is a standard way for proving consistency of its definition, or, more precisely for proving that its theory is consistent, if Euclidean geometry is consistent (which cannot be proved).
Affine space
A Euclidean space is an affine space equipped with a metric. Affine spaces have many other uses in mathematics. In particular, as they are defined over any field, they allow geometry to be done in other contexts.
As soon as non-linear questions are considered, it is generally useful to consider affine spaces over the complex numbers as an extension of Euclidean spaces. For example, a circle and a line always have two intersection points (possibly not distinct) in the complex affine space. Therefore, most of algebraic geometry is built in complex affine spaces and affine spaces over algebraically closed fields. The shapes studied in algebraic geometry in these affine spaces are therefore called affine algebraic varieties.
Affine spaces over the rational numbers, and more generally over algebraic number fields, provide a link between (algebraic) geometry and number theory. For example, Fermat's Last Theorem can be stated as "a Fermat curve of degree higher than two has no point in the affine plane over the rationals."
Geometry in affine spaces over finite fields has also been widely studied. For example, elliptic curves over finite fields are widely used in cryptography.
Projective space
Originally, projective spaces were introduced by adding "points at infinity" to Euclidean spaces, and, more generally, to affine spaces, in order to make the assertion "two coplanar lines meet in exactly one point" true. Projective spaces share with Euclidean and affine spaces the property of being isotropic, that is, there is no property of the space that allows distinguishing between two points or two lines. Therefore, a more symmetric definition is commonly used, which consists of defining a projective space as the set of the vector lines in a vector space of dimension one more.
As with affine spaces, projective spaces are defined over any field, and are fundamental spaces of algebraic geometry.
Non-Euclidean geometries
Non-Euclidean geometry usually refers to geometrical spaces where the parallel postulate is false. They include elliptic geometry, where the sum of the angles of a triangle is more than 180°, and hyperbolic geometry, where this sum is less than 180°. Their introduction in the second half of the 19th century, and the proof that their theory is consistent (if Euclidean geometry is not contradictory), is one of the paradoxes at the origin of the foundational crisis in mathematics of the beginning of the 20th century, and motivated the systematization of axiomatic theories in mathematics.
Curved spaces
A manifold is a space that, in the neighborhood of each point, resembles a Euclidean space. In technical terms, a manifold is a topological space such that each point has a neighborhood that is homeomorphic to an open subset of a Euclidean space. Manifolds can be classified by an increasing degree of this "resemblance" into topological manifolds, differentiable manifolds, smooth manifolds, and analytic manifolds. However, none of these types of "resemblance" respects distances and angles, even approximately.
Distances and angles can be defined on a smooth manifold by providing a smoothly varying Euclidean metric on the tangent spaces at the points of the manifold (these tangent spaces are thus Euclidean vector spaces). This results in a Riemannian manifold. Generally, straight lines do not exist in a Riemannian manifold, but their role is played by geodesics, which are the "shortest paths" between two points. This allows defining distances, which are measured along geodesics, and angles between geodesics, which are the angles between their tangents in the tangent space at their intersection. So, Riemannian manifolds behave locally like a Euclidean space that has been bent.
Euclidean spaces are trivially Riemannian manifolds. A well-known example of a non-trivial Riemannian manifold is the surface of a sphere. In this case, geodesics are arcs of great circles, which are called orthodromes in the context of navigation. More generally, the spaces of non-Euclidean geometries can be realized as Riemannian manifolds.
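To make the sphere example concrete, the following minimal Python sketch computes a geodesic (orthodrome) distance with the haversine formula; the radius value and the two sample coordinates are illustrative assumptions, not values from the article.

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Geodesic (orthodrome) distance on a sphere via the haversine formula.

    Coordinates are in degrees; the radius defaults to an approximate
    Earth radius in kilometres (an illustrative assumption).
    """
    phi1, lam1, phi2, lam2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((phi2 - phi1) / 2) ** 2 \
        + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2
    return 2 * radius * asin(sqrt(h))

# Distance between two illustrative points (roughly Paris and New York):
print(great_circle_distance(48.85, 2.35, 40.71, -74.01))  # ~5840 km
```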
Pseudo-Euclidean space
An inner product of a real vector space is a positive definite bilinear form, and so is characterized by a positive definite quadratic form. A pseudo-Euclidean space is an affine space with an associated real vector space equipped with a non-degenerate quadratic form (that may be indefinite).
A fundamental example of such a space is the Minkowski space, which is the space-time of Einstein's special relativity. It is a four-dimensional space, where the metric is defined by the quadratic form
\[ Q(x, y, z, t) = x^2 + y^2 + z^2 - t^2, \]
where the last coordinate (t) is temporal, and the other three (x, y, z) are spatial (in units where the speed of light equals 1).
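A small sketch makes the indefinite signature of this quadratic form explicit; the sign convention (+, +, +, −) and the units with c = 1 follow the form quoted above, although the opposite convention is also common in the literature.

```python
def minkowski_interval(x, y, z, t):
    """Quadratic form x^2 + y^2 + z^2 - t^2 of Minkowski space
    (units where the speed of light c = 1)."""
    return x**2 + y**2 + z**2 - t**2

# The form is indefinite: spacelike separations give a positive value,
# timelike separations a negative one, and light rays give zero.
print(minkowski_interval(1, 0, 0, 0))  # 1  (spacelike)
print(minkowski_interval(0, 0, 0, 1))  # -1 (timelike)
print(minkowski_interval(1, 0, 0, 1))  # 0  (lightlike)
```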
To take gravity into account, general relativity uses a pseudo-Riemannian manifold that has Minkowski spaces as tangent spaces. The curvature of this manifold at a point is a function of the value of the gravitational field at this point.
| Mathematics | Euclidean geometry | null |
9707 | https://en.wikipedia.org/wiki/Electronegativity | Electronegativity | Electronegativity, symbolized as χ, is the tendency for an atom of a given chemical element to attract shared electrons (or electron density) when forming a chemical bond. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, and the sign and magnitude of a bond's chemical polarity, which characterizes a bond along the continuous scale from covalent to ionic bonding. The loosely defined term electropositivity is the opposite of electronegativity: it characterizes an element's tendency to donate valence electrons.
On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number and location of other electrons in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result, the less positive charge they will experience—both because of their increased distance from the nucleus and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus).
The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811,
though the concept was known before that and was studied by many chemists including Avogadro.
In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements.
The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from 0.79 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units.
As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Even so, the electronegativity of an atom is strongly correlated with the first ionization energy. The electronegativity is slightly negatively correlated (for smaller electronegativity values) and rather strongly positively correlated (for most and larger electronegativity values) with the electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations.
Caesium is the least electronegative element (0.79); fluorine is the most (3.98).
Methods of calculation
Pauling electronegativity
Pauling first proposed the concept of electronegativity in 1932 to explain why the covalent bond between two different atoms (A–B) is stronger than the average of the A–A and the B–B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding.
The difference in electronegativity between atoms A and B is given by:
\[ |\chi_A - \chi_B| = (\mathrm{eV})^{-1/2} \sqrt{E_d(\mathrm{AB}) - \frac{E_d(\mathrm{AA}) + E_d(\mathrm{BB})}{2}} \]
where the dissociation energies, Ed, of the A–B, A–A and B–B bonds are expressed in electronvolts, the factor (eV)^{-1/2} being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H–Br, 3.79 eV; H–H, 4.52 eV; Br–Br, 2.00 eV).
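As a check of the arithmetic, a minimal Python sketch can evaluate this definition on the hydrogen–bromine data quoted above; the function name is ours, chosen for illustration.

```python
from math import sqrt

def pauling_difference(e_ab, e_aa, e_bb):
    """|chi_A - chi_B| from bond dissociation energies in electronvolts.

    Implements the definition above: the square root of the "excess"
    energy of the A-B bond over the mean of the A-A and B-B bonds.
    """
    return sqrt(e_ab - (e_aa + e_bb) / 2)

# Hydrogen bromide, using the dissociation energies quoted above:
print(round(pauling_difference(3.79, 4.52, 2.00), 2))  # 0.73
```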
As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br− ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point has been fixed (usually, for H or F).
To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used.
The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely:
\[ E_d(\mathrm{AB}) = \frac{E_d(\mathrm{AA}) + E_d(\mathrm{BB})}{2} + (\chi_A - \chi_B)^2 \, \mathrm{eV} \]
or sometimes, a more accurate fit
\[ E_d(\mathrm{AB}) = \sqrt{E_d(\mathrm{AA}) \, E_d(\mathrm{BB})} + 1.3 \, (\chi_A - \chi_B)^2 \, \mathrm{eV} \]
These are approximate equations, but they hold with good accuracy. Pauling obtained the first equation by noting that a bond can be approximately represented as a quantum mechanical superposition of a covalent bond and two ionic bond-states. The covalent energy of a bond is, by quantum mechanical calculations, approximately the geometric mean of the two energies of covalent bonds of the same molecules, and there is additional energy that comes from ionic factors, i.e. the polar character of the bond.
The geometric mean is approximately equal to the arithmetic mean (which is applied in the first formula above) when the energies are of similar value. For the highly electropositive elements, where there is a larger difference between the two dissociation energies, the geometric mean is more accurate and almost always gives positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is these semi-empirical formulas for bond energy that underlie the concept of Pauling electronegativity.
The formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of the polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data.
In more complex compounds, there is an additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can be only used for single, not for multiple bonds. The enthalpy of formation of a molecule containing only single bonds can subsequently be estimated based on an electronegativity table, and it depends on the constituents and the sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error on the order of 10% but can be used to get a rough qualitative idea and understanding of a molecule.
Mulliken electronegativity
Robert S. Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons:
\[ \chi = \frac{E_i + E_{ea}}{2} \]
As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts,
\[ \chi = 0.187 \, (E_i + E_{ea}) + 0.17 \]
and for energies in kilojoules per mole,
\[ \chi = (1.97 \times 10^{-3}) (E_i + E_{ea}) + 0.19 \]
The Mulliken electronegativity can only be calculated for an element whose electron affinity is known. Measured values are available for 72 elements, while approximate values have been estimated or calculated for the remaining elements.
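A brief sketch of both forms follows, using the arithmetic-mean definition and the electronvolt fit quoted above; the chlorine data (Ei ≈ 12.97 eV, Eea ≈ 3.62 eV) are illustrative values, not taken from the article.

```python
def mulliken_absolute(ionization_energy_ev, electron_affinity_ev):
    """Absolute Mulliken electronegativity in electronvolts."""
    return (ionization_energy_ev + electron_affinity_ev) / 2

def mulliken_pauling_units(ionization_energy_ev, electron_affinity_ev):
    """Linear rescaling of the Mulliken value onto a Pauling-like scale,
    using the electronvolt fit quoted above (coefficients 0.187 and 0.17)."""
    return 0.187 * (ionization_energy_ev + electron_affinity_ev) + 0.17

# Chlorine, with illustrative data (E_i ~ 12.97 eV, E_ea ~ 3.62 eV):
print(round(mulliken_absolute(12.97, 3.62), 2))       # ~8.3 eV
print(round(mulliken_pauling_units(12.97, 3.62), 2))  # ~3.27, near Pauling's 3.16
```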
The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e.,
\[ \mu_{\mathrm{Mulliken}} = -\chi_{\mathrm{Mulliken}} = -\frac{E_i + E_{ea}}{2} \]
Allred–Rochow electronegativity
A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: the higher the charge per unit area of atomic surface, the greater the tendency of that atom to attract electrons. The effective nuclear charge, Zeff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, rcov. When rcov is expressed in picometres,
\[ \chi = 3590 \, \frac{Z_{\mathrm{eff}}}{r_{\mathrm{cov}}^2} + 0.744 \]
Sanderson electronegativity equalization
R.T. Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume. With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, s-electron energy, NMR spin-spin coupling constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. This behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics.
Allen electronegativity
Perhaps the simplest definition of electronegativity is that of Leland C. Allen, who has proposed that it is related to the average energy of the valence electrons in a free atom,
\[ \chi = \frac{n_s \varepsilon_s + n_p \varepsilon_p}{n_s + n_p} \]
where εs and εp are the one-electron energies of s- and p-electrons in the free atom and ns and np are the numbers of s- and p-electrons in the valence shell.
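A minimal sketch of this weighted average follows; the numerical one-electron energies used below are hypothetical placeholders, since real values must come from spectroscopic tables, and the conversion to Pauling units is omitted.

```python
def allen_electronegativity(eps_s, n_s, eps_p, n_p):
    """Occupation-weighted average of the valence one-electron energies,
    i.e. the definition sketched above.

    eps_s, eps_p: one-electron energies (e.g. in eV, taken as positive
    binding energies); n_s, n_p: valence-shell occupation numbers.
    """
    return (n_s * eps_s + n_p * eps_p) / (n_s + n_p)

# Hypothetical numbers, for illustration only:
print(allen_electronegativity(eps_s=25.0, n_s=2, eps_p=13.0, n_p=4))  # 17.0
```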
The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method.
On this scale, neon has the highest electronegativity of all elements, followed by fluorine, helium, and oxygen.
Correlation of electronegativity with other properties
The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties that might be affected by electronegativity. The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate the "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse.
Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself".
Trends in electronegativity
Periodic trends
In general, electronegativity increases on passing from left to right along a period and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available.
There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity and Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state with a Pauling value of 1.87 instead of the +4 state.
Variation of electronegativity with oxidation number
In inorganic chemistry, it is common to consider a single value of electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is not an invariable atomic property and, in particular, increases with the oxidation state of the element.
Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data were available. However, for most elements, there are not enough different covalent compounds for which bond dissociation energies are known to make this approach feasible.
The chemical effects of this increase in electronegativity can be seen both in the structures of oxides and halides and in the acidity of oxides and oxoacids. Hence CrO3 and Mn2O7 are acidic oxides with low melting points, while Cr2O3 is amphoteric and Mn2O3 is a completely basic oxide.
The effect can also be clearly seen in the dissociation constants pKa of the oxoacids of chlorine. The effect is much larger than could be explained by the negative charge being shared among a larger number of oxygen atoms, which would lead to a difference in pKa of log10(1/4) ≈ −0.6 between hypochlorous acid and perchloric acid. As the oxidation state of the central chlorine atom increases, more electron density is drawn from the oxygen atoms onto the chlorine, diminishing the partial negative charge of individual oxygen atoms. At the same time, the positive partial charge on the hydrogen increases with a higher oxidation state. This explains the observed increased acidity with an increasing oxidation state in the oxoacids of chlorine.
Electronegativity and hybridization scheme
The electronegativity of an atom changes depending on the hybridization of the orbital employed in bonding. Electrons in s orbitals are held more tightly than electrons in p orbitals. Hence, a bond to an atom that employs an spx hybrid orbital for bonding will be more heavily polarized to that atom when the hybrid orbital has more s character. That is, when electronegativities are compared for different hybridization schemes of a given element, the order χ(sp³) < χ(sp²) < χ(sp) holds (the trend should apply to non-integer hybridization indices as well).
Group electronegativity
In organic chemistry, electronegativity is associated more with different functional groups than with individual atoms. The terms group electronegativity and substituent electronegativity are used synonymously. However, it is common to distinguish between the inductive effect and the resonance effect, which might be described as σ- and π-electronegativities, respectively. There are a number of linear free-energy relationships that have been used to quantify these effects, of which the Hammett equation is the best known. Kabachnik Parameters are group electronegativities for use in organophosphorus chemistry.
Electropositivity
Electropositivity is a measure of an element's ability to donate electrons, and therefore form positive ions; thus, it is the antipode of electronegativity.
Mainly, this is an attribute of metals, meaning that, in general, the greater the metallic character of an element the greater the electropositivity. Therefore, the alkali metals are the most electropositive of all. This is because they have a single electron in their outer shell and, as this is relatively far from the nucleus of the atom, it is easily lost; in other words, these metals have low ionization energies.
While electronegativity increases along periods in the periodic table, and decreases down groups, electropositivity decreases along periods (from left to right) and increases down groups. This means that elements in the upper right of the periodic table of elements (oxygen, sulfur, chlorine, etc.) will have the greatest electronegativity, and those in the lower-left (rubidium, caesium, and francium) the greatest electropositivity.
| Physical sciences | Periodic table | Chemistry |
9710 | https://en.wikipedia.org/wiki/Elementary%20algebra | Elementary algebra | Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values).
This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers.
It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations.
Algebraic operations
Algebraic notation
Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression 3x² − 2xy + c has the following components:
A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a, b and c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually printed in italics.
Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y may be written 2xy.
Usually terms with the highest power (exponent) are written on the left, for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise when the exponent (power) is one (e.g. 3x¹ is written 3x). When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1). However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents.
Alternative notation
Other types of notation are used in algebraic expressions when the required formatting is not available, or cannot be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x², in plain text, and in the TeX mark-up language, the caret symbol represents exponentiation, so x² is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, 3x is written "3*x".
Concepts
Variables
Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons.
Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20.
Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m, where m is the number of minutes.
Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by c = πd.
Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as (a + b) = (b + a).
Simplifying expressions
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example,
Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient).
Multiplied terms are simplified using exponents. For example, x × x × x is represented as x³.
Like terms are added together, for example, 2x² + 3ab − x² + ab is written as x² + 4ab, because the terms containing x² are added together, and the terms containing ab are added together.
Brackets can be "multiplied out", using the distributive property. For example, x(2x + 3) can be written as (x × 2x) + (x × 3) which can be written as 2x² + 3x.
Expressions can be factored. For example, 6x⁵ + 3x², by dividing both terms by the common factor 3x², can be written as 3x²(2x³ + 1).
Equations
An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the length of the sides of a right angle triangle:
\[ c^2 = a^2 + b^2 \]
This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b.
An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.
Another type of equation is inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b, where > represents 'greater than', and a < b, where < represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.
Properties of equality
By definition, equality is an equivalence relation, meaning it is reflexive (i.e. b = b), symmetric (i.e. if a = b then b = a), and transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties:
if a = b and c = d then a + c = b + d and ac = bd;
if a = b then a + c = b + c and ac = bc;
more generally, for any function f, if a = b then f(a) = f(b).
Properties of inequality
The relations less than < and greater than > have the property of transitivity:
If a < b and b < c then a < c;
If a < b and c < d then a + c < b + d;
If a < b and c > 0 then ac < bc;
If a < b and c < 0 then ac > bc.
By reversing the inequation, < and > can be swapped, for example:
a < b is equivalent to b > a
Substitution
Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a × 5 makes a new expression 3 × 5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a² is meant as the definition of a² as the product of a with itself, substituting 3 for a informs the reader of this statement that 3² means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x cannot be 1.
If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0.
If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and bc for b (and, with bc = 0, substitutes b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0."
Solving algebraic equations
The following sections lay out examples of some of the types of algebraic equations that may be encountered.
Linear equations with one variable
Linear equations are so-called, because when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider:
Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child?
Equivalent equation: 2x + 4 = 12, where x represents the child's age.
To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows:
In symbols: 2x + 4 = 12, so 2x = 12 − 4 = 8 and x = 8 ÷ 2 = 4. In words: the child is 4 years old.
The general form of a linear equation with one variable can be written as: ax + b = c, with a ≠ 0.
Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a.
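The general solution translates directly into a one-line function; this sketch assumes a ≠ 0, as required for the equation to be linear in x.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x, assuming a != 0."""
    if a == 0:
        raise ValueError("not a linear equation in x: a must be nonzero")
    return (c - b) / a

# The age problem above: 2*x + 4 = 12
print(solve_linear(2, 4, 12))  # 4.0
```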
Linear equations with two variables
A linear equation with two variables has many (i.e. an infinite number of) solutions. For example:
Problem in words: A father is 22 years older than his son. How old are they?
Equivalent equation: y = x + 22, where y is the father's age and x is the son's age.
That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can be solved as described above.
To solve a linear equation with two variables (unknowns), requires two related equations. For example, if it was also revealed that:
Problem in words
In 10 years, the father will be twice as old as his son.
Equivalent equation: y + 10 = 2 × (x + 10)
Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method): rewriting the equations as y − x = 22 and y − 2x = 10, and subtracting the second from the first, gives x = 12.
In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations.
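A quick check of this solution in Python; the two equations are as reconstructed above.

```python
# Solve the pair y = x + 22 and y + 10 = 2*(x + 10):
# substituting the first into the second gives x + 32 = 2*x + 20,
# hence x = 12 and y = 34.
x = 12
y = x + 22
assert y + 10 == 2 * (x + 10)  # in 10 years the father is twice as old
print(x, y)  # 12 34
```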
For other ways to solve this kind of equations, see below, System of linear equations.
Quadratic equations
A quadratic equation is one which includes a term with an exponent of 2, for example, x², and no term with a higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form
\[ x^2 + px + q = 0 \]
where p = b/a and q = c/a. Solving this, by a process known as completing the square, leads to the quadratic formula
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, \]
where the symbol "±" indicates that both
\[ x = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \quad \text{and} \quad x = \frac{-b - \sqrt{b^2 - 4ac}}{2a} \]
are solutions of the quadratic equation.
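The formula translates directly into code; this sketch uses cmath so that the case of no real solutions, discussed below, still yields the two complex roots.

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the quadratic formula.

    cmath.sqrt handles a negative discriminant, so equations without
    real solutions return their two complex roots instead of failing.
    """
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, 3, -10))  # ((2+0j), (-5+0j))
print(quadratic_roots(1, 0, 1))    # (1j, -1j): no real solutions
```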
Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring:
\[ x^2 + 3x - 10 = 0 \]
which is the same thing as
\[ (x + 5)(x - 2) = 0. \]
It follows from the zero-product property that either x = −5 or x = 2 are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example,
\[ x^2 + 1 = 0 \]
has no real number solution since no real number squared equals −1.
Sometimes a quadratic equation has a root of multiplicity 2, such as:
\[ x^2 + 2x + 1 = 0. \]
For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as
\[ (x + 1)(x + 1) = 0. \]
Complex numbers
All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation
\[ x^2 + x + 1 = 0 \]
has solutions
\[ x = \frac{-1 + \sqrt{-3}}{2} \quad \text{and} \quad x = \frac{-1 - \sqrt{-3}}{2}. \]
Since \(\sqrt{-3}\) is not any real number, both of these solutions for x are complex numbers.
Exponential and logarithmic equations
An exponential equation is one which has the form aˣ = b for a > 0, which has solution
\[ x = \log_a b = \frac{\ln b}{\ln a} \]
when b > 0. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if
\[ 3 \cdot 2^{x-1} + 1 = 10 \]
then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3, we obtain
\[ 2^{x-1} = 3 \]
whence
\[ x - 1 = \log_2 3 \]
or
\[ x = \log_2 3 + 1. \]
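A short numeric check of this example; the equation 3 · 2^(x−1) + 1 = 10 is as reconstructed above.

```python
from math import log

# Solve 3 * 2**(x - 1) + 1 = 10 following the steps above:
# subtract 1, divide by 3, giving 2**(x - 1) = 3, so x = 1 + log2(3).
x = 1 + log(3, 2)
print(x)                   # ~2.585
print(3 * 2**(x - 1) + 1)  # ~10.0, confirming the solution
```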
A logarithmic equation is an equation of the form log_a(x) = b for a > 0, which has solution
\[ x = a^b. \]
For example, if
\[ 4 \log_5(x - 3) - 2 = 6 \]
then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get
\[ \log_5(x - 3) = 2 \]
whence
\[ x - 3 = 5^2 = 25 \]
from which we obtain
\[ x = 28. \]
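And a corresponding check for the logarithmic example, again as reconstructed above.

```python
from math import log

# Solve 4 * log5(x - 3) - 2 = 6 following the steps above:
# add 2, divide by 4, giving log5(x - 3) = 2, so x = 5**2 + 3 = 28.
x = 5**2 + 3
print(x)                     # 28
print(4 * log(x - 3, 5) - 2) # 6.0, confirming the solution
```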
Radical equations
A radical equation is one that includes a radical sign, which includes square roots, \(\sqrt{x}\), cube roots, \(\sqrt[3]{x}\), and nth roots, \(\sqrt[n]{x}\). Recall that an nth root can be rewritten in exponential format, so that \(\sqrt[n]{x}\) is equivalent to \(x^{1/n}\). Combined with regular exponents (powers), \(\sqrt{x^3}\) (the square root of x cubed) can be rewritten as \(x^{3/2}\). So a common form of a radical equation is \(\sqrt[n]{x^m} = a\) (equivalent to \(x^{m/n} = a\)) where m and n are integers. It has real solution(s): \(x = a^{n/m}\) when m is odd, \(x = \pm a^{n/m}\) when m is even and a ≥ 0, and no real solution when m is even and a < 0.
For example, if:
\[ (x + 5)^{2/3} = 4 \]
then
\[ x + 5 = \pm (\sqrt{4})^3 = \pm 8 \]
and thus
\[ x = -5 + 8 = 3 \quad \text{or} \quad x = -5 - 8 = -13. \]
System of linear equations
There are different methods to solve a system of linear equations with two variables.
Elimination method
An example of solving a system of linear equations is by using the elimination method:
\[ \begin{cases} 4x + 2y = 14 \\ 2x - y = 1 \end{cases} \]
Multiplying the terms in the second equation by 2:
\[ 4x - 2y = 2 \]
Adding the two equations together to get:
\[ 8x = 16 \]
which simplifies to
\[ x = 2. \]
Since the fact that x = 2 is known, it is then possible to deduce that y = 3 from either of the original two equations (by using 2 instead of x). The full solution to this problem is then
\[ (x, y) = (2, 3). \]
This is not the only way to solve this specific system; y could have been resolved before x.
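The elimination computation generalizes to any two-variable linear system with a unique solution; this sketch uses Cramer's rule, which is equivalent to eliminating one variable, and the example system is the one reconstructed above.

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule,
    assuming the determinant is nonzero (a unique solution exists)."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution (inconsistent or undetermined)")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# The system solved above: 4x + 2y = 14 and 2x - y = 1.
print(solve_2x2(4, 2, 14, 2, -1, 1))  # (2.0, 3.0)
```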
Substitution method
Another way of solving the same system of linear equations is by substitution.
An equivalent expression for y can be deduced by using one of the two equations. Using the second equation:
\[ 2x - y = 1 \]
Subtracting 2x from each side of the equation:
\[ -y = 1 - 2x \]
and multiplying by −1:
\[ y = 2x - 1. \]
Using this value in the first equation in the original system:
\[ 4x + 2(2x - 1) = 14, \quad \text{i.e.} \quad 8x - 2 = 14. \]
Adding 2 on each side of the equation:
\[ 8x = 16 \]
which simplifies to
\[ x = 2. \]
Using this value in one of the equations, the same solution as in the previous method is obtained: y = 3.
This is not the only way to solve this specific system; in this case as well, y could have been solved before x.
Other types of systems of linear equations
Inconsistent systems
In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is
\[ \begin{cases} x + y = 1 \\ 0x + 0y = 2 \end{cases} \]
As 0 ≠ 2, the second equation in the system has no solution. Therefore, the system has no solution.
However, not all inconsistent systems are recognized at first sight. As an example, consider the system
\[ \begin{cases} 4x + 2y = 12 \\ -2x - y = -4 \end{cases} \]
Multiplying by 2 both sides of the second equation, and adding it to the first one, results in
\[ 0x + 0y = 4 \]
which clearly has no solution.
Undetermined systems
There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning, a unique pair of values for x and y). For example:
\[ \begin{cases} 4x + 2y = 12 \\ -2x - y = -6 \end{cases} \]
Isolating y in the second equation:
\[ y = -2x + 6 \]
And using this value in the first equation in the system:
\[ 4x + 2(-2x + 6) = 12, \quad \text{i.e.} \quad 12 = 12. \]
The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = −2x + 6. There is an infinite number of solutions for this system.
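The three situations discussed in this section (unique solution, inconsistent, infinitely many solutions) can be distinguished mechanically; this sketch classifies a two-variable system by its determinant, leaving aside the degenerate case of two all-zero equations.

```python
def classify_2x2(a1, b1, c1, a2, b2, c2):
    """Classify a1*x + b1*y = c1, a2*x + b2*y = c2 as having a unique
    solution, no solution (inconsistent), or infinitely many solutions."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        return "unique solution"
    # The left-hand sides are proportional; check the right-hand sides.
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return "infinitely many solutions"
    return "inconsistent"

print(classify_2x2(4, 2, 14, 2, -1, 1))    # unique solution
print(classify_2x2(4, 2, 12, -2, -1, -4))  # inconsistent
print(classify_2x2(4, 2, 12, -2, -1, -6))  # infinitely many solutions
```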
Over- and underdetermined systems
Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is
\[ \begin{cases} x + 2y = 10 \\ y - z = 2 \end{cases} \]
When trying to solve it, one is led to express some variables as functions of the other ones if any solutions exist, but cannot express all solutions numerically because there are an infinite number of them if there are any.
A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.
| Mathematics | Algebra | null |
9730 | https://en.wikipedia.org/wiki/Electron%20microscope | Electron microscope | An electron microscope is a microscope that uses a beam of electrons as a source of illumination. They use electron optics that are analogous to the glass lenses of an optical light microscope to control the electron beam, for instance focusing them to produce magnified images or electron diffraction patterns. As the wavelength of an electron can be up to 100,000 times smaller than that of visible light, electron microscopes have a much higher resolution of about 0.1 nm, which compares to about 200 nm for light microscopes. Electron microscope may refer to:
Transmission electron microscopy (TEM) where swift electrons go through a thin sample
Scanning transmission electron microscopy (STEM) which is similar to TEM with a scanned electron probe
Scanning electron microscope (SEM) which is similar to STEM, but with thick samples
Electron microprobe, which is similar to a SEM, but used more for chemical analysis
Low-energy electron microscopy (LEEM), used to image surfaces
Photoemission electron microscopy (PEEM) which is similar to LEEM using electrons emitted from surfaces by photons
Additional details can be found in the above links. This article contains some general information mainly about transmission electron microscopes.
History
Many developments laid the groundwork of the electron optics used in microscopes. One significant step was the work of Hertz in 1883 who made a cathode-ray tube with electrostatic and magnetic deflection, demonstrating manipulation of the direction of an electron beam. Others were focusing of the electrons by an axial magnetic field by Emil Wiechert in 1899, improved oxide-coated cathodes which produced more electrons by Arthur Wehnelt in 1905 and the development of the electromagnetic lens in 1926 by Hans Busch. According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince him to build an electron microscope, for which Szilárd had filed a patent.
To this day the issue of who invented the transmission electron microscope is controversial. In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias (Professor of High Voltage Technology and Electrical Installations) appointed Max Knoll to lead a team of researchers to advance research on electron beams and cathode-ray oscilloscopes. The team consisted of several PhD students including Ernst Ruska. In 1931, Max Knoll and Ernst Ruska successfully generated magnified images of mesh grids placed over an anode aperture. The device, a replica of which is shown in the figure, used two magnetic lenses to achieve higher magnifications, the first electron microscope. (Max Knoll died in 1969, so did not receive a share of the 1986 Nobel prize for the invention of electron microscopes.)
Apparently independent of this effort was work at Siemens-Schuckert by Reinhold Rüdenberg. According to patent law (U.S. Patent No. 2058914 and 2070318, both filed in 1932), he is the inventor of the electron microscope, but it is not clear when he had a working instrument. He stated in a very brief article in 1932 that Siemens had been working on this for some years before the patents were filed in 1932, claiming that his effort was parallel to the university development. He died in 1961, so similar to Max Knoll, was not eligible for a share of the 1986 Nobel prize.
In the following year, 1933, Ruska and Knoll built the first electron microscope that exceeded the resolution of an optical (light) microscope. Four years later, in 1937, Siemens financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska, Ernst's brother, to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. Siemens produced the first commercial electron microscope in 1938. The first North American electron microscopes were constructed in the 1930s, at the Washington State University by Anderson and Fitzsimmons and at the University of Toronto by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus. Siemens produced a transmission electron microscope (TEM) in 1939. Although current transmission electron microscopes are capable of two million times magnification, as scientific instruments they remain similar but with improved optics.
In the 1940s, high-resolution electron microscopes were developed, enabling greater magnification and resolution. By 1965, Albert Crewe at the University of Chicago introduced the scanning transmission electron microscope using a field emission source, enabling scanning microscopes at high resolution. By the early 1980s improvements in mechanical stability as well as the use of higher accelerating voltages enabled imaging of materials at the atomic scale. In the 1980s, the field emission gun became common for electron microscopes, improving the image quality due to the additional coherence and lower chromatic aberrations. The 2000s were marked by advancements in aberration-corrected electron microscopy, allowing for significant improvements in resolution and clarity of images.
Types
Transmission electron microscope (TEM)
The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. An electron beam is produced by an electron gun, with the electrons typically having energies in the range 20 to 400 keV, focused by electromagnetic lenses, and transmitted through the specimen. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by lenses of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a detector. For example, the image may be viewed directly by an operator using a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. A high-resolution phosphor may also be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. Direct electron detectors have no scintillator and are directly exposed to the electron beam, which addresses some of the limitations of scintillator-coupled cameras.
The resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nano-technologies research and development.
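The resolution figures above trace back to the electron wavelength. A short sketch evaluates the relativistically corrected de Broglie wavelength for typical TEM accelerating voltages; the constants are standard SI values, and the specific voltages are illustrative.

```python
from math import sqrt

# Physical constants (SI)
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(voltage):
    """Relativistically corrected de Broglie wavelength (in metres)
    of an electron accelerated through `voltage` volts."""
    energy = e * voltage  # kinetic energy in joules
    return h / sqrt(2 * m_e * energy * (1 + energy / (2 * m_e * c * c)))

for kv in (100, 300):
    print(kv, "kV ->", electron_wavelength(kv * 1e3) * 1e12, "pm")
# ~3.7 pm at 100 kV and ~2.0 pm at 300 kV: far smaller than the achieved
# resolutions, since aberrations, not wavelength, are the bottleneck.
```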
Scanning transmission electron microscope (STEM)
The STEM rasters a focused incident probe across a specimen. The high resolution of the TEM is thus possible in STEM. The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical techniques, but also means that image data is acquired serially rather than in parallel.
Scanning electron microscope (SEM)
The SEM produces images by probing the specimen with a focused electron beam that is scanned across the specimen (raster scanning). When the electron beam interacts with the specimen, it loses energy by a variety of mechanisms. These interactions lead to, among other events, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission (cathodoluminescence) or X-ray emission, all of which provide signals carrying information about the properties of the specimen surface, such as its topography and composition. The image displayed by an SEM maps the varying intensity of any of these signals into the image, in a position corresponding to the position of the beam on the specimen when the signal was generated.
SEMs are different from TEMs in that they use electrons with much lower energy, generally below 20 keV, while TEMs generally use electrons with energies in the range of 80-300 keV. Thus, the electron sources and optics of the two microscopes have different designs, and they are normally separate instruments.
Main operating modes
Diffraction contrast imaging
Diffraction contrast uses the variation in either or both the direction of diffracted electrons or their amplitude as the contrast mechanism.
Phase contrast imaging
Phase contrast imaging involves generating contrast, for instance around edges, by defocusing the microscope.
High resolution imaging
Chemical analysis
Electron diffraction
Transmission electron microscopes can be used in electron diffraction mode where a map of the angles of the electrons leaving the sample is produced. The advantages of electron diffraction over X-ray crystallography are primarily in the size of the crystals. In X-ray crystallography, crystals are commonly visible by the naked eye and are generally in the hundreds of micrometers in length. In comparison, crystals for electron diffraction must be less than a few hundred nanometers in thickness, and have no lower boundary of size. Additionally, electron diffraction is done on a TEM, which can also be used to obtain many other types of information, rather than requiring a separate instrument.
Sample preparation
Samples for electron microscopes mostly cannot be observed directly. They need to be prepared to stabilize them and to enhance contrast. Preparation techniques differ vastly with respect to the sample, the specific qualities to be observed, and the specific microscope used.
Scanning Electron Microscope (SEM)
To prevent charging and enhance the signal in SEM, non-conductive samples (e.g. biological samples as in figure) can be sputter-coated in a thin film of metal.
Transmission electron microscope
Materials to be viewed in a transmission electron microscope (TEM) may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required:
Chemical fixation – for biological specimens this aims to stabilize the specimen's mobile macromolecular structure by chemical crosslinking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and lipids with osmium tetroxide.
Cryofixation – freezing a specimen so that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its native state. Methods to achieve this vitrification include plunge freezing rapidly in liquid ethane, and high pressure freezing. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS) and cryo-focused ion beam milling of lamellae, it is now possible to observe samples from virtually any biological specimen close to its native state.
Dehydration – replacement of water with organic solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins. | Technology | Optical instruments | null |
9735 | https://en.wikipedia.org/wiki/Electromagnetic%20field | Electromagnetic field | An electromagnetic field (also EM field) is a physical field, described by mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field.
Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave.
The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion.
The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics.
History
The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described his experiments rubbing animal fur on various materials such as amber, creating static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole: the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831.
In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J. Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics.
Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only the empirical findings like Faraday's and Ampere's laws combined with practical experience.
Mathematical description
There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).
If only the electric field ($\mathbf{E}$) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field ($\mathbf{B}$) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.
The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
Gauss's law: $$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$$
Gauss's law for magnetism: $$\nabla \cdot \mathbf{B} = 0$$
Faraday's law: $$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$
Ampère–Maxwell law: $$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$
where $\rho$ is the charge density, which is a function of time and position, $\varepsilon_0$ is the vacuum permittivity, $\mu_0$ is the vacuum permeability, and $\mathbf{J}$ is the current density vector, also a function of time and position. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
The Lorentz force law, $\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$, governs the interaction of the electromagnetic field with charged matter.
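As a worked illustration (a minimal sketch; the particle and field values are illustrative assumptions, not taken from the article), the force law is straightforward to evaluate numerically:

```python
import numpy as np

# Illustrative values in SI units: a proton moving through uniform fields.
q = 1.602e-19                      # proton charge, C
E = np.array([0.0, 1.0e3, 0.0])    # electric field, V/m
B = np.array([0.0, 0.0, 0.5])      # magnetic field, T
v = np.array([2.0e5, 0.0, 0.0])    # particle velocity, m/s

# Lorentz force law: F = q (E + v x B).
F = q * (E + np.cross(v, B))
print(F)  # force vector in newtons; the magnetic part is perpendicular to both v and B
```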
When a field crosses from one medium into another, its behavior changes according to the properties of the media.
Properties of the field
Electrostatics and magnetostatics
The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field,
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \quad \text{and} \quad \nabla \times \mathbf{E} = 0,$$
along with two formulae that involve the magnetic field:
$$\nabla \cdot \mathbf{B} = 0 \quad \text{and} \quad \nabla \times \mathbf{B} = \mu_0 \mathbf{J}.$$
These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena.
Transformations of electromagnetic fields
Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a magnetic field must be present.
In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields.
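For reference, the standard textbook form of these rules (not spelled out in the article) for a boost with velocity $\mathbf{v}$ and Lorentz factor $\gamma$ splits the fields into components parallel and perpendicular to the motion:
$$\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad \mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},$$
$$\mathbf{E}'_{\perp} = \gamma\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})_{\perp}, \qquad \mathbf{B}'_{\perp} = \gamma\left(\mathbf{B} - \frac{\mathbf{v} \times \mathbf{E}}{c^{2}}\right)_{\perp}.$$
In the wire example above, the purely magnetic laboratory-frame field acquires an electric component in the frame of the moving test charge, which accounts for the force it feels.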
Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field. Since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a consequence of the frame of measurement. The fact that the two kinds of field variation can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved, which is simply being observed differently.
Reciprocal behavior of electric and magnetic fields
The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator.
Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor.
Behavior of the fields in the absence of charges or currents
Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where $\rho$ and $\mathbf{J}$ are zero – the electric and magnetic fields satisfy these electromagnetic wave equations:
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{E} = 0 \quad \text{and} \quad \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{B} = 0,$$
where $c = 1/\sqrt{\mu_0 \varepsilon_0}$ is the speed of light in free space.
James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.
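As a quick numerical check (a minimal sketch; the constants are the standard SI values, quoted here for illustration), the propagation speed implied by the vacuum wave equations, $c = 1/\sqrt{\mu_0 \varepsilon_0}$, comes out to the measured speed of light:

```python
import math

mu_0 = 4 * math.pi * 1e-7       # vacuum permeability, H/m (classical defined value)
epsilon_0 = 8.8541878128e-12    # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.6e} m/s")  # ~2.997925e8 m/s, the speed of light
```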
Time-varying EM fields in Maxwell's equations
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.
A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.
A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.
Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.
Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies.
Health and safety
The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields and the length of the exposure. Low-frequency, low-intensity, and short-duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, is known to cause significant harm in some circumstances.
| Physical sciences | Electrodynamics | null |
9737 | https://en.wikipedia.org/wiki/Eugenics | Eugenics | Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have attempted to alter the frequency of various human phenotypes by inhibiting the fertility of people and groups they considered inferior, or promoting that of those considered superior.
The contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g. Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock.
Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, British-Indian scientist J. B. S. Haldane wrote in 1940 that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Early eugenicists were mostly concerned with factors of measured intelligence that often correlated strongly with social class.
Although it originated as a progressive social movement in the 19th century, in contemporary usage in the 21st century, the term is closely associated with scientific racism. New, liberal eugenics seeks to dissociate itself from old, authoritarian eugenics by rejecting coercive state programs and relying on parental choice.
Common distinctions
Eugenic programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction.
In other words, positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the eminently intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit.
As opposed to "euthenics"
Historical eugenics
Ancient and medieval origins
Academic origins
The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, directly drawing on the recent work of his half-cousin Charles Darwin delineating natural selection. He published his observations and conclusions chiefly in his influential book Inquiries into Human Faculty and Its Development. Galton himself defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations". The first to systematically apply Darwinian theory to human relations, Galton believed that various desirable human qualities were also hereditary ones, although Darwin strongly disagreed with this elaboration of his theory.
Eugenics became an academic discipline at many colleges and universities and received funding from various sources. Organizations were formed to win public support for and to sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.
Three International Eugenics Conferences presented a global venue for eugenicists, with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies in the United States were first implemented by state-level legislators in the early 1900s. Eugenic policies also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium, Brazil, Canada, Japan and Sweden.
Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed eugenics as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics").
In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races.
Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty.
As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics"; which focuses on individual freedom and allegedly pulls away from racism, sexism or a focus on intelligence.
Early opposition
Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Franz Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement.
Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists who were themselves eugenicists, such as J. B. S. Haldane and R. A. Fisher, however, also expressed skepticism in the belief that sterilization of "defectives" (i.e. a purely negative eugenics) would lead to the disappearance of undesirable genetic traits.
Among institutions, the Catholic Church was an opponent of state-enforced sterilizations, but accepted isolating people with hereditary diseases so as not to let them reproduce. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason."
In fact, more generally, "[m]uch of the opposition to eugenics during that era, at least in Europe, came from the right." The eugenicists' political successes in Germany and Scandinavia were not at all matched in such countries as Poland and Czechoslovakia, even though measures had been proposed there, largely because of the Catholic church's moderating influence.
Concerns over human devolution
Dysgenics
Compulsory sterilization
Eugenic feminism
North American eugenics
Eugenics in Mexico
Nazism and the decline of eugenics
The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and, once he took power, emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit"; this led to segregation, institutionalization, sterilization, and even mass murder. The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust.
By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons".
In Singapore
Lee Kuan Yew, the founding father of Singapore, actively promoted eugenics as late as 1983. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. For this purpose, the "Graduate Mother Scheme" was introduced, offering incentives for graduate women to marry at rates comparable to those of the rest of the populace. The incentives were extremely unpopular, were regarded as eugenic, and were seen as discriminatory towards Singapore's non-Chinese ethnic population. In 1985, the incentives were partly abandoned as ineffective, while the government matchmaking agency, the Social Development Network, remains active.
Modern eugenics
Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, sparking renewed interest in the topic.
Liberal eugenics, also called new eugenics, aims to make genetic interventions morally acceptable by rejecting coercive state programs and relying on parental choice. Bioethicist Nicholas Agar, who coined the term, argues for example that the state should only intervene to forbid interventions that excessively limit a child's ability to shape their own future. Unlike "authoritarian" or "old" eugenics, liberal eugenics draws on modern scientific knowledge of genomics to enable informed choices aimed at improving well-being. Julian Savulescu further argues that some eugenic practices, like prenatal screening for Down syndrome, are already widely practiced without being labeled "eugenics", as they are seen as enhancing freedom rather than restricting it.
Some critics, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a "back door to eugenics". This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products". The United Nations' International Bioethics Committee also noted that while human genetic engineering should not be confused with the 20th century eugenics movements, it nonetheless challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want or cannot afford the technology.
In 2025, geneticist Peter Visscher published a paper in Nature, arguing genome editing of human embryos and germ cells may become feasible in the 21st century, and raising ethical considerations in the context of previous eugenics movements. A response argued that human embryo genetic editing is "unsafe and unproven". Nature also published an editorial, stating: "The fear that polygenic gene editing could be used for eugenics looms large among them, and is, in part, why no country currently allows genome editing in a human embryo, even for single variants".
Contested scientific status
One general concern is that the reduced genetic diversity which some argue would be a likely feature of long-term, species-wide eugenics plans could eventually result in inbreeding depression, increased spread of infectious disease, and decreased resilience to changes in the environment.
Arguments for scientific validity
In his original lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson claimed that everything concerning eugenics fell into the field of medicine. Anthropologist Aleš Hrdlička said in 1918 that "[t]he growing science of eugenics will essentially become applied anthropology." The economist John Maynard Keynes was a lifelong proponent of eugenics and described it as a branch of sociology.
In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.
Objections to scientific validity
Amanda Caleb, Professor of Medical Humanities at Geisinger Commonwealth School of Medicine, says "Eugenic laws and policies are now understood as part of a specious devotion to a pseudoscience that actively dehumanizes to support political agendas and not true science or medicine."
The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated the event of genetic mutation occurring outside of inheritance involving the discovery of the hatching of a fruit fly (Drosophila melanogaster) with white eyes from a family with red eyes, demonstrating that major genetic changes occurred outside of inheritance. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary because these traits were subjective.
Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits; an example is phenylketonuria, a human disease that affects multiple systems but is caused by a single gene defect. Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause a harmful loss of genetic diversity if a eugenics program selects against a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence, since the two traits tend to go together.
While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, at this point there is no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual, so eliminating these genes is undesirable in places where such diseases are common.
Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. This aspect of eugenics is often considered to be tainted with scientific racism and pseudoscience.
Contested ethical status
Contemporary ethical opposition
In a book directly addressed to the socialist eugenicist J. B. S. Haldane and his once-influential Daedalus, Bertrand Russell had one serious objection of his own: eugenic policies might simply end up being used to reproduce existing power relations "rather than to make men happy."
Environmental ethicist Bill McKibben argued against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, he argues, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using Ming China, Tokugawa Japan and the contemporary Amish as examples.
Contemporary ethical advocacy
Bioethicist Stephen Wilkinson has said that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral.
Historian Nathaniel C. Comfort has claimed that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making process from the state to patients and their families.
In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.
In science fiction
The novel Brave New World by the English author Aldous Huxley (1931), is a dystopian social science fiction novel which is set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy.
Various works by the author Robert A. Heinlein mention the Howard Foundation, a group which attempts to improve human longevity through selective breeding.
Among Frank Herbert's other works, the Dune series, starting with the eponymous 1965 novel, describes selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach.
The Star Trek franchise features a race of genetically engineered humans which is known as "Augments", the most notable of them is Khan Noonien Singh. These "supermen" were the cause of the Eugenics Wars, a dark period in Earth's fictional history, before they were deposed and exiled. They appear in many of the franchise's story arcs, most frequently, they appear as villains.
The film Gattaca (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. The title alludes to the letters G, A, T and C, the four nucleobases of DNA, and depicts the possible consequences of genetic discrimination in the present societal framework. Relegated to the role of a cleaner owing to his genetically projected death at age 32 due to a heart condition (being told: "The only way you'll see the inside of a spaceship is if you were cleaning it"), the protagonist observes enhanced astronauts as they demonstrate their superhuman athleticism. Although it was not a box office success, it was critically acclaimed and influenced the debate over human genetic engineering in the public consciousness. As to its accuracy, its production company, Sony Pictures, consulted with W. French Anderson, a gene therapy researcher and prominent critic of eugenics known to have stated that "[w]e should not step over the line that delineates treatment from enhancement", to ensure that the portrayal of science was realistic. Disputing their success in this mission, Philip Yam of Scientific American called the film "science bashing" and Nature's Kevin Davies called it a "surprisingly pedestrian affair", while molecular biologist Lee Silver described its extreme determinism as "a straw man".
In his 2018 book Blueprint, the behavioral geneticist Robert Plomin writes that while Gattaca warned of the dangers of genetic information being used by a totalitarian state, genetic testing could also favor better meritocracy in democratic societies which already administer a variety of standardized tests to select people for education and employment. He suggests that polygenic scores might supplement testing in a manner that is essentially free of biases.
| Biology and health sciences | Genetics | Biology |
9738 | https://en.wikipedia.org/wiki/Email | Email | Electronic mail (usually shortened to email; alternatively hyphenated e-mail) is a method of transmitting and receiving digital messages using electronic devices over a computer network. It was conceived in the late–20th century as the digital version of, or counterpart to, mail (hence e- + mail). Email is a ubiquitous and very widely used communication medium; in current use, an email address is often treated as a basic and necessary part of many processes in business, commerce, government, education, entertainment, and other spheres of daily life in most countries.
Email operates across computer networks, primarily the Internet, and also local area networks. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect, typically to a mail server or a webmail interface, to send or receive messages or download them.
Originally a text-only ASCII communications medium, Internet email was extended by MIME to carry text in expanded character sets and multimedia content such as images. International email, with internationalized email addresses using UTF-8, is standardized but not widely adopted.
Terminology
The term electronic mail has been in use with its modern meaning since 1975, and variations of the shorter E-mail have been in use since 1979:
email is now the common form, and recommended by style guides. It is the form required by IETF Requests for Comments (RFC) and working groups. This spelling also appears in most dictionaries.
e-mail was originally the form favored in edited published American English and British English writing, and was formerly preferred by some style guides.
E-mail is sometimes used. The original usage in June 1979 occurred in the journal Electronics in reference to the United States Postal Service initiative called E-COM, which was developed in the late 1970s and operated in the early 1980s.
EMAIL was used by CompuServe starting in April 1981, which popularized the term.
EMail is a traditional form used in RFCs for the "Author's Address".
The service is often simply referred to as mail, and a single piece of electronic mail is called a message. The conventions for fields within emails—the "To", "From", "CC", "BCC" etc.—began with RFC-680 in 1975.
An Internet email consists of an envelope and content; the content consists of a header and a body.
History
Computer-based messaging between users of the same system became possible after the advent of time-sharing in the early 1960s, with a notable implementation by MIT's CTSS project in 1965. Most developers of early mainframes and minicomputers developed similar, but generally incompatible, mail applications. In 1971 the first ARPANET network mail was sent, introducing the now-familiar address syntax with the '@' symbol designating the user's system address. Over a series of RFCs, conventions were refined for sending mail messages over the File Transfer Protocol.
Proprietary electronic mail systems soon began to emerge. IBM, CompuServe and Xerox used in-house mail systems in the 1970s; CompuServe sold a commercial intraoffice mail product in 1978 to IBM and to Xerox from 1981. DEC's ALL-IN-1 and Hewlett-Packard's HPMAIL (later HP DeskManager) were released in 1982; development work on the former began in the late 1970s and the latter became the world's largest selling email system.
The Simple Mail Transfer Protocol (SMTP) was implemented on the ARPANET in 1983. LAN email systems emerged in the mid-1980s. For a time in the late 1980s and early 1990s, it seemed likely that either a proprietary commercial system or the X.400 email system, part of the Government Open Systems Interconnection Profile (GOSIP), would predominate. However, once the final restrictions on carrying commercial traffic over the Internet ended in 1995, a combination of factors made the current Internet suite of SMTP, POP3 and IMAP email protocols the standard (see Protocol Wars).
Operation
The following is a typical sequence of events that takes place when sender Alice transmits a message using a mail user agent (MUA) addressed to the email address of the recipient (a minimal code sketch of these steps follows the list).
The MUA formats the message in email format and uses the submission protocol, a profile of the Simple Mail Transfer Protocol (SMTP), to send the message content to the local mail submission agent (MSA), in this case smtp.a.org.
The MSA determines the destination address provided in the SMTP protocol (not from the message header)—in this case, bob@b.org—which is a fully qualified domain address (FQDA). The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name. The MSA resolves a domain name to determine the fully qualified domain name of the mail server in the Domain Name System (DNS).
The DNS server for the domain b.org (ns.b.org) responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by the recipient's ISP.
smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA).
The MDA delivers it to the mailbox of user bob.
Bob's MUA picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP).
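As a minimal sketch of steps 2–4 of this sequence in Python (the host and mailbox names are the illustrative ones from the example above; the MX lookup assumes the third-party dnspython package; and a real MUA would normally submit to its own provider's MSA rather than contacting the recipient's MX directly):

```python
import smtplib
from email.message import EmailMessage

import dns.resolver  # third-party: pip install dnspython

# Compose the message (the MUA's job).
msg = EmailMessage()
msg["From"] = "alice@a.org"
msg["To"] = "bob@b.org"
msg["Subject"] = "Hello"
msg.set_content("Hi Bob")

# Resolve the recipient domain's MX records in DNS and pick the
# most-preferred exchange (lowest preference value), e.g. mx.b.org.
answers = dns.resolver.resolve("b.org", "MX")
mx_host = str(min(answers, key=lambda r: r.preference).exchange).rstrip(".")

# Hand the message to that server over SMTP for onward delivery.
with smtplib.SMTP(mx_host) as smtp:
    smtp.send_message(msg)
```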
In addition to this example, alternatives and complications exist in the email system:
Alice or Bob may use a client connected to a corporate email system, such as IBM Lotus | Technology | Networks | null |
9758 | https://en.wikipedia.org/wiki/Era | Era | An era is a span of time defined for the purposes of chronology or historiography, as in the regnal eras in the history of a given monarchy, a calendar era used for a given calendar, or the geological eras defined for the history of Earth.
Comparable terms are Epoch, age, period, saeculum, aeon (Greek aion) and Sanskrit yuga.
Etymology
The word has been in use in English since 1615, and is derived from Late Latin aera "an era or epoch from which time is reckoned," probably identical to Latin æra "counters used for calculation," plural of æs "brass, money".
The Latin word's use in chronology seems to have begun in 5th-century Visigothic Spain, where it appears in the History of Isidore of Seville, and in later texts. The Spanish era is calculated from 38 BC, perhaps because of a tax (cf. indiction) levied in that year, or due to a miscalculation of the Battle of Actium, which occurred in 31 BC.
Like epoch, "era" in English originally meant "the starting point of an age"; the meaning "system of chronological notation" is c. 1646; that of "historical period" is 1741.
Use in chronology
In chronology, an "era" is the highest level for the organization of the measurement of time. A "calendar era" indicates a span of many years which are numbered beginning at a specific reference date (epoch), which often marks the origin of a political state or cosmology, dynasty, ruler, the birth of a leader, or another significant historical or mythological event; it is generally called after its focus accordingly as in "Victorian era".
Geological era
In large-scale natural science, there is need for another time perspective, independent from human activity, and indeed spanning a far longer period (mainly prehistoric), where "geologic era" refers to well-defined time spans.
The next-larger division of geologic time is the eon. The Phanerozoic Eon, for example, is subdivided into eras. There are currently three eras defined in the Phanerozoic; from youngest to oldest they are the Cenozoic (66 million years BP to the present), the Mesozoic (252 to 66 million years BP), and the Paleozoic (541 to 252 million years BP), where BP is an abbreviation for "before present".
The older Proterozoic and Archean eons are also divided into eras.
Cosmological era
For periods in the history of the universe, the term "epoch" is typically preferred, but "era" is used e.g. of the "Stelliferous Era".
Calendar eras
Calendar eras count the years since a particular date (epoch), often one with religious significance. Anno mundi (year of the world) refers to a group of calendar eras based on a calculation of the age of the world, assuming it was created as described in the Book of Genesis. In Jewish religious contexts one of the versions is still used, and many Eastern Orthodox religious calendars used another version until 1728. Hebrew year 5772 AM began at sunset on 28 September 2011 and ended on 16 September 2012. In the Western church, Anno Domini (AD, equivalent to the secular Common Era, CE), counting the years since the birth of Jesus on traditional calculations, was always dominant.
The Islamic calendar, which also has variants, counts years from the Hijra, the emigration of the Islamic prophet Muhammad from Mecca to Medina, which occurred in 622 AD. The Islamic lunar year is about eleven days shorter than the solar year; January 2012 fell in 1433 AH ("After Hijra").
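A rough conversion between the two eras (a common rule of thumb, not stated in the article) follows from the fact that 33 lunar years span almost exactly 32 solar years:
$$\text{AH} \approx (\text{AD} - 622) \times \frac{33}{32}, \qquad (2012 - 622) \times \frac{33}{32} \approx 1433,$$
consistent with the 1433 AH figure given above.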
For a time ranging from 1872 to the Second World War, the Japanese used the imperial year system (kōki), counting from the year when the legendary Emperor Jimmu founded Japan, which occurred in 660 BC.
Many Buddhist calendars count from the death of the Buddha, which according to the most commonly used calculations was in 545–543 BCE or 483 BCE. Dates are given as "BE" for "Buddhist Era"; 2000 AD was 2543 BE in the Thai solar calendar.
Other calendar eras of the past counted from political events, such as the Seleucid era and the Ancient Roman ab urbe condita ("AUC"), counting from the foundation of the city.
Regnal eras
The word era also denotes the units used under a different, more arbitrary system in which time is not represented as an endless continuum with a single reference year, but each unit starts counting from one again, as if time begins anew. The use of regnal years is a rather impractical system and a challenge for historians if a single piece of the historical chronology is missing; it often reflects the preponderance in public life of an absolute ruler in many ancient cultures. Such traditions sometimes outlive the political power of the throne, and may even be based on mythological events or rulers who may not have existed (for example, Rome numbering from the rule of Romulus and Remus). In a manner of speaking, the use of the supposed date of the birth of Christ as a base year is a form of an era.
In East Asia, each emperor's reign may be subdivided into several reign periods, each being treated as a new era. The name of each was a motto or slogan chosen by the emperor. Different East Asian countries utilized slightly different systems, notably:
Chinese eras
Japanese era
Korean eras
Vietnamese eras
A similar practice survived in the United Kingdom until quite recently, but only for formal official writings: in daily life the ordinary year A.D. has been used for a long time, but Acts of Parliament were dated according to the years of the reign of the current monarch, so that "61 & 62 Vict c. 37" refers to the Local Government (Ireland) Act 1898 passed in the session of Parliament in the 61st/62nd year of the reign of Queen Victoria.
Historiography
"Era" can be used to refer to well-defined periods in historiography, such as the Roman era, Elizabethan era, Victorian era, etc.
Use of the term for more recent periods or topical history might include Soviet era, and "musical eras" in the history of modern popular music, such as the "big band era", "disco era", etc.
| Physical sciences | Time | Basics and measurement |
9763 | https://en.wikipedia.org/wiki/Exoplanet | Exoplanet | An exoplanet or extrasolar planet is a planet outside the Solar System. The first possible evidence of an exoplanet was noted in 1917 but was not then recognized as such. The first confirmed detection of an exoplanet was in 1992 around a pulsar, and the first detection around a main-sequence star was in 1995. A different planet, first detected in 1988, was confirmed in 2003. In collaboration with ground-based and other space-based observatories the James Webb Space Telescope (JWST) is expected to give more insight into exoplanet traits, such as their composition, environmental conditions, and potential for life.
There are many methods of detecting exoplanets. Transit photometry and Doppler spectroscopy have found the most, but these methods suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone. In several cases, multiple planets have been observed around a star. About 1 in 5 Sun-like stars are estimated to have an "Earth-sized" planet in the habitable zone. Assuming there are 200 billion stars in the Milky Way, it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if planets orbiting the numerous red dwarfs are included.
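As a rough illustration of why the transit method favors large, close-in planets (a standard back-of-the-envelope estimate, not a figure from the article): the fractional dimming during a transit is approximately the squared ratio of planetary to stellar radius,
$$\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^{2} \approx \left(\frac{6371\ \mathrm{km}}{695{,}700\ \mathrm{km}}\right)^{2} \approx 8 \times 10^{-5}$$
for an Earth-sized planet crossing a Sun-like star, a dip of less than 0.01% that sits near the photometric precision limit of space telescopes.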
The least massive exoplanet known is Draugr (also known as PSR B1257+12 A or PSR B1257+12 b), which is about twice the mass of the Moon. The most massive exoplanet listed on the NASA Exoplanet Archive is HR 2562 b, about 30 times the mass of Jupiter. However, according to some definitions of a planet (based on the nuclear fusion of deuterium), it is too massive to be a planet and might be a brown dwarf. Known orbital times for exoplanets vary from less than an hour (for those closest to their star) to thousands of years. Some exoplanets are so far away from the star that it is difficult to tell whether they are gravitationally bound to it.
Almost all planets detected so far are within the Milky Way. However, there is evidence that extragalactic planets, exoplanets located in other galaxies, may exist. The nearest exoplanets are located 4.2 light-years (1.3 parsecs) from Earth and orbit Proxima Centauri, the closest star to the Sun.
The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone (sometimes called "goldilocks zone"), where it is possible for liquid water, a prerequisite for life as we know it, to exist on the surface. However, the study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.
Rogue planets are those that do not orbit any higher mass host. Such objects are considered a separate category of planetary-mass objects, especially if they are gas giants, often counted as sub-brown dwarfs. The rogue planets in the Milky Way possibly number in the billions or more.
Definition
IAU
The official definition of the term planet used by the International Astronomical Union (IAU) only covers the Solar System and thus does not apply to exoplanets. The IAU Working Group on Extrasolar Planets issued a position statement containing a working definition of "planet" in 2001 and which was modified in 2003. An exoplanet was defined by the following criteria:
This working definition was amended by the IAU's Commission F2: Exoplanets and the Solar System in August 2018. The official working definition of an exoplanet is now as follows:
Alternatives
The IAU's working definition is not always used. One alternative suggestion is that planets should be distinguished from brown dwarfs on the basis of their formation. It is widely thought that giant planets form through core accretion, which may sometimes produce planets with masses above the deuterium fusion threshold; massive planets of that sort may have already been observed. Brown dwarfs form like stars from the direct gravitational collapse of clouds of gas, and this formation mechanism also produces objects that are below the 13-Jupiter-mass limit and can be as low as about one Jupiter mass. Objects in this mass range that orbit their stars with wide separations of hundreds or thousands of astronomical units (AU) and have large star/object mass ratios likely formed as brown dwarfs; their atmospheres would likely have a composition more similar to their host star than accretion-formed planets, which would contain increased abundances of heavier elements. Most directly imaged planets as of April 2014 were massive and had wide orbits, so they probably represent the low-mass end of brown dwarf formation. One study suggests that objects above about 10 Jupiter masses formed through gravitational instability and should not be thought of as planets.
Also, the 13-Jupiter-mass cutoff does not have a precise physical significance. Deuterium fusion can occur in some objects with a mass below that cutoff. The amount of deuterium fused depends to some extent on the composition of the object. As of 2011, the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around 13 Jupiter masses in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016, this limit was increased to 60 Jupiter masses based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses. Another criterion for separating planets and brown dwarfs, rather than deuterium fusion, formation process or location, is whether the core pressure is dominated by Coulomb pressure or electron degeneracy pressure, with the dividing line at around 5 Jupiter masses.
Nomenclature
The convention for naming exoplanets is an extension of the system used for designating multiple-star systems as adopted by the International Astronomical Union (IAU). For exoplanets orbiting a single star, the IAU designation is formed by taking the designated or proper name of its parent star, and adding a lower case letter. Letters are given in order of each planet's discovery around the parent star, so that the first planet discovered in a system is designated "b" (the parent star is considered "a") and later planets are given subsequent letters. If several planets in the same system are discovered at the same time, the closest one to the star gets the next letter, followed by the other planets in order of orbital size. A provisional IAU-sanctioned standard exists to accommodate the designation of circumbinary planets. A limited number of exoplanets have IAU-sanctioned proper names. Other naming systems exist.
History of detection
For centuries scientists, philosophers, and science fiction writers suspected that extrasolar planets existed, but there was no way of knowing whether they were real, how common they were, or how similar they might be to the planets of the Solar System. Various detection claims made in the nineteenth century were rejected by astronomers.
The first evidence of a possible exoplanet, orbiting Van Maanen 2, was noted in 1917, but was not recognized as such. The astronomer Walter Sydney Adams, who later became director of the Mount Wilson Observatory, produced a spectrum of the star using Mount Wilson's 60-inch telescope. He interpreted the spectrum to be of an F-type main-sequence star, but it is now thought that such a spectrum could be caused by the residue of a nearby exoplanet that had been pulverized by the gravity of the star, the resulting dust then falling onto the star.
The first suspected scientific detection of an exoplanet occurred in 1988. Shortly afterwards, the first confirmation of detection came in 1992 when Aleksander Wolszczan announced the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12. The first confirmation of an exoplanet orbiting a main-sequence star was made in 1995, when a giant planet was found in a four-day orbit around the nearby star 51 Pegasi. Some exoplanets have been imaged directly by telescopes, but the vast majority have been detected through indirect methods, such as the transit method and the radial-velocity method. In February 2018, researchers using the Chandra X-ray Observatory, combined with a planet detection technique called microlensing, found evidence of planets in a distant galaxy, stating, "Some of these exoplanets are as (relatively) small as the moon, while others are as massive as Jupiter. Unlike Earth, most of the exoplanets are not tightly bound to stars, so they're actually wandering through space or loosely orbiting between stars. We can estimate that the number of planets in this [faraway] galaxy is more than a trillion."
Early speculations
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that fixed stars are similar to the Sun and are likewise accompanied by planets.
In the eighteenth century, the same possibility was mentioned by Isaac Newton in the "General Scholium" that concludes his Principia. Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One."
In 1938, D. Belorizky demonstrated that it was realistic to search for exo-Jupiters by using transit photometry.
In 1952, more than 40 years before the first hot Jupiter was discovered, Otto Struve wrote that there is no compelling reason that planets could not be much closer to their parent star than is the case in the Solar System, and proposed that Doppler spectroscopy and the transit method could detect super-Jupiters in short orbits.
Discredited claims
Claims of exoplanet detections have been made since the nineteenth century. Some of the earliest involve the binary star 70 Ophiuchi. In 1855, William Stephen Jacob at the East India Company's Madras Observatory reported that orbital anomalies made it "highly probable" that there was a "planetary body" in this system. In the 1890s, Thomas J. J. See of the University of Chicago and the United States Naval Observatory stated that the orbital anomalies proved the existence of a dark body in the 70 Ophiuchi system with a 36-year period around one of the stars. However, Forest Ray Moulton published a paper proving that a three-body system with those orbital parameters would be highly unstable.
During the 1950s and 1960s, Peter van de Kamp of Swarthmore College made another prominent series of detection claims, this time for planets orbiting Barnard's Star. Astronomers now generally regard all early reports of detection as erroneous.
In 1991, Andrew Lyne, M. Bailes and S. L. Shemar claimed to have discovered a pulsar planet in orbit around PSR 1829-10, using pulsar timing variations. The claim briefly received intense attention, but Lyne and his team soon retracted it.
Confirmed discoveries
As of , a total of confirmed exoplanets are listed in the NASA Exoplanet Archive, including a few that were confirmations of controversial claims from the late 1980s. The first published discovery to receive subsequent confirmation was made in 1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and Stephenson Yang of the University of Victoria and the University of British Columbia. Although they were cautious about claiming a planetary detection, their radial-velocity observations suggested that a planet orbits the star Gamma Cephei. Partly because the observations were at the very limits of instrumental capabilities at the time, astronomers remained skeptical for several years about this and other similar observations. It was thought some of the apparent planets might instead have been brown dwarfs, objects intermediate in mass between planets and stars. In 1990, additional observations were published that supported the existence of the planet orbiting Gamma Cephei, but subsequent work in 1992 again raised serious doubts. Finally, in 2003, improved techniques allowed the planet's existence to be confirmed.
On 9 January 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Follow-up observations solidified these results, and confirmation of a third planet in 1994 revived the topic in the popular press. These pulsar planets are thought to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of gas giants that somehow survived the supernova and then decayed into their current orbits. As pulsars are hostile environments, it was considered unlikely at the time that planets could form in orbit around them.
In the early 1990s, a group of astronomers led by Donald Backer, who were studying what they thought was a binary pulsar (PSR B1620−26 b), determined that a third object was needed to explain the observed Doppler shifts. Within a few years, the gravitational effects of the planet on the orbit of the pulsar and white dwarf had been measured, giving an estimate of the mass of the third object that was too small for it to be a star. The conclusion that the third object was a planet was announced by Stephen Thorsett and his collaborators in 1993.
On 6 October 1995, Michel Mayor and Didier Queloz of the University of Geneva announced the first definitive detection of an exoplanet orbiting a main-sequence star, nearby G-type star 51 Pegasi. This discovery, made at the Observatoire de Haute-Provence, ushered in the modern era of exoplanetary discovery, and was recognized by a share of the 2019 Nobel Prize in Physics. Technological advances, most notably in high-resolution spectroscopy, led to the rapid detection of many new exoplanets: astronomers could detect exoplanets indirectly by measuring their gravitational influence on the motion of their host stars. More extrasolar planets were later detected by observing the variation in a star's apparent luminosity as an orbiting planet transited in front of it.
Initially, most of the known exoplanets were massive planets that orbited very close to their parent stars. Astronomers were surprised by these "hot Jupiters", because theories of planetary formation had indicated that giant planets should only form at large distances from stars. But eventually more planets of other sorts were found, and it is now clear that hot Jupiters make up only a minority of exoplanets. In 1999, Upsilon Andromedae became the first main-sequence star known to have multiple planets. Kepler-16 contains the first discovered planet that orbits a binary main-sequence star system.
On 26 February 2014, NASA announced the discovery of 715 newly verified exoplanets around 305 stars by the Kepler Space Telescope. These exoplanets were checked using a statistical technique called "verification by multiplicity". Before these results, most confirmed planets were gas giants comparable in size to Jupiter or larger because they were more easily detected, but the Kepler planets are mostly between the size of Neptune and the size of Earth.
On 23 July 2015, NASA announced Kepler-452b, a near-Earth-size planet orbiting the habitable zone of a G2-type star.
On 6 September 2018, NASA announced the discovery of an exoplanet about 145 light-years from Earth in the constellation Virgo. This exoplanet, Wolf 503b, is twice the size of Earth and orbits a type of star known as an orange dwarf. Wolf 503b completes one orbit in only six days because it is very close to the star. Wolf 503b is the only exoplanet of that size known near the so-called small planet radius gap. The gap, sometimes called the Fulton gap, is the observation that it is unusual to find exoplanets with sizes between 1.5 and 2 times the radius of the Earth.
In January 2020, scientists announced the discovery of TOI 700 d, the first Earth-sized planet in the habitable zone detected by TESS.
Candidate discoveries
As of January 2020, NASA's Kepler and TESS missions had identified 4374 planetary candidates yet to be confirmed, several of them being nearly Earth-sized and located in the habitable zone, some around Sun-like stars.
In September 2020, astronomers reported evidence, for the first time, of an extragalactic planet, M51-ULS-1b, detected by eclipsing a bright X-ray source (XRS), in the Whirlpool Galaxy (M51a).
Also in September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet, unbound to any star and free-floating in the Milky Way galaxy.
Detection methods
Direct imaging
Planets are extremely faint compared to their parent stars. For example, a Sun-like star is about a billion times brighter than the reflected light from any exoplanet orbiting it. It is difficult to detect such a faint light source, and furthermore, the parent star causes a glare that tends to wash it out. It is necessary to block the light from the parent star to reduce the glare while leaving the light from the planet detectable; doing so is a major technical challenge which requires extreme optothermal stability. All exoplanets that have been directly imaged are both large (more massive than Jupiter) and widely separated from their parent stars.
Specially designed direct-imaging instruments such as Gemini Planet Imager, VLT-SPHERE, and SCExAO will image dozens of gas giants, but the vast majority of known extrasolar planets have only been detected through indirect methods.
Indirect methods
Transit method
If a planet crosses (or transits) in front of its parent star's disk, then the observed brightness of the star drops by a small amount. The amount by which the star dims depends on its size and on the size of the planet, among other factors. Because the transit method requires that the planet's orbit intersect a line-of-sight between the host star and Earth, the probability that an exoplanet in a randomly oriented orbit will be observed to transit the star is somewhat small. The Kepler telescope used this method.
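The geometry is simple: the fractional dip in flux is set by the ratio of the sky-projected disk areas of planet and star. A minimal sketch in Python; the radius values are standard physical constants rather than figures from this article:

```python
# Illustrative sketch: the fractional dip in starlight during a transit is
# roughly the ratio of the disk areas, (R_planet / R_star)^2.
R_SUN = 6.957e8      # stellar radius in metres (Sun)
R_JUPITER = 7.149e7  # planetary radius in metres (Jupiter)
R_EARTH = 6.371e6    # planetary radius in metres (Earth)

def transit_depth(r_planet, r_star):
    """Fractional drop in observed stellar flux for a central transit."""
    return (r_planet / r_star) ** 2

print(f"Jupiter-like transit depth: {transit_depth(R_JUPITER, R_SUN):.4%}")  # ~1%
print(f"Earth-like transit depth:   {transit_depth(R_EARTH, R_SUN):.4%}")   # ~0.008%
```

The two orders of magnitude between the Jupiter and Earth cases is why space-based photometry such as Kepler's was needed to find small transiting planets.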
Radial velocity or Doppler method
As a planet orbits a star, the star also moves in its own small orbit around the system's center of mass. Variations in the star's radial velocity—that is, the speed with which it moves towards or away from Earth—can be detected from displacements in the star's spectral lines due to the Doppler effect. Extremely small radial-velocity variations can be observed, of 1 m/s or even somewhat less.
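To illustrate the scale involved, the textbook semi-amplitude expression for a circular orbit can be evaluated for a Jupiter analogue. This is a hedged sketch using standard constants, not a description of any particular survey's pipeline:

```python
import math

# Radial-velocity semi-amplitude K for a circular orbit, from Kepler's third law.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
M_JUP = 1.898e27  # kg

def rv_semi_amplitude(m_planet, m_star, period_s, inclination_deg=90.0):
    """K in m/s for a circular (eccentricity 0) orbit."""
    sin_i = math.sin(math.radians(inclination_deg))
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i / (m_star + m_planet) ** (2 / 3))

# Jupiter orbiting the Sun (P ~ 11.86 yr) induces a stellar wobble of ~12.5 m/s,
# comfortably above the ~1 m/s precision quoted in the text.
period = 11.86 * 365.25 * 86400
print(f"K for a Jupiter analogue: {rv_semi_amplitude(M_JUP, M_SUN, period):.1f} m/s")
```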
Transit timing variation (TTV)
When multiple planets are present, each one slightly perturbs the others' orbits. Small variations in the times of transit for one planet can thus indicate the presence of another planet, which itself may or may not transit. For example, variations in the transits of the planet Kepler-19b suggest the existence of a second planet in the system, the non-transiting Kepler-19c.
Transit duration variation (TDV)
When a planet orbits multiple stars, or if the planet has moons, its transit duration can vary significantly from transit to transit. Although no new planets have been discovered with this method, it has been used successfully to confirm many transiting circumbinary planets.
Gravitational microlensing
Microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. Planets orbiting the lensing star can cause detectable anomalies in the magnification as it varies over time. Unlike most other methods, which have a detection bias towards planets with small (or, for resolved imaging, large) orbits, the microlensing method is most sensitive to planets around 1–10 AU away from Sun-like stars.
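The 1–10 AU sensitivity follows from the size of the Einstein ring, inside which lensing is strongest. A hedged back-of-the-envelope sketch; the lens mass and distances are typical illustrative assumptions, not values from this article:

```python
import math

# Physical Einstein-ring radius projected at the lens distance; planets near
# this separation perturb the magnification most strongly.
G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30
KPC = 3.086e19   # metres per kiloparsec
AU = 1.496e11    # metres per astronomical unit

def einstein_radius(m_lens, d_lens, d_source):
    """Physical Einstein radius (metres) at the lens distance."""
    return math.sqrt(4 * G * m_lens / C**2
                     * d_lens * (d_source - d_lens) / d_source)

# A 0.3 solar-mass lens halfway to a Galactic-bulge source at 8 kpc:
r_e = einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC)
print(f"Einstein radius ~ {r_e / AU:.1f} AU")  # ~2 AU, inside the 1-10 AU range
```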
Astrometry
Astrometry consists of precisely measuring a star's position in the sky and observing the changes in that position over time. The motion of a star due to the gravitational influence of a planet may be observable. Because the motion is so small, however, this method was not very productive until the 2020s. It has produced only a few confirmed discoveries, though it has been successfully used to investigate the properties of planets found in other ways.
Pulsar timing
A pulsar, a small, dense remnant of a star that has exploded as a supernova, emits radio waves regularly as it rotates. If planets orbit the pulsar, the motion of the pulsar around the system's center of mass alters the pulsar's distance to Earth over time. As a result, the radio pulses from the pulsar arrive on Earth at a later or earlier time. This light-travel delay due to the pulsar being physically closer or farther from Earth is known as a Roemer time delay. The first confirmed discovery of an extrasolar planet was made using this method. But as of 2011 it had not been very productive; only five planets had been detected in this way, around three different pulsars.
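To see why pulsar timing is so sensitive, one can estimate the Roemer delay a small planet induces. The planet parameters below are approximate literature values for one of the PSR B1257+12 planets and should be treated as illustrative:

```python
# The Roemer delay amplitude is the light-travel time across the pulsar's own
# reflex orbit about the system barycentre.
AU = 1.496e11              # metres
C = 2.998e8                # m/s
M_EARTH = 5.972e24         # kg
M_PULSAR = 1.4 * 1.989e30  # a typical neutron-star mass, kg

m_planet = 4.3 * M_EARTH   # ~4 Earth masses (approximate literature value)
a_planet = 0.36 * AU       # orbital radius (approximate literature value)

# The pulsar's reflex orbit is smaller than the planet's by the mass ratio.
a_pulsar = a_planet * m_planet / (M_PULSAR + m_planet)
delay = a_pulsar / C
print(f"Timing-residual amplitude ~ {delay * 1e3:.1f} ms")  # ~1.7 ms
```

A millisecond-scale residual is enormous compared with the microsecond-level timing precision achievable on millisecond pulsars, which is why even Earth-mass planets were detectable this way.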
Variable star timing (pulsation frequency)
Like pulsars, some other types of stars exhibit periodic activity. Deviations from periodicity can sometimes be caused by a planet orbiting the star. As of 2013, a few planets had been discovered with this method.
Reflection/emission modulations
When a planet orbits very close to a star, it reflects a considerable amount of starlight. As the planet orbits the star, the amount of light observed from it changes, both because the planet shows phases from Earth's viewpoint and because one side of the planet glows more than the other due to temperature differences.
Relativistic beaming
The relativistic beaming method measures small changes in the star's observed flux caused by its motion: the star's apparent brightness changes as the orbiting planet pulls it towards or away from Earth.
Ellipsoidal variations
Massive planets close to their host stars can slightly deform the shape of the star. This causes the star's brightness to deviate slightly depending on how the deformed star is oriented relative to Earth.
Polarimetry
With the polarimetry method, polarized light reflected off the planet is separated from the unpolarized light emitted by the star. No new planets have been discovered with this method, although a few already-discovered planets have also been detected with it.
Circumstellar disks
Disks of space dust surround many stars, thought to originate from collisions among asteroids and comets. The dust can be detected because it absorbs starlight and re-emits it as infrared radiation. Features on the disks may suggest the presence of planets, though this is not considered a definitive detection method.
Formation and evolution
Planets may form within a few to tens (or more) of millions of years of their star forming.
The planets of the Solar System can only be observed in their current state, but observations of different planetary systems of varying ages allow us to observe planets at different stages of evolution. Available observations range from young proto-planetary disks where planets are still forming to planetary systems more than 10 Gyr old. When planets form in a gaseous protoplanetary disk, they accrete hydrogen/helium envelopes. These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen/helium is eventually lost to space. This means that even terrestrial planets may start off with large radii if they form early enough. An example is Kepler-51b, which has only about twice the mass of Earth but is almost the size of Saturn, a planet a hundred times the mass of Earth. Kepler-51b is quite young at a few hundred million years old.
Planet-hosting stars
There is at least one planet on average per star.
About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone.
Most known exoplanets orbit stars roughly similar to the Sun, i.e. main-sequence stars of spectral categories F, G, or K. Lower-mass stars (red dwarfs, of spectral category M) are less likely to have planets massive enough to be detected by the radial-velocity method. Despite this, several tens of planets around red dwarfs have been discovered by the Kepler space telescope, which uses the transit method to detect smaller planets.
Using data from Kepler, a correlation has been found between the metallicity of a star and the probability that the star hosts a giant planet, similar to the size of Jupiter. Stars with higher metallicity are more likely to have planets, especially giant planets, than stars with lower metallicity.
Some planets orbit one member of a binary star system, and several circumbinary planets have been discovered which orbit both members of a binary star. A few planets in triple star systems are known and one in the quadruple system Kepler-64.
Orbital and physical parameters
General features
Color and brightness
In 2013, the color of an exoplanet was determined for the first time. The best-fit albedo measurements of HD 189733b suggest that it is a deep, dark blue. Later that same year, the colors of several other exoplanets were determined, including GJ 504 b, which visually has a magenta color, and Kappa Andromedae b, which would appear reddish if seen up close. Helium planets are expected to be white or grey in appearance.
The apparent brightness (apparent magnitude) of a planet depends on how far away the observer is, how reflective the planet is (albedo), and how much light the planet receives from its star, which depends on how far the planet is from the star and how bright the star is. So, a planet with a low albedo that is close to its star can appear brighter than a planet with a high albedo that is far from the star.
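A hedged numerical sketch of this trade-off, using the common full-phase approximation that the planet-to-star flux ratio is the geometric albedo times the square of (planet radius / orbital distance); all specific values below are illustrative assumptions:

```python
# Reflected-light contrast between planet and star at full phase.
AU = 1.496e11    # metres
R_EARTH = 6.371e6
R_JUP = 7.149e7

def reflected_contrast(albedo, r_planet, orbital_distance):
    """Approximate planet/star flux ratio at full phase."""
    return albedo * (r_planet / orbital_distance) ** 2

# A dark hot Jupiter at 0.05 AU outshines a bright Earth analogue at 1 AU:
print(f"Hot Jupiter (albedo 0.1):    {reflected_contrast(0.1, R_JUP, 0.05 * AU):.1e}")   # ~9e-6
print(f"Earth analogue (albedo 0.3): {reflected_contrast(0.3, R_EARTH, 1.0 * AU):.1e}")  # ~5e-10
```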
The darkest known planet in terms of geometric albedo is TrES-2b, a hot Jupiter that reflects less than 1% of the light from its star, making it less reflective than coal or black acrylic paint. Hot Jupiters are expected to be quite dark due to sodium and potassium in their atmospheres, but it is not known why TrES-2b is so dark—it could be due to an unknown chemical compound.
For gas giants, geometric albedo generally decreases with increasing metallicity or atmospheric temperature unless there are clouds to modify this effect. Increased cloud-column depth increases the albedo at optical wavelengths, but decreases it at some infrared wavelengths. Optical albedo increases with age, because older planets have higher cloud-column depths. Optical albedo decreases with increasing mass, because higher-mass giant planets have higher surface gravities, which produces lower cloud-column depths. Also, elliptical orbits can cause major fluctuations in atmospheric composition, which can have a significant effect.
There is more thermal emission than reflection at some near-infrared wavelengths for massive and/or young gas giants. So, although optical brightness is fully phase-dependent, this is not always the case in the near infrared.
Temperatures of gas giants decrease over time and with distance from their stars. Lowering the temperature increases the optical albedo even without clouds. At a sufficiently low temperature, water clouds form, which further increase the optical albedo. At even lower temperatures, ammonia clouds form, resulting in the highest albedos at most optical and near-infrared wavelengths.
Magnetic field
In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It is the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one-tenth as strong as Jupiter's.
The magnetic fields of exoplanets are thought to be detectable by their auroral radio emissions with sensitive low-frequency radio telescopes such as LOFAR, although they have yet to be found. The radio emissions could measure the rotation rate of the interior of an exoplanet, and may yield a more accurate way to measure exoplanet rotation than by examining the motion of clouds. However, the most sensitive radio search to date for auroral emissions, covering nine exoplanets with the Arecibo telescope, did not result in any detections.
Earth's magnetic field results from its flowing liquid metallic core, but on massive super-Earths with high pressure, different compounds may form which do not match those created under terrestrial conditions. Compounds may form with greater viscosities and high melting temperatures, which could prevent the interiors from separating into different layers and so result in undifferentiated coreless mantles. Forms of magnesium oxide such as MgSi3O12 could be a liquid metal at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths.
Hot Jupiters have been observed to have a larger radius than expected. This could be caused by the interaction between the stellar wind and the planet's magnetosphere creating an electric current through the planet that heats it up (Joule heating) causing it to expand. The more magnetically active a star is, the greater the stellar wind and the larger the electric current leading to more heating and expansion of the planet. This theory matches the observation that stellar activity is correlated with inflated planetary radii.
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic hydrogen form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect "star-planet interactions" in the well-studied HD 189733 system calls other related claims of the effect into question. A later search for radio emissions from eight exoplanets that orbit within 0.1 astronomical units of their host stars, conducted by the Arecibo radio telescope also failed to find signs of these magnetic star-planet interactions.
In 2019, the strengths of the surface magnetic fields of four hot Jupiters were estimated; they ranged between 20 and 120 gauss, compared with Jupiter's surface magnetic field of 4.3 gauss.
Plate tectonics
In 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team saying that plate tectonics would be episodic or stagnant and the other saying that plate tectonics is very likely on super-Earths, even if the planet is dry.
If super-Earths have more than 80 times as much water as Earth, then they become ocean planets with all land completely submerged. However, if there is less water than this limit, then the deep water cycle will move enough water between the oceans and mantle to allow continents to exist.
Volcanism
Large surface temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions.
Rings
The star 1SWASP J140747.93-394542.6 was occulted by an object that is circled by a ring system much larger than Saturn's rings. However, the mass of the object is not known; it could be a brown dwarf or low-mass star instead of a planet.
The brightness of optical images of Fomalhaut b could be due to starlight reflecting off a circumplanetary ring system with a radius of 20 to 40 times Jupiter's radius, about the size of the orbits of the Galilean moons.
The rings of the Solar System's gas giants are aligned with their planet's equator. However, for exoplanets that orbit close to their star, tidal forces from the star would lead to the outermost rings of a planet being aligned with the planet's orbital plane around the star. A planet's innermost rings would still be aligned with the planet's equator so that if the planet has a tilted rotational axis, then the different alignments between the inner and outer rings would create a warped ring system.
Moons
In December 2013, a candidate exomoon of the rogue planet or red dwarf MOA-2011-BLG-262L was announced. On 3 October 2018, evidence suggesting a large exomoon orbiting Kepler-1625b was reported.
Atmospheres
Atmospheres have been detected around several exoplanets. The first to be observed was HD 209458 b in 2001.
As of February 2014, more than fifty transiting and five directly imaged exoplanet atmospheres have been observed, resulting in detection of molecular spectral features; observation of day–night temperature gradients; and constraints on vertical atmospheric structure. Also, an atmosphere has been detected on the non-transiting hot Jupiter Tau Boötis b.
In May 2017, glints of light from Earth, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets.
Comet-like tails
KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet. The dust could be ash erupting from volcanos and escaping due to the small planet's low surface-gravity, or it could be from metals that are vaporized by the high temperatures of being so close to the star with the metal vapor then condensing into dust.
In June 2015, scientists reported that the atmosphere of GJ 436 b was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail.
Insolation pattern
Tidally locked planets in a 1:1 spin-orbit resonance would have their star always shining directly overhead on one spot, which would be hot with the opposite hemisphere receiving no light and being freezing cold. Such a planet could resemble an eyeball, with the hotspot being the pupil. Planets with an eccentric orbit could be locked in other resonances. 3:2 and 5:2 resonances would result in a double-eyeball pattern with hotspots in both eastern and western hemispheres. Planets with both an eccentric orbit and a tilted axis of rotation would have more complicated insolation patterns.
Surface
Surface composition
Surface features can be distinguished from atmospheric features by comparing emission and reflection spectroscopy with transmission spectroscopy. Mid-infrared spectroscopy of exoplanets may detect rocky surfaces, and near-infrared may identify magma oceans or high-temperature lavas, hydrated silicate surfaces and water ice, giving an unambiguous method to distinguish between rocky and gaseous exoplanets.
Surface temperature
The temperature of an exoplanet can be estimated by measuring the intensity of the light it receives from its parent star. For example, the planet OGLE-2005-BLG-390Lb is estimated to have a surface temperature of roughly −220 °C (50 K). However, such estimates may be substantially in error because they depend on the planet's usually unknown albedo, and because factors such as the greenhouse effect may introduce unknown complications. A few planets have had their temperature measured by observing the variation in infrared radiation as the planet moves around in its orbit and is eclipsed by its parent star. For example, the planet HD 189733b has been estimated to have an average temperature of 1,205 K (932 °C) on its dayside and 973 K (700 °C) on its nightside.
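The simplest such estimate is the equilibrium temperature, which balances absorbed starlight against re-radiated thermal power. A minimal sketch, assuming uniform heat redistribution and ignoring any greenhouse effect (which, as noted above, can change the answer substantially):

```python
# Equilibrium temperature of a planet heated only by its star.
def equilibrium_temperature(t_star, r_star, orbital_distance, albedo):
    """All lengths in the same units; temperatures in kelvin."""
    return (t_star
            * (r_star / (2 * orbital_distance)) ** 0.5
            * (1 - albedo) ** 0.25)

# Sanity check with Earth (T_sun = 5778 K, R_sun = 6.957e8 m, a = 1.496e11 m):
print(f"{equilibrium_temperature(5778, 6.957e8, 1.496e11, 0.3):.0f} K")  # ~255 K
```

The ~255 K result for Earth, about 33 K below the actual mean surface temperature, illustrates how much the neglected greenhouse effect matters.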
Habitability
As more planets are discovered, the field of exoplanetology continues to grow into a deeper study of extrasolar worlds, and will ultimately tackle the prospect of life on planets beyond the Solar System. At cosmic distances, life can only be detected if it has developed at a planetary scale and has strongly modified the planetary environment, in such a way that the modifications cannot be explained by classical physico-chemical (out-of-equilibrium) processes. For example, molecular oxygen (O2) in the atmosphere of Earth is a result of photosynthesis by living plants and many kinds of microorganisms, so it can be used as an indication of life on exoplanets, although small amounts of oxygen could also be produced by non-biological means. Furthermore, a potentially habitable planet must orbit a stable star at a distance within which planetary-mass objects with sufficient atmospheric pressure can support liquid water at their surfaces.
Habitable zone
The habitable zone around a star is the region where the temperature is just right to allow liquid water to exist on the surface of a planet; that is, not too close to the star for the water to evaporate and not too far away from the star for the water to freeze. The heat produced by stars varies depending on the size and age of the star, so the habitable zone can be at different distances for different stars. The atmospheric conditions on the planet also influence its ability to retain heat, so the location of the habitable zone is also specific to each type of planet. Desert planets (also known as dry planets), with very little water, will have less water vapor in the atmosphere than Earth and so a reduced greenhouse effect, meaning that a desert planet could maintain oases of water closer to its star than Earth is to the Sun. The lack of water also means there is less ice to reflect heat into space, so the outer edge of desert-planet habitable zones is further out. Rocky planets with a thick hydrogen atmosphere could maintain surface water much further out than the Earth–Sun distance. Planets with larger mass have wider habitable zones, because gravity reduces the water-cloud column depth, which reduces the greenhouse effect of water vapor and thus moves the inner edge of the habitable zone closer to the star.
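A common first-order way to capture the dependence on the star is to scale the Sun's habitable zone by the square root of the stellar luminosity, since the received flux falls off with the square of distance. A hedged sketch; the 0.95–1.37 AU solar-system bounds used here are illustrative values, not figures from this article:

```python
import math

# Luminosity-scaled habitable zone: flux ~ L / d^2, so equal-flux distances
# scale as sqrt(L). Inner/outer bounds for the Sun are assumed for illustration.
def habitable_zone(luminosity_solar, inner_au=0.95, outer_au=1.37):
    scale = math.sqrt(luminosity_solar)
    return inner_au * scale, outer_au * scale

# A red dwarf with 2% of the Sun's luminosity:
inner, outer = habitable_zone(0.02)
print(f"HZ roughly {inner:.2f}-{outer:.2f} AU")  # ~0.13-0.19 AU
```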
Planetary rotation rate is one of the major factors determining the circulation of the atmosphere and hence the pattern of clouds: slowly rotating planets create thick clouds that reflect more and so can be habitable much closer to their star. Earth with its current atmosphere would be habitable in Venus's orbit, if it had Venus's slow rotation. If Venus lost its water ocean due to a runaway greenhouse effect, it is likely to have had a higher rotation rate in the past. Alternatively, Venus never had an ocean because water vapor was lost to space during its formation and could have had its slow rotation throughout its history.
Tidally locked planets (a.k.a. "eyeball" planets) can be habitable closer to their star than previously thought due to the effect of clouds: at high stellar flux, strong convection produces thick water clouds near the substellar point that greatly increase the planetary albedo and reduce surface temperatures.
Planets in the habitable zones of low-metallicity stars may be more habitable for complex life on land than those of high-metallicity stars, because the spectrum of a high-metallicity star is less likely to drive the formation of ozone, allowing more ultraviolet rays to reach the planet's surface.
Habitable zones have usually been defined in terms of surface temperature. However, over half of Earth's biomass is from subsurface microbes, and temperature increases with depth, so the subsurface can be conducive to microbial life even when the surface is frozen; if this is taken into account, the habitable zone extends much further from the star, and even rogue planets could have liquid water at sufficient depths underground. In an earlier era of the universe, the temperature of the cosmic microwave background would have allowed any rocky planets that existed to have liquid water on their surface regardless of their distance from a star. Jupiter-like planets might not themselves be habitable, but they could have habitable moons.
Ice ages and snowball states
The outer edge of the habitable zone is where planets are completely frozen, but planets well inside the habitable zone can periodically become frozen. If orbital fluctuations or other causes produce cooling, then more ice forms, and because ice reflects sunlight, this causes further cooling, creating a feedback loop until the planet is completely or nearly completely frozen. When the surface is frozen, carbon dioxide weathering stops, resulting in a build-up of carbon dioxide in the atmosphere from volcanic emissions. This creates a greenhouse effect which thaws the planet again. Planets with a large axial tilt are less likely to enter snowball states and can retain liquid water further from their star. Large fluctuations of axial tilt can have even more of a warming effect than a fixed large tilt. Paradoxically, planets orbiting cooler stars, such as red dwarfs, are less likely to enter snowball states, because the infrared radiation emitted by cooler stars is mostly at wavelengths that are absorbed by ice, heating it up.
Tidal heating
If a planet has an eccentric orbit, then tidal heating can provide another source of energy besides stellar radiation. This means that eccentric planets in the radiative habitable zone can be too hot for liquid water. Tides also circularize orbits over time, so there could be planets in the habitable zone with circular orbits that have no water because they used to have eccentric orbits. Eccentric planets further out than the habitable zone would still have frozen surfaces, but the tidal heating could create a subsurface ocean similar to Europa's. In some planetary systems, such as in the Upsilon Andromedae system, the eccentricity of orbits is maintained or even periodically varied by perturbations from other planets in the system. Tidal heating can cause outgassing from the mantle, contributing to the formation and replenishment of an atmosphere.
Potentially habitable planets
A review in 2015 identified the exoplanets Kepler-62f, Kepler-186f and Kepler-442b as the best candidates for being potentially habitable. These lie 1,200, 490 and 1,120 light-years away, respectively. Of these, Kepler-186f is similar in size to Earth, with a radius of 1.2 Earth radii, and it is located towards the outer edge of the habitable zone around its red dwarf star.
Among the nearest terrestrial exoplanet candidates, Proxima Centauri b is about 4.2 light-years away. Its equilibrium temperature is estimated to be about 234 K (−39 °C).
Earth-size planets
In November 2013, it was estimated that 22±8% of Sun-like stars in the Milky Way galaxy may have an Earth-sized planet in the habitable zone. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earths, rising to 40 billion if red dwarfs are included.
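The arithmetic behind such estimates is straightforward; in the sketch below, the fraction of Milky Way stars assumed to be Sun-like is a hypothetical input chosen so that the numbers reproduce the quoted total:

```python
# Hedged arithmetic check of the estimate quoted above.
TOTAL_STARS = 200e9
FRACTION_SUNLIKE = 0.25         # assumed for illustration, not from the article
FRACTION_WITH_HZ_EARTH = 0.22   # central value of the 22 +/- 8% estimate

n_habitable = TOTAL_STARS * FRACTION_SUNLIKE * FRACTION_WITH_HZ_EARTH
print(f"~{n_habitable / 1e9:.0f} billion potentially habitable Earth-sized planets")  # ~11
```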
Kepler-186f, a 1.2-Earth-radius planet in the habitable zone of a red dwarf, was reported in April 2014.
Proxima Centauri b, a planet in the habitable zone of Proxima Centauri, the nearest known star to the Solar System, has an estimated minimum mass of 1.27 times the mass of the Earth.
In February 2013, researchers speculated that up to 6% of small red dwarfs may have Earth-size planets. This suggests that the closest one to the Solar System could be 13 light-years away. The estimated distance increases to 21 light-years when a 95% confidence interval is used. In March 2013, a revised estimate gave an occurrence rate of 50% for Earth-size planets in the habitable zone of red dwarfs.
At 1.63 times Earth's radius, Kepler-452b is the first discovered near-Earth-size planet in the habitable zone around a G2-type Sun-like star (announced in July 2015).
Planetary system
Exoplanets are often members of planetary systems of multiple planets around a star. The planets interact with each other gravitationally and sometimes form resonant systems where the orbital periods of the planets are in integer ratios. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance.
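An 8:6:4:3 chain means the four orbital periods stand very nearly in the ratio 3:4:6:8. A quick check, using approximate published periods for Kepler-223's planets (treated here as illustrative values, not figures from this article):

```python
# Checking the 8:6:4:3 resonance chain for Kepler-223.
periods = [7.384, 9.846, 14.789, 19.726]  # approximate periods in days (b, c, d, e)

# In an 8:6:4:3 resonance the periods should scale as 3:4:6:8.
base = periods[0] / 3
for p, expected in zip(periods, [3, 4, 6, 8]):
    print(f"period {p:7.3f} d  ->  ratio {p / base:.2f} (expected {expected})")
```

The small deviations from exact integer ratios are real; resonant systems librate around the commensurability rather than sitting exactly on it.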
Some hot Jupiters orbit their stars in the opposite direction to their stars' rotation. One proposed explanation is that hot Jupiters tend to form in dense clusters, where perturbations are more common and gravitational capture of planets by neighboring stars is possible.
Search projects
ANDES – The ArmazoNes High Dispersion Echelle Spectrograph, a planet finding and planet characterisation spectrograph, is expected to be fitted onto ESO's ELT 39.3m telescope. ANDES was formerly known as HIRES, which itself was created after a merger of the consortia behind the earlier CODEX (optical high-resolution) and SIMPLE (near-infrared high-resolution) spectrograph concepts.
CoRoT – Space telescope that found the first transiting rocky planet.
ESPRESSO – A spectrograph for finding rocky planets and for stable spectroscopic observations, mounted on ESO's 4 × 8.2 m VLT, sited on the levelled summit of Cerro Paranal in the Atacama Desert of northern Chile.
HARPS – High-precision echelle planet-finding spectrograph installed on the ESO's 3.6m telescope at La Silla Observatory in Chile.
Kepler – Mission to look for large numbers of exoplanets using the transit method.
TESS – Searches for new exoplanets; its field of view rotates so that by the end of its two-year primary mission it will have observed stars over almost the entire sky. It is expected to find at least 3,000 new exoplanets.
| Physical sciences | Planetary science | null |
9765 | https://en.wikipedia.org/wiki/Equuleus | Equuleus | Equuleus is a faint constellation located just north of the celestial equator. Its name is Latin for "little horse", a foal. It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is the second smallest of the modern constellations (after Crux), spanning only 72 square degrees. It is also very faint, having no stars brighter than the fourth magnitude.
Notable features
Stars
The brightest star in Equuleus is α Equulei, traditionally called Kitalpha, a yellow star of magnitude 3.9, 186 light-years from Earth. Its traditional name means "the section of the horse".
There are few variable stars in Equuleus. Only around 25 are known, most of which are faint. γ Equulei is an α2 CVn variable star, ranging between magnitudes 4.58 and 4.77 over a period of around 12½ minutes. It is a white star 115 light-years from Earth, and has an optical companion of magnitude 6.1, 6 Equulei. The pair can be split in binoculars. 6 Equulei is itself an astrometric binary system, with an apparent magnitude of 6.07. R Equulei is a Mira variable that ranges between magnitudes 8.0 and 15.7 over nearly 261 days. It has a spectral type of M3e-M4e and has an average B-V colour index of +1.41.
Equuleus contains some double stars of interest. γ Equulei consists of a primary star with a magnitude around 4.7 (slightly variable) and a secondary star of magnitude 11.6, separated by 2 arcseconds. ε Equulei is a triple star also designated 1 Equulei. The system, 197 light-years away, has a primary of magnitude 5.4 that is itself a binary star; its components are of magnitude 6.0 and 6.3 and have a period of 101 years. The secondary is of magnitude 7.4 and is visible in small telescopes. The components of the primary are drawing closer together and, from 2015, are no longer divisible in amateur telescopes. δ Equulei is a binary star with an orbital period of 5.7 years, which at one time was the shortest known orbital period for an optical binary. The two components of the system are never more than 0.35 arcseconds apart.
Deep-sky objects
Due to its small size and its distance from the plane of the Milky Way, Equuleus is rather devoid of deep sky objects. Some very faint galaxies in the NGC catalog between magnitudes 13 and 15 include NGC 7015, NGC 7040, and NGC 7046. NGC 7045 is a triple star that was mistaken for a nebula by its discoverer, John Herschel. Other faint galaxies in the IC Catalog include IC 1360, IC 1361, IC 1364, IC 1367, IC 1375, and IC 5083. IC 1365 is a group of galaxies. The magnitudes of these objects vary from 14.5 to 15.5, making them hard to see in even the largest of amateur telescopes.
Mythology
In Greek mythology, one myth associates Equuleus with the foal Celeris (meaning "swiftness" or "speed"), who was the offspring or brother of the winged horse Pegasus. Celeris was given to Castor by Mercury. Other myths say that Equuleus is the horse struck from Poseidon's trident, during the contest between him and Athena when deciding which would be the superior. Because this section of stars rises before Pegasus, it is often called Equus Primus, or the First Horse. Equuleus is also linked to the story of Philyra and Saturn.
Created by Hipparchus and included by Ptolemy, it abuts Pegasus; unlike the larger horse, it is depicted as a horse's head alone.
Equivalents
In Chinese astronomy, the stars that correspond to Equuleus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
| Physical sciences | Other | Astronomy |
9770 | https://en.wikipedia.org/wiki/Eclipse | Eclipse | An eclipse is an astronomical event which occurs when an astronomical object or spacecraft is temporarily obscured, by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a syzygy. An eclipse is the result of either an occultation (completely hidden) or a transit (partially hidden). A "deep eclipse" (or "deep occultation") is when a small astronomical object is behind a bigger one.
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses with the plane of the Moon's orbit around the Earth and the line defined by the intersecting planes points near the Sun. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane with each other, then eclipses would happen every month. There would be a lunar eclipse at every full moon, and a solar eclipse at every new moon. It is because of the non-planar differences that eclipses are not a common event. If both orbits were perfectly circular, then each eclipse would be the same type every month.
Lunar eclipses can be viewed from the entire nightside half of the Earth. But solar eclipses, particularly total eclipses occurring at any one particular point on the Earth's surface, are very rare events that can be many decades apart.
Etymology
The term is derived from the ancient Greek noun ἔκλειψις (ékleipsis), which means 'the abandonment', 'the downfall', or 'the darkening of a heavenly body', which is derived from the verb ἐκλείπω (ekleípō), meaning 'to abandon', 'to darken', or 'to cease to exist', a combination of the prefix ἐκ- (ek-), from the preposition ἐκ (ek) 'out', and the verb λείπω (leípō) 'to be absent'.
Umbra, penumbra and antumbra
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse.
Typically the cross-section of the objects involved in an astronomical eclipse is roughly disk-shaped. The region of an object's shadow during an eclipse is divided into three parts:
The umbra (Latin for 'shadow'), within which the object completely covers the light source. For the Sun, this light source is the photosphere.
The antumbra (from Latin ante, 'before, in front of', plus umbra) extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to completely cover it.
The penumbra (from the Latin paene, 'almost, nearly', plus umbra), within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable, because the antumbra of the Sun-Earth system lies far beyond the Moon. Analogously, Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun and thus cannot produce an annular eclipse. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The first contact occurs when the eclipsing object's disc first starts to impinge on the light source; second contact is when the disc moves completely within the light source; third contact when it starts to move out of the light; and fourth or last contact when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by:

\[ L = \frac{r \cdot R_o}{R_s - R_o} \]

where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384×10^6 km, which is much larger than the Moon's semimajor axis of 3.844×10^5 km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth, producing a faint, ruddy illumination of the Moon even at totality.
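A quick numerical check of this formula for the Sun–Earth system, using standard mean values for the radii and distance:

```python
# Umbra cone length L = r * Ro / (Rs - Ro), from similar triangles.
R_SUN = 6.957e5      # km
R_EARTH = 6.371e3    # km
SUN_EARTH = 1.496e8  # km, mean Sun-Earth distance

L = SUN_EARTH * R_EARTH / (R_SUN - R_EARTH)
print(f"Earth's umbra length ~ {L:.3e} km")  # ~1.38e6 km, well beyond the Moon
```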
On Earth, the shadow cast during an eclipse moves at very roughly 1 km per second. The exact speed depends on where the shadow falls on the Earth's surface and the angle at which it is moving.
Eclipse cycles
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world. In one saros period there are 238.99 anomalistic months, 241.03 sidereal months, 242.00 draconic (nodical) months, and exactly 223 synodic months. Although the orbit of the Moon does not give exact integers, the numbers of orbit cycles are close enough to integers to give strong similarity for eclipses spaced at 18.03-year intervals.
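These near-integer counts can be verified directly from the mean month lengths; the values below are standard mean figures rather than numbers from this article:

```python
# 223 synodic months measured against the other lunar months (mean days),
# showing why eclipse geometry nearly repeats after one saros.
SYNODIC = 29.530589
DRACONIC = 27.212221   # node-to-node ("nodical")
ANOMALISTIC = 27.554550
SIDEREAL = 27.321662

saros_days = 223 * SYNODIC
print(f"saros = {saros_days:.2f} days (~{saros_days / 365.25:.2f} yr)")
for name, month in [("draconic", DRACONIC), ("anomalistic", ANOMALISTIC),
                    ("sidereal", SIDEREAL)]:
    print(f"{name:12s}: {saros_days / month:.3f} months")
```

The draconic count (241.999) is the critical one: it means the Moon returns almost exactly to a node, so the Sun–Earth–Moon alignment that produced the original eclipse recurs.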
Earth–Moon system
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros.
Between 1901 and 2100 there is a maximum of seven eclipses in:
four (penumbral) lunar and three solar eclipses: 1908, 2038.
four solar and three lunar eclipses: 1918, 1973, 2094.
five solar and two lunar eclipses: 1934.
Excluding penumbral lunar eclipses, there are a maximum of seven eclipses in:
1591, 1656, 1787, 1805, 1918, 1935, 1982, and 2094.
Solar eclipse
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit.
When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth to eclipse the Sun in 1969 and when the Cassini probe observed Saturn to eclipse the Sun in 2006.
Lunar eclipse
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
Historical record
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223 BC, while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 BC. The hypothesis that classical-era astronomers used Babylonian eclipse records, mostly from the 13th century BC, provides a feasible and mathematically consistent explanation for how the Greeks found all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin.
The first person to give a scientific explanation of eclipses was Anaxagoras (c. 500–428 BC), who stated that the Moon shines by reflected light from the Sun.
In the 5th century AD, solar and lunar eclipses were scientifically explained by Aryabhata in his treatise Aryabhatiya. Aryabhata states that the Moon and planets shine by reflected sunlight, explains eclipses in terms of shadows cast by and falling on Earth, and provides the computation and the size of the eclipsed part during an eclipse. The Indian computations were so accurate that the 18th-century French scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found that they predicted the duration of the lunar eclipse of 30 August 1765 short by only 41 seconds, whereas his own charts were long by 68 seconds.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
Eclipses in mythology and religion
The American author Gene Weingarten described the tension between belief and eclipses thus: "I am a devout atheist but can't explain why the moon is exactly the right size, and gets positioned so precisely between the Earth and the sun, that total solar eclipses are perfect. It bothers me."
The Graeco-Roman historian Cassius Dio, writing between AD 211–229, relates the anecdote that Emperor Claudius considered it necessary to prevent disturbance among the Roman population by publishing a prediction for a solar eclipse which would fall on his birthday anniversary [1 August in the year AD 45]. In this context, Cassius Dio provides a detailed explanation of solar and lunar eclipses.
Typically in mythology, eclipses were understood to be one variation or another of a spiritual battle between the sun and evil forces or spirits of darkness. More specifically, in Norse mythology, it is believed that there is a wolf by the name of Fenrir that is in constant pursuit of the Sun, and eclipses are thought to occur when the wolf successfully devours the divine Sun. Other Norse tribes believed that there are two wolves by the names of Sköll and Hati that are in pursuit of the Sun and the Moon, known by the names of Sol and Mani, and these tribes believed that an eclipse occurs when one of the wolves successfully eats either the Sun or the Moon.
In most types of mythologies and certain religions, eclipses were seen as a sign that the gods were angry and that danger was soon to come, so people often altered their actions in an effort to dissuade the gods from unleashing their wrath. In the Hindu religion, for example, people often sing religious hymns for protection from the evil spirits of the eclipse, and many people of the Hindu religion refuse to eat during an eclipse to avoid the effects of the evil spirits. Hindu people living in India will also wash off in the Ganges River, which is believed to be spiritually cleansing, directly following an eclipse to clean themselves of the evil spirits. In early Judaism and Christianity, eclipses were viewed as signs from God, and some eclipses were seen as a display of God's greatness or even signs of cycles of life and death. However, more ominous eclipses such as a blood moon were believed to be a divine sign that God would soon destroy their enemies.
Other planets and dwarf planets
Gas giants
The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
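The essence of Rømer's estimate can be reproduced in two lines: the accumulated delay corresponds to light crossing the diameter of Earth's orbit. A hedged sketch with rounded modern values:

```python
# Roemer's reasoning: the ~17-minute accumulated eclipse delay equals the
# light-travel time across the diameter of Earth's orbit (~2 AU).
AU = 1.496e11      # metres
delay_s = 17 * 60  # the observed delay, in seconds

c_estimate = 2 * AU / delay_s
print(f"speed of light ~ {c_estimate:.2e} m/s")  # ~2.9e8 m/s, close to 3.0e8
```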
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer, because every hour of difference corresponds to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
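A toy example of the longitude calculation, with hypothetical eclipse times chosen purely for illustration:

```python
# Earth turns 15 degrees per hour, so a local-vs-Greenwich timing difference
# for the same eclipse maps directly to longitude.
predicted_greenwich_hour = 22.0  # tabulated time of a Jovian moon eclipse (hypothetical)
observed_local_hour = 19.0       # local solar time at which it was seen (hypothetical)

longitude_deg = (predicted_greenwich_hour - observed_local_hour) * 15
print(f"observer is ~{longitude_deg:.0f} degrees west of Greenwich")  # 45 W
```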
On the other three gas giants (Saturn, Uranus and Neptune), eclipses only occur at certain periods during the planet's orbit, due to the higher inclination between the orbits of their moons and the planet's orbital plane. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
Mars
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Mercury and Venus
Eclipses are impossible on Mercury and Venus, which have no moons. However, as seen from the Earth, both have been observed to transit across the face of the Sun. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happens less than once a century. According to NASA, the next pair of Venus transits will occur on December 10, 2117, and December 8, 2125. Transits of Mercury are much more common, occurring 13 times each century, on average.
Eclipsing binaries
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
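The quoted magnitudes translate directly into the fraction of light lost at mid-eclipse via the standard magnitude-flux relation; a minimal check:

```python
# Converting Algol's quoted magnitudes to a flux ratio using
# flux2 / flux1 = 10 ** (-0.4 * (m2 - m1)).
m_bright, m_faint = 2.1, 3.4

flux_ratio = 10 ** (-0.4 * (m_faint - m_bright))
print(f"flux at mid-eclipse: {flux_ratio:.0%} of maximum")  # ~30%
```

A drop to roughly 30% of maximum light is far too large to attribute to ordinary stellar variability, which is why the eclipse interpretation proved compelling.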
Types
Sun – Moon – Earth: Solar eclipse | annular eclipse | hybrid eclipse | partial eclipse
Sun – Earth – Moon: Lunar eclipse | penumbral eclipse | partial lunar eclipse | central lunar eclipse
Sun – Phobos – Mars: Transit of Phobos from Mars | Solar eclipses on Mars
Sun – Deimos – Mars: Transit of Deimos from Mars | Solar eclipses on Mars
Other types: Solar eclipses on Jupiter | Solar eclipses on Saturn | Solar eclipses on Uranus | Solar eclipses on Neptune | Solar eclipses on Pluto
| Physical sciences | Celestial mechanics | null |
9775 | https://en.wikipedia.org/wiki/Endoplasmic%20reticulum | Endoplasmic reticulum | The endoplasmic reticulum (ER) is a part of the transportation system of the eukaryotic cell, and has many other important functions such as protein folding. It is a type of organelle made up of two subunits – rough endoplasmic reticulum (RER) and smooth endoplasmic reticulum (SER). The endoplasmic reticulum is found in most eukaryotic cells and forms an interconnected network of flattened, membrane-enclosed sacs known as cisternae (in the RER) and tubular structures (in the SER). The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum is not found in red blood cells or spermatozoa.
The two types of ER share many of the same proteins and engage in certain common activities such as the synthesis of certain lipids and cholesterol. Different types of cells contain different ratios of the two types of ER depending on the activities of the cell. RER is found mainly toward the nucleus of the cell, and SER towards the cell membrane (plasma membrane) of the cell.
The outer (cytosolic) face of the RER is studded with ribosomes that are the sites of protein synthesis. The RER is especially prominent in cells such as hepatocytes. The SER lacks ribosomes and functions in lipid synthesis but not metabolism, the production of steroid hormones, and detoxification. The SER is especially abundant in mammalian liver and gonad cells.
The ER was observed by light microscopy by Garnier in 1897, who coined the term ergastoplasm. The lacy membranes of the endoplasmic reticulum were first seen by electron microscopy in 1945 by Keith R. Porter, Albert Claude, and Ernest F. Fullam. Later, the word reticulum, which means "network", was applied by Porter in 1953 to describe this fabric of membranes.
Structure
The general structure of the endoplasmic reticulum is a network of membranes called cisternae. These sac-like structures are held together by the cytoskeleton. The phospholipid membrane encloses the cisternal space (or lumen), which is continuous with the perinuclear space but separate from the cytosol. The functions of the endoplasmic reticulum can be summarized as the synthesis and export of proteins and membrane lipids, but they vary between ER type, cell type, and cell function. The quantity of both rough and smooth endoplasmic reticulum in a cell can slowly interchange from one type to the other, depending on the changing metabolic activities of the cell. Transformation can include embedding of new proteins in membrane as well as structural changes. Changes in protein content may occur without noticeable structural changes.
Rough endoplasmic reticulum
The surface of the rough endoplasmic reticulum (often abbreviated RER or rough ER; also called granular endoplasmic reticulum) is studded with protein-manufacturing ribosomes giving it a "rough" appearance (hence its name). The binding site of the ribosome on the rough endoplasmic reticulum is the translocon. However, the ribosomes are not a stable part of this organelle's structure as they are constantly being bound and released from the membrane. A ribosome only binds to the RER once a specific protein-nucleic acid complex forms in the cytosol. This special complex forms when a free ribosome begins translating the mRNA of a protein destined for the secretory pathway. The first 5–30 amino acids polymerized encode a signal peptide, a molecular message that is recognized and bound by a signal recognition particle (SRP). Translation pauses and the ribosome complex binds to the RER translocon where translation continues with the nascent (new) protein forming into the RER lumen and/or membrane. The protein is processed in the ER lumen by an enzyme (a signal peptidase), which removes the signal peptide. Ribosomes at this point may be released back into the cytosol; however, non-translating ribosomes are also known to stay associated with translocons.
The membrane of the rough endoplasmic reticulum is in the form of large double-membrane sheets that are located near, and continuous with, the outer layer of the nuclear envelope. The double membrane sheets are stacked and connected through several right- or left-handed helical ramps, the "Terasaki ramps", giving rise to a structure resembling a parking garage. Although there is no continuous membrane between the endoplasmic reticulum and the Golgi apparatus, membrane-bound transport vesicles shuttle proteins between these two compartments. Vesicles are surrounded by coating proteins called COPI and COPII. COPII targets vesicles to the Golgi apparatus and COPI marks them to be brought back to the rough endoplasmic reticulum. The rough endoplasmic reticulum works in concert with the Golgi complex to target new proteins to their proper destinations. The second method of transport out of the endoplasmic reticulum involves areas called membrane contact sites, where the membranes of the endoplasmic reticulum and other organelles are held closely together, allowing the transfer of lipids and other small molecules.
The rough endoplasmic reticulum is key in multiple functions:
Manufacture of lysosomal enzymes with a mannose-6-phosphate marker added in the cis-Golgi network.
Manufacture of secreted proteins, either secreted constitutively with no tag or secreted in a regulatory manner involving clathrin and paired basic amino acids in the signal peptide.
Integral membrane proteins that stay embedded in the membrane as vesicles exit and bind to new membranes. Rab proteins are key in targeting the membrane; SNAP and SNARE proteins are key in the fusion event.
Initial glycosylation as assembly continues. This is N-linked (O-linking occurs in the Golgi).
N-linked glycosylation: If the protein is properly folded, oligosaccharyltransferase recognizes the AA sequence N-X-S or N-X-T (where X is any residue except proline) and adds a 14-sugar backbone (2 N-acetylglucosamine, 9 branching mannose, and 3 glucose at the end) to the side-chain nitrogen of Asn.
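The sequon rule just stated lends itself to a simple motif scan. A sketch, assuming a one-letter protein sequence (the sequence and function name are invented for illustration):

```python
import re

# N-glycosylation sequon: Asn, any residue except Pro, then Ser or Thr.
SEQON = re.compile(r"N[^P][ST]")

def find_sequons(protein):
    """Return 1-based (position, sequon) pairs for a one-letter sequence.
    Illustrative only: in the cell, folding state and accessibility also
    determine whether a sequon is actually glycosylated."""
    return [(m.start() + 1, m.group()) for m in SEQON.finditer(protein)]

print(find_sequons("MKANCSILNPTQWNKT"))  # hypothetical peptide -> [(4, 'NCS'), (14, 'NKT')]
```

Note that the N-P-T run in the example is correctly skipped, since proline at the X position blocks glycosylation.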
Smooth endoplasmic reticulum
In most cells the smooth endoplasmic reticulum (abbreviated SER) is scarce. Instead there are areas where the ER is partly smooth and partly rough; this region is called the transitional ER. The transitional ER gets its name because it contains ER exit sites. These are areas where the transport vesicles, which contain lipids and proteins made in the ER, detach from the ER and start moving to the Golgi apparatus. Specialized cells, however, can have abundant smooth endoplasmic reticulum, and in these cells the smooth ER has many functions. It synthesizes lipids, phospholipids, and steroids. Cells which secrete these products, such as those in the testes, ovaries, and sebaceous glands, have an abundance of smooth endoplasmic reticulum. It also carries out the metabolism of carbohydrates, detoxification of natural metabolism products and of alcohol and drugs, attachment of receptors on cell membrane proteins, and steroid metabolism. In muscle cells, it regulates calcium ion concentration. Smooth endoplasmic reticulum is found in a variety of cell types (both animal and plant), and it serves different functions in each. The smooth endoplasmic reticulum also contains the enzyme glucose-6-phosphatase, which converts glucose-6-phosphate to glucose, a step in gluconeogenesis. It is connected to the nuclear envelope and consists of tubules that are located near the cell periphery. These tubules sometimes branch, forming a network that is reticular in appearance. In some cells, there are dilated areas like the sacs of rough endoplasmic reticulum. The network of smooth endoplasmic reticulum allows for an increased surface area to be devoted to the action or storage of key enzymes and the products of these enzymes.
Sarcoplasmic reticulum
The sarcoplasmic reticulum (SR), from the Greek σάρξ sarx ("flesh"), is smooth ER found in muscle cells. The only structural difference between this organelle and the smooth endoplasmic reticulum is the composition of proteins they have, both bound to their membranes and drifting within the confines of their lumens. This fundamental difference is indicative of their functions: The endoplasmic reticulum synthesizes molecules, while the sarcoplasmic reticulum stores calcium ions and pumps them out into the sarcoplasm when the muscle fiber is stimulated. After their release from the sarcoplasmic reticulum, calcium ions interact with contractile proteins that utilize ATP to shorten the muscle fiber. The sarcoplasmic reticulum plays a major role in excitation-contraction coupling.
Functions
The endoplasmic reticulum serves many general functions, including the folding of protein molecules in sacs called cisternae and the transport of synthesized proteins in vesicles to the Golgi apparatus. Rough endoplasmic reticulum is also involved in protein synthesis. Correct folding of newly made proteins is made possible by several endoplasmic reticulum chaperone proteins, including protein disulfide isomerase (PDI), ERp29, the Hsp70 family member BiP/Grp78, calnexin, calreticulin, and the peptidylprolyl isomerase family. Only properly folded proteins are transported from the rough ER to the Golgi apparatus – unfolded proteins cause an unfolded protein response as a stress response in the ER. Disturbances in redox regulation, calcium regulation, glucose deprivation, and viral infection or the over-expression of proteins can lead to endoplasmic reticulum stress response (ER stress), a state in which the folding of proteins slows, leading to an increase in unfolded proteins. This stress is emerging as a potential cause of damage in hypoxia/ischemia, insulin resistance, and other disorders.
Protein transport
Secretory proteins, mostly glycoproteins, are moved across the endoplasmic reticulum membrane. Proteins that are transported by the endoplasmic reticulum throughout the cell are marked with an address tag called a signal sequence. The N-terminus (one end) of a polypeptide chain (i.e., a protein) contains a few amino acids that work as an address tag, which are removed when the polypeptide reaches its destination. Nascent peptides reach the ER via the translocon, a membrane-embedded multiprotein complex. Proteins that are destined for places outside the endoplasmic reticulum are packed into transport vesicles and moved along the cytoskeleton toward their destination. In human fibroblasts, the ER is always co-distributed with microtubules, and depolymerisation of the latter causes its co-aggregation with mitochondria, which are also associated with the ER.
The endoplasmic reticulum is also part of a protein sorting pathway. It is, in essence, the transportation system of the eukaryotic cell. The majority of its resident proteins are retained within it through a retention motif. This motif is composed of four amino acids at the end of the protein sequence. The most common retention sequences are KDEL for lumen-located proteins and KKXX for transmembrane proteins. However, variations of KDEL and KKXX do occur, and other sequences can also give rise to endoplasmic reticulum retention. It is not known whether such variation can lead to sub-ER localizations. There are three KDEL (1, 2 and 3) receptors in mammalian cells, and they have a very high degree of sequence identity. The functional differences between these receptors remain to be established.
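The retention motifs described above are simple enough to check mechanically. A toy classifier, assuming one-letter sequences (the example sequences are invented, and real retention also depends on context):

```python
def er_retention_signal(seq):
    """Classify a protein's C-terminal ER-retention motif.
    KDEL: last four residues, typical of soluble (lumenal) proteins.
    KKXX: lysines at positions -3 and -4, typical of membrane proteins.
    A sketch only; functional variants of both motifs exist."""
    if seq.endswith("KDEL"):
        return "lumenal retention (KDEL)"
    if len(seq) >= 4 and seq[-4:-2] == "KK":
        return "transmembrane retention (KKXX)"
    return "no canonical ER-retention motif"

print(er_retention_signal("MASPDIHKDEL"))  # -> lumenal retention (KDEL)
print(er_retention_signal("MLLAVGKKTN"))   # -> transmembrane retention (KKXX)
```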
Bioenergetic regulation of ER ATP supply by a CaATiER mechanism
The endoplasmic reticulum does not harbor an ATP-regeneration machinery and therefore requires ATP import from mitochondria. The imported ATP is vital for the ER to carry out its housekeeping functions, such as protein folding and trafficking.
The ER ATP transporter, SLC35B1/AXER, was recently cloned and characterized; mitochondria supply ATP to the ER through a Ca2+-antagonized transport into the ER (CaATiER) mechanism. The CaATiER mechanism shows sensitivity to cytosolic Ca2+ over the high nM to low μM range, with the Ca2+-sensing element yet to be identified and validated.
Clinical significance
Increased and supraphysiological ER stress in pancreatic β cells disrupts normal insulin secretion, leading to hyperinsulinemia and consequently peripheral insulin resistance associated with obesity in humans. Human clinical trials also suggested a causal link between obesity-induced increase in insulin secretion and peripheral insulin resistance.
Abnormalities in XBP1 lead to a heightened endoplasmic reticulum stress response and subsequently cause a higher susceptibility to inflammatory processes that may even contribute to Alzheimer's disease. In the colon, XBP1 anomalies have been linked to the inflammatory bowel diseases, including Crohn's disease.
The unfolded protein response (UPR) is a cellular stress response related to the endoplasmic reticulum. The UPR is activated in response to an accumulation of unfolded or misfolded proteins in the lumen of the endoplasmic reticulum. The UPR functions to restore normal function of the cell by halting protein translation, degrading misfolded proteins, and activating the signaling pathways that lead to increasing the production of molecular chaperones involved in protein folding. Sustained overactivation of the UPR has been implicated in prion diseases as well as several other neurodegenerative diseases and the inhibition of the UPR could become a treatment for those diseases.
| Biology and health sciences | Organelles and other cell parts | null |
9804 | https://en.wikipedia.org/wiki/Electric%20charge | Electric charge | Electric charge (symbol q, sometimes Q) is a physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property: the net charge of an isolated system, the quantity of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge; if there are fewer, it will have a positive charge; and if there are equal numbers, it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about 1.602×10⁻¹⁹ C, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of ⅓e, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e.
Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth.
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges.
Overview
Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either −⅓e or +⅔e, but it is believed they always occur in combinations whose total charge is an integer multiple of e; free-standing quarks have never been observed.
By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign.
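In SI form, Coulomb's law reads F = k·q₁·q₂/r², with k ≈ 8.99×10⁹ N·m²/C². A minimal sketch of the proportionalities just described (the function name and example values are ours):

```python
K = 8.9875e9  # Coulomb constant in N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Electrostatic force between two point charges (charges in
    coulombs, separation r in metres). A positive result means
    repulsion (like signs); a negative result means attraction."""
    return K * q1 * q2 / r**2

# Two +1 microcoulomb charges 10 cm apart repel with ~0.9 N:
print(coulomb_force(1e-6, 1e-6, 0.1))
```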
The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral.
Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current.
Unit
The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
The elementary charge is defined as a fundamental constant in the SI. The value of the elementary charge, when expressed in SI units, is exactly e = 1.602176634×10⁻¹⁹ C.
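Two small consequences of the definitions above, worked as arithmetic (the script is ours):

```python
E = 1.602176634e-19  # elementary charge in coulombs (exact in the SI)

# A conductor carrying one ampere for one second passes one coulomb (Q = I*t):
current, time = 1.0, 1.0
charge = current * time  # 1 C

print(charge / E)  # ~6.24e18 elementary charges in one coulomb
```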
After discovering the quantized character of charge, in 1891, George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge or the fundamental unit of charge, or is simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. approximately 96,485 C.
History
From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account about amber is known only from a report written in the early 200s AD. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, though there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect.
In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon.
In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electrics. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge".
Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Another European pioneer was Robert Boyle, who in 1675 published the first book in English devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies.
In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia.
Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745).
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium.
Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter, and coined much of the field's vocabulary (battery among other terms); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claims that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward.
It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge.
Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path.
In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity).
In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body.
In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state.
In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at the microscopic level.
Role of charge in static electricity
Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects.
Electrification by sliding
When a piece of glass and a piece of resin—neither of which exhibit any electrical properties—are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other.
A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin causes these phenomena:
The two pieces of glass repel each other.
Each piece of glass attracts each piece of resin.
The two pieces of resin repel each other.
This attraction and repulsion is an electrical phenomenon, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts.
If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified.
An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention—just as it is a matter of convention in mathematical diagram to reckon positive distances towards the right hand.
Role of charge in electric current
Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners.
Conservation of electric charge
The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I:

$$-\frac{d}{dt}\int_V \rho \,\mathrm{d}V = \oint_S \mathbf{J}\cdot \mathrm{d}\mathbf{S} = I$$

Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:

$$I = -\frac{dq}{dt}$$

The charge transferred between times $t_\mathrm{i}$ and $t_\mathrm{f}$ is obtained by integrating both sides:

$$q = \int_{t_\mathrm{i}}^{t_\mathrm{f}} I \,\mathrm{d}t$$
where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface.
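As a numerical check of the last relation, charge is simply the time-integral of current. A short sketch with invented values:

```python
import numpy as np

# A current ramping linearly from 0 A to 2 A over 3 s:
t = np.linspace(0.0, 3.0, 301)
current = (2.0 / 3.0) * t

# q = integral of I dt; the triangle area is 0.5 * 3 s * 2 A = 3 C.
q = float(np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t)))
print(q)  # ~3.0 coulombs
```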
Relativistic invariance
Aside from the properties described in articles about electromagnetism, electric charge is a relativistic invariant. This means that any particle that has electric charge q has the same electric charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the electric charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as that of two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).
| Physical sciences | Electrostatics | null |
9813 | https://en.wikipedia.org/wiki/Extinction%20event | Extinction event | An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp fall in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the background extinction rate and the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from disagreement as to what constitutes a "major" extinction event, and the data chosen to measure past diversity.
The "Big Five" mass extinctions
In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five particular geological intervals with excessive diversity loss. They were originally identified as outliers on a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that in the current, Phanerozoic Eon, multicellular animal life has experienced at least five major and many minor mass extinctions. The "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events. All five events in the Phanerozoic Eon were preceded, much earlier, by the presumed far more extensive mass extinction of microbial life during the Great Oxidation Event (a.k.a. Oxygen Catastrophe) early in the Proterozoic Eon. At the end of the Ediacaran and just before the Cambrian explosion, yet another Proterozoic extinction event (of unknown magnitude) is speculated to have ushered in the Phanerozoic.
Despite the common presentation focusing only on these five events, no measure of extinction shows any definite line separating them from the many other Phanerozoic extinction events that appear to have been only slightly lesser catastrophes; further, using different methods of calculating an extinction's impact can lead to other events featuring in the top five.
Fossil records of older events are more difficult to interpret. This is because:
Older fossils are more difficult to find, as they are usually buried at a considerable depth.
Dating of older fossils is more difficult.
Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched.
Prehistoric environmental events can disturb the deposition process.
Marine fossils tend to be better preserved than their more sought-after land-based counterparts, but the deposition and preservation of fossils on land is more erratic.
It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increase in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias.
Sixth mass extinction
Research completed after the seminal 1982 paper (Sepkoski and Raup) has concluded that a sixth mass extinction event, due to human activities, is currently under way.
Extinctions by severity
Extinction events can be tracked by several methods, including geological change, ecological impact, extinction vs. origination (speciation) rates, and, most commonly, diversity loss among taxonomic units. Most early papers used families as the unit of taxonomy, based on compendiums of marine animal families by Sepkoski (1982, 1992). Later papers by Sepkoski and other authors switched to genera, which are more precise than families and less prone to taxonomic bias or incomplete sampling relative to species. Several major papers have estimated loss or ecological impact from fifteen commonly discussed extinction events, including the "Big Five"; the different methods used by these papers are described in the following section.
The study of major extinction events
Breakthrough studies in the 1980s–1990s
For much of the 20th century, the study of mass extinctions was hampered by insufficient data. Mass extinctions, though acknowledged, were considered mysterious exceptions to the prevailing gradualistic view of prehistory, where slow evolutionary trends define faunal changes. The first breakthrough was published in 1980 by a team led by Luis Alvarez, who discovered trace metal evidence for an asteroid impact at the end of the Cretaceous period. The Alvarez hypothesis for the end-Cretaceous extinction gave mass extinctions, and catastrophic explanations, newfound popular and scientific attention.
Another landmark study came in 1982, when a paper written by David M. Raup and Jack Sepkoski was published in the journal Science. This paper, originating from a compendium of extinct marine animal families developed by Sepkoski, identified five peaks of marine family extinctions which stand out against a backdrop of decreasing extinction rates through time. Four of these peaks were statistically significant: the Ashgillian (end-Ordovician), Late Permian, Norian (end-Triassic), and Maastrichtian (end-Cretaceous). The remaining peak was a broad interval of high extinction smeared over the latter half of the Devonian, with its apex in the Frasnian stage.
Through the 1980s, Raup and Sepkoski continued to elaborate and build upon their extinction and origination data, defining a high-resolution biodiversity curve (the "Sepkoski curve") and successive evolutionary faunas with their own patterns of diversification and extinction. Though these interpretations formed a strong basis for subsequent studies of mass extinctions, Raup and Sepkoski also proposed a more controversial idea in 1984: a 26-million-year periodic pattern to mass extinctions. Two teams of astronomers linked this to a hypothetical brown dwarf in the distant reaches of the solar system, inventing the "Nemesis hypothesis" which has been strongly disputed by other astronomers.
Around the same time, Sepkoski began to devise a compendium of marine animal genera, which would allow researchers to explore extinction at a finer taxonomic resolution. He began to publish preliminary results of this in-progress study as early as 1986, in a paper which identified 29 extinction intervals of note. By 1992, he also updated his 1982 family compendium, finding minimal changes to the diversity curve despite a decade of new data. In 1996, Sepkoski published another paper which tracked marine genera extinction (in terms of net diversity loss) by stage, similar to his previous work on family extinctions. The paper filtered its sample in three ways: all genera (the entire unfiltered sample size), multiple-interval genera (only those found in more than one stage), and "well-preserved" genera (excluding those from groups with poor or understudied fossil records). Diversity trends in marine animal families were also revised based on his 1992 update.
Revived interest in mass extinctions led many other authors to re-evaluate geological events in the context of their effects on life. A 1995 paper by Michael Benton tracked extinction and origination rates among both marine and continental (freshwater & terrestrial) families, identifying 22 extinction intervals and no periodic pattern. Overview books by O.H. Walliser (1996) and A. Hallam and P.B. Wignall (1997) summarized the new extinction research of the previous two decades. One chapter in the former source lists over 60 geological events which could conceivably be considered global extinctions of varying sizes. These texts, and other widely circulated publications in the 1990s, helped to establish the popular image of mass extinctions as a "big five" alongside many smaller extinctions through prehistory.
New data on genera: Sepkoski's compendium
Though Sepkoski died in 1999, his marine genera compendium was formally published in 2002. This prompted a new wave of studies into the dynamics of mass extinctions. These papers utilized the compendium to track origination rates (the rate that new species appear or speciate) parallel to extinction rates in the context of geological stages or substages. A review and re-analysis of Sepkoski's data by Bambach (2006) identified 18 distinct mass extinction intervals, including 4 large extinctions in the Cambrian. These fit Sepkoski's definition of extinction, as short substages with large diversity loss and overall high extinction rates relative to their surroundings.
Bambach et al. (2004) considered each of the "Big Five" extinction intervals to have a different pattern in the relationship between origination and extinction trends. Moreover, background extinction rates were broadly variable and could be separated into more severe and less severe time intervals. Background extinctions were least severe relative to the origination rate in the middle Ordovician-early Silurian, late Carboniferous-Permian, and Jurassic-recent. This argues that the Late Ordovician, end-Permian, and end-Cretaceous extinctions were statistically significant outliers in biodiversity trends, while the Late Devonian and end-Triassic extinctions occurred in time periods which were already stressed by relatively high extinction and low origination.
Computer models run by Foote (2005) determined that abrupt pulses of extinction fit the pattern of prehistoric biodiversity much better than a gradual and continuous background extinction rate with smooth peaks and troughs. This strongly supports the utility of rapid, frequent mass extinctions as a major driver of diversity changes. Pulsed origination events are also supported, though to a lesser degree which is largely dependent on pulsed extinctions.
Similarly, Stanley (2007) used extinction and origination data to investigate turnover rates and extinction responses among different evolutionary faunas and taxonomic groups. In contrast to previous authors, his diversity simulations show support for an overall exponential rate of biodiversity growth through the entire Phanerozoic.
Tackling biases in the fossil record
As data continued to accumulate, some authors began to re-evaluate Sepkoski's sample using methods meant to account for sampling biases. As early as 1982, a paper by Phillip W. Signor and Jere H. Lipps noted that the true sharpness of extinctions was diluted by the incompleteness of the fossil record. This phenomenon, later called the Signor-Lipps effect, notes that a species' true extinction must occur after its last fossil, and that its origination must occur before its first fossil. Thus, a species which appears to die out just prior to an abrupt extinction event may instead be a victim of that event, even though the fossil record alone suggests a gradual decline. A model by Foote (2007) found that many geological stages had artificially inflated extinction rates due to Signor-Lipps "backsmearing" from later stages with extinction events.
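A toy simulation makes the effect concrete: even if every species truly dies at the same boundary, patchy preservation drags apparent last occurrences earlier. All numbers below are invented for illustration:

```python
import random

def apparent_last_occurrences(n_species=50, extinction_time=100, find_prob=0.2):
    """Every species here truly survives to extinction_time, but each
    potential fossil (one per time step) is recovered only with
    probability find_prob, so recorded last occurrences 'backsmear'
    before the boundary, mimicking a gradual decline."""
    lasts = []
    for _ in range(n_species):
        finds = [t for t in range(extinction_time) if random.random() < find_prob]
        if finds:
            lasts.append(max(finds))
    return sorted(lasts)

print(apparent_last_occurrences()[:10])  # earliest apparent "extinctions"
```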
Other biases include the difficulty in assessing taxa with high turnover rates or restricted occurrences, which cannot be directly assessed due to a lack of fine-scale temporal resolution. Many paleontologists opt to assess diversity trends by randomized sampling and rarefaction of fossil abundances rather than raw temporal range data, in order to account for all of these biases. But that solution is influenced by biases related to sample size. One major bias in particular is the "Pull of the recent", the fact that the fossil record (and thus known diversity) generally improves closer to the modern day. This means that biodiversity and abundance for older geological periods may be underestimated from raw data alone.
Alroy (2010) attempted to circumvent sample-size-related biases in diversity estimates using a method he called "shareholder quorum subsampling" (SQS). In this method, fossils are sampled from a "collection" (such as a time interval) to assess the relative diversity of that collection. Every time a new species (or other taxon) enters the sample, it brings over all other fossils belonging to that species in the collection (its "share" of the collection). For example, a skewed collection with half its fossils from one species will immediately reach a sample share of 50% if that species is the first to be sampled. This continues, adding up the sample shares until a "coverage" or "quorum" is reached, referring to a pre-set desired sum of share percentages. At that point, the number of species in the sample is counted. A collection with more species is expected to reach a sample quorum with more species, thus accurately comparing the relative diversity change between two collections without relying on the biases inherent to sample size.
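A simplified sketch of the procedure as described above (this is our reading of the method, not Alroy's published code, which includes further corrections):

```python
import random
from collections import Counter

def sqs_richness(occurrences, quorum=0.6, trials=200):
    """Shareholder quorum subsampling, simplified. `occurrences` is a
    list of species names, one entry per fossil in the collection."""
    counts = Counter(occurrences)
    total = sum(counts.values())
    share = {sp: n / total for sp, n in counts.items()}
    richness = []
    for _ in range(trials):
        pool = list(occurrences)
        random.shuffle(pool)
        seen, coverage = set(), 0.0
        for fossil in pool:
            if fossil not in seen:
                seen.add(fossil)           # a new species enters the sample...
                coverage += share[fossil]  # ...bringing its whole "share"
                if coverage >= quorum:
                    break
        richness.append(len(seen))
    return sum(richness) / len(richness)   # mean species count at quorum
```

Comparing two collections at the same quorum then compares their relative diversity without the raw sample-size bias.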
Alroy also elaborated on three-timer algorithms, which are meant to counteract biases in estimates of extinction and origination rates. A given taxon is a "three-timer" if it can be found before, after, and within a given time interval, and a "two-timer" if it overlaps with a time interval on one side. Counting "three-timers" and "two-timers" on either end of a time interval, and sampling time intervals in sequence, can together be combined into equations to predict extinction and origination with less bias. In subsequent papers, Alroy continued to refine his equations to improve lingering issues with precision and unusual samples.
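The counting step lends itself to a direct sketch (our reading of the definitions in the text, taking "before" and "after" as the adjacent intervals; Alroy's actual rate equations are not reproduced here):

```python
def three_and_two_timers(ranges, t):
    """Count three-timers and two-timers for interval t. `ranges` maps
    each taxon to the set of interval indices in which it is sampled."""
    three = two = 0
    for intervals in ranges.values():
        if t not in intervals:
            continue
        before, after = (t - 1) in intervals, (t + 1) in intervals
        if before and after:
            three += 1
        elif before or after:
            two += 1
    return three, two

# Hypothetical taxa sampled in numbered stages:
ranges = {"A": {1, 2, 3}, "B": {2, 3}, "C": {2}, "D": {1, 2}}
print(three_and_two_timers(ranges, 2))  # -> (1, 2)
```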
McGhee et al. (2013), a paper which primarily focused on ecological effects of mass extinctions, also published new estimates of extinction severity based on Alroy's methods. Many extinctions were significantly more impactful under these new estimates, though some were less prominent.
Stanley (2016) was another paper which attempted to remove two common errors in previous estimates of extinction severity. The first error was the unjustified removal of "singletons", genera unique to only a single time slice. Their removal would mask the influence of groups with high turnover rates or lineages cut short early in their diversification. The second error was the difficulty in distinguishing background extinctions from brief mass extinction events within the same short time interval. To circumvent this issue, background rates of diversity change (extinction/origination) were estimated for stages or substages without mass extinctions, and then assumed to apply to subsequent stages with mass extinctions. For example, the Santonian and Campanian stages were each used to estimate diversity changes in the Maastrichtian prior to the K-Pg mass extinction. Subtracting background extinctions from extinction tallies had the effect of reducing the estimated severity of the six sampled mass extinction events. This effect was stronger for mass extinctions which occurred in periods with high rates of background extinction, like the Devonian.
Uncertainty in the Proterozoic and earlier eons
Because most diversity and biomass on Earth is microbial, and thus difficult to measure via fossils, the extinction events on record are those that affected the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. For this reason, well-documented extinction events are confined to the Phanerozoic eon – with the sole exception of the Oxygen Catastrophe in the Proterozoic – since before the Phanerozoic all living organisms were either microbial or, if multicellular, then soft-bodied. Perhaps because of the absence of a robust microbial fossil record, mass extinctions may only seem to be mainly a Phanerozoic phenomenon, with observable extinction rates merely appearing low before large complex organisms with hard body parts arose.
Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years.
The Oxygen Catastrophe, which occurred around 2.45 billion years ago in the Paleoproterozoic, is plausible as the first-ever major extinction event. It was perhaps also the worst ever, in some sense; but the Earth's ecology just before that time is so poorly understood, and the concept of prokaryote genera so different from genera of complex life, that it would be difficult to meaningfully compare it to any of the "Big Five" even if Paleoproterozoic life were better known.
Since the Cambrian explosion, five further major mass extinctions have significantly exceeded the background extinction rate. The most recent and best-known, the Cretaceous–Paleogene extinction event, which occurred approximately 66 Ma (million years ago), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major Phanerozoic mass extinctions, there are numerous lesser ones, and the ongoing mass extinction caused by human activity is sometimes called the sixth mass extinction.
Evolutionary importance
Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old but usually because an extinction event eliminates the old, dominant group and makes way for the new one, a process known as adaptive radiation.
For example, mammaliaformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans. Similarly, within Synapsida, the replacement of taxa that originated in the earliest (Pennsylvanian and Cisuralian) evolutionary radiation – often still called "pelycosaurs", though this is a paraphyletic group – by therapsids occurred around the Kungurian/Roadian transition, often called Olson's extinction (possibly a slow decline over 20 Ma rather than a dramatic, brief event).
Another point of view put forward in the Escalation hypothesis predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event.
Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity, and many of these go into long-term decline; these are often referred to as "Dead Clades Walking".
However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past".
Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the 'struggle for existence' – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species:
"Species are produced and exterminated by slowly acting causes ... and the most import of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others".
Patterns in frequency
Various authors have suggested that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years. Various ideas, mostly regarding astronomical influences, attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious or lacking statistical significance. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables such as strontium isotopes, flood basalts, anoxic events, orogenies, and evaporite deposition. One explanation for this proposed cycle is carbon storage and release by oceanic crust, which exchanges carbon between the atmosphere and mantle.
Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time.
It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions,
but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable.
Causes
There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed.
Identifying causes of specific mass extinctions
A good theory for a particular mass extinction should:
explain all of the losses, not just focus on a few groups (such as dinosaurs);
explain why particular groups of organisms died out and why others survived;
provide mechanisms that are strong enough to cause a mass extinction but not a total extinction;
be based on events or processes that can be shown to have happened, not just inferred from the extinction.
It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world.
Arens and West (2006) proposed a "press / pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the ecosystem ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure.
Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate.
Most widely supported explanations
MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. (1996), Hallam (1992) and Grieve & Pesonen (1992):
Flood basalt events (giant volcanic eruptions): 11 occurrences, all associated with significant extinctions. However, Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions.
Sea-level falls: 12, of which seven were associated with significant extinctions.
Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions, or cannot be dated precisely enough. The impact that created the Siljan Ring either was just before the Late Devonian Extinction or coincided with it.
The most commonly suggested causes of mass extinctions are listed below.
Flood basalt events
The formation of large igneous provinces by flood basalt events could have:
produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea
emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains
emitted carbon dioxide, possibly causing sustained global warming once the dust and particulate aerosols dissipated.
Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years.
Flood basalt events have been implicated as the cause of many major extinction events. It is speculated that massive volcanism caused or contributed to the Kellwasser Event, the End-Guadalupian Extinction Event, the End-Permian Extinction Event, the Smithian-Spathian Extinction, the Triassic-Jurassic Extinction Event, the Toarcian Oceanic Anoxic Event, the Cenomanian-Turonian Oceanic Anoxic Event, the Cretaceous-Palaeogene Extinction Event, and the Palaeocene-Eocene Thermal Maximum. The correlation between gigantic volcanic events expressed in the large igneous provinces and mass extinctions was shown for the last 260 million years. More recently, this possible correlation was extended across the whole Phanerozoic Eon.
Sea-level fall
These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges.
Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five"—End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous, along with the more recently recognised Capitanian mass extinction of comparable severity to the Big Five.
A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans.
Extraterrestrial threats
Impact events
The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires.
Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is lingering dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Cretaceous Chicxulub asteroid impact that resulted in the extinction of non-avian dinosaurs 66 Ma also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction.
The Permian-Triassic extinction event has also been hypothesised to have been caused by an asteroid impact that formed the Araguainha crater due to the estimated date of the crater's formation overlapping with the end-Permian extinction event. However, this hypothesis has been widely challenged, with the impact hypothesis being rejected by most researchers.
According to the Shiva hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27 million year intervals. Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher density spiral arms of the galaxy could coincide with mass extinction on Earth, perhaps due to increased impact events. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on maps of the spiral structure of the Milky Way in CO molecular line emission has failed to find a correlation.
A nearby nova, supernova or gamma ray burst
A nearby gamma-ray burst (less than 6000 light-years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the Sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years.
It has been suggested that a gamma-ray burst caused the End-Ordovician extinction, while a supernova has been proposed as the cause of the Hangenberg event. A supernova within 25 light-years would strip Earth of its atmosphere, but no star currently in the Solar System's neighbourhood is capable of producing a supernova that would endanger life on Earth.
Global cooling
Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction.
It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts.
Global warming
This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below).
Global warming as a cause of mass extinction is supported by several recent studies.
The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of all marine families became extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming.
Clathrate gun hypothesis
Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly – for example in response to sudden global warming or a sudden drop in sea level or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming.
The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13.
It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions.
Anoxic events
Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism.
It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Capitanian, Permian–Triassic, and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Lundgreni, Mulde, Lau, Smithian-Spathian, Toarcian, and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous that indicate anoxic events but are not associated with mass extinctions.
A drop in the bio-availability of essential trace elements (in particular selenium) to potentially lethal lows has been shown to coincide with, and likely have contributed to, at least three mass extinction events in the oceans: at the end of the Ordovician, during the Middle and Late Devonian, and at the end of the Triassic. During periods of low oxygen concentrations, very soluble selenate (Se6+) is converted into much less soluble selenide (Se2−), elemental Se and organo-selenium complexes. Bio-availability of selenium during these extinction events dropped to about 1% of the current oceanic concentration, a level that has been proven lethal to many extant organisms.
British oceanologist and atmospheric scientist, Andrew Watson, explained that, while the Holocene epoch exhibits many processes reminiscent of those that have contributed to past anoxic events, full-scale ocean anoxia would take "thousands of years to develop".
Hydrogen sulfide emissions from the seas
Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide, which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation.
Oceanic overturn
Oceanic overturn is a disruption of thermohaline circulation that lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms that inhabit the surface and middle depths. It may occur either at the beginning or the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water.
Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events.
It has been suggested that oceanic overturn caused or contributed to the late Devonian and Permian–Triassic extinctions.
Geomagnetic reversal
One theory is that periods of increased geomagnetic reversals would weaken Earth's magnetic field long enough to expose the atmosphere to the solar wind, causing oxygen ions to escape the atmosphere at a rate increased by three to four orders of magnitude, resulting in a disastrous decrease in oxygen.
Plate tectonics
Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges that expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent that includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior that may have extreme seasonal variations.
Another theory is that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and the "Marine genus diversity" diagram at the top of this article shows a level of extinction starting at that time, which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian.
Other hypotheses
Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms that are contrary to the available evidence; they are based on other theories that have been rejected or superseded.
Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with human-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss. A study published in May 2017 in Proceedings of the National Academy of Sciences argued that a "biological annihilation" akin to a sixth mass extinction event is underway as a result of anthropogenic causes, such as over-population and over-consumption. The study suggested that as much as 50% of the number of animal individuals that once lived on Earth were already extinct, threatening the basis for human existence too.
Future biosphere extinction/sterilization
The eventual warming and expanding of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could actually cause an even greater mass extinction, having the potential to wipe out even microbes (in other words, the Earth would be completely sterilized): rising global temperatures caused by the expanding Sun would gradually increase the rate of weathering, which would in turn remove more and more CO2 from the atmosphere. When CO2 levels get too low (perhaps at 50 ppm), most plant life will die out, although simpler plants like grasses and mosses can survive much longer, until levels drop to 10 ppm.
With all photosynthetic organisms gone, atmospheric oxygen can no longer be replenished, and it is eventually removed by chemical reactions in the atmosphere, perhaps from volcanic eruptions. Eventually the loss of oxygen will cause all remaining aerobic life to die out via asphyxiation, leaving behind only simple anaerobic prokaryotes. When the Sun becomes 10% brighter in about a billion years, Earth will suffer a moist greenhouse effect resulting in its oceans boiling away, while the Earth's liquid outer core cools due to the inner core's expansion and causes the Earth's magnetic field to shut down. In the absence of a magnetic field, charged particles from the Sun will deplete the atmosphere and further increase the Earth's temperature to an average of around 420 K (147 °C, 296 °F) in 2.8 billion years, causing the last remaining life on Earth to die out. This is the most extreme instance of a climate-caused extinction event. Since this will only happen late in the Sun's life, it would represent the final mass extinction in Earth's history (albeit a very long extinction event).
Effects and recovery
The effects of mass extinction events varied widely. After a major extinction event, usually only weedy species survive due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, it takes millions of years for biodiversity to recover after extinction events. In the most severe mass extinctions it may take 15 to 30 million years.
The worst Phanerozoic event, the Permian–Triassic extinction, devastated life on Earth, killing over 90% of species. Life seemed to recover quickly after the P-T extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction that inhibited recovery, as well as prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction;
and some writers estimate that the recovery was not complete until 30 million years after the P-T extinction, that is, in the late Triassic. Subsequent to the P-T extinction, there was an increase in provincialization, with species occupying smaller ranges – perhaps removing incumbents from niches and setting the stage for an eventual rediversification.
The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora.
In media
The term extinction level event (ELE) has been used in media. The 1998 film Deep Impact describes a potential comet strike of Earth as an E.L.E.
| Physical sciences | Geological history | null |
9837 | https://en.wikipedia.org/wiki/Ethylene | Ethylene | Ethylene (IUPAC name: ethene) is a hydrocarbon which has the formula C2H4 or H2C=CH2. It is a colourless, flammable gas with a faint "sweet and musky" odour when pure. It is the simplest alkene (a hydrocarbon with carbon–carbon double bonds).
Ethylene is widely used in the chemical industry, and its worldwide production (over 150 million tonnes in 2016) exceeds that of any other organic compound. Much of this production goes toward creating polythene, which is a widely used plastic containing polymer chains of ethylene units in various chain lengths. Production emits greenhouse gases, including methane from feedstock production and carbon dioxide from any non-sustainable energy used.
Ethylene is also an important natural plant hormone and is used in agriculture to induce ripening of fruits. The hydrate of ethylene is ethanol.
Structure and properties
This hydrocarbon has four hydrogen atoms bound to a pair of carbon atoms that are connected by a double bond. All six atoms that comprise ethylene are coplanar. The H-C-H angle is 117.4°, close to the 120° for ideal sp² hybridized carbon. The molecule is also relatively weak: rotation about the C-C bond is a very low energy process that requires breaking the π-bond by supplying heat at 50 °C.
The π-bond in the ethylene molecule is responsible for its useful reactivity. The double bond is a region of high electron density, thus it is susceptible to attack by electrophiles. Many reactions of ethylene are catalyzed by transition metals, which bind transiently to the ethylene using both the π and π* orbitals.
Being a simple molecule, ethylene is spectroscopically simple. Its UV-vis spectrum is still used as a test of theoretical methods.
Uses
Major industrial reactions of ethylene include, in order of scale: 1) polymerization, 2) oxidation, 3) halogenation and hydrohalogenation, 4) alkylation, 5) hydration, 6) oligomerization, and 7) hydroformylation. In the United States and Europe, approximately 90% of ethylene is used to produce ethylene oxide, ethylene dichloride, ethylbenzene and polyethylene. Most of the reactions with ethylene are electrophilic addition.
Polymerization
Polyethylene production uses more than half of the world's ethylene supply. Polyethylene, also called polyethene and polythene, is the world's most widely used plastic. It is primarily used to make films in packaging, carrier bags and trash liners. Linear alpha-olefins, produced by oligomerization (formation of short-chain molecules) are used as precursors, detergents, plasticisers, synthetic lubricants, additives, and also as co-monomers in the production of polyethylenes.
Oxidation
Ethylene is oxidized to produce ethylene oxide, a key raw material in the production of surfactants and detergents by ethoxylation. Ethylene oxide is also hydrolyzed to produce ethylene glycol, widely used as an automotive antifreeze as well as higher molecular weight glycols, glycol ethers, and polyethylene terephthalate.
Ethylene oxidation in the presence of a palladium catalyst can form acetaldehyde. This conversion remains a major industrial process (10M kg/y). The process proceeds via the initial complexation of ethylene to a Pd(II) center.
Halogenation and hydrohalogenation
Major intermediates from the halogenation and hydrohalogenation of ethylene include ethylene dichloride, ethyl chloride, and ethylene dibromide. The addition of chlorine entails "oxychlorination", i.e. chlorine itself is not used. Some products derived from this group are polyvinyl chloride, trichloroethylene, perchloroethylene, methyl chloroform, polyvinylidene chloride and copolymers, and ethyl bromide.
Alkylation
The major chemical intermediate from the alkylation with ethylene is ethylbenzene, precursor to styrene. Styrene is used principally in polystyrene for packaging and insulation, as well as in styrene-butadiene rubber for tires and footwear. On a smaller scale, ethylene is used to make ethyltoluene, ethylanilines, 1,4-hexadiene, and aluminium alkyls. Products of these intermediates include polystyrene, unsaturated polyesters and ethylene-propylene terpolymers.
Oxo reaction
The hydroformylation (oxo reaction) of ethylene results in propionaldehyde, a precursor to propionic acid and n-propyl alcohol.
Hydration
Ethylene has long represented the major nonfermentative precursor to ethanol. The original method entailed its conversion to diethyl sulfate, followed by hydrolysis. The main method practiced since the mid-1990s is the direct hydration of ethylene catalyzed by solid acid catalysts:
C2H4 + H2O → CH3CH2OH
Dimerization to butenes
Ethylene is dimerized by hydrovinylation to give n-butenes using processes licensed by Lummus or IFP. The Lummus process produces mixed n-butenes (primarily 2-butenes) while the IFP process produces 1-butene. 1-Butene is used as a comonomer in the production of certain kinds of polyethylene.
Fruit and flowering
Ethylene is a hormone that affects the ripening and flowering of many plants. It is widely used to control freshness in horticulture and fruits. The scrubbing of naturally occurring ethylene delays ripening. Adsorption of ethylene by nets coated in titanium dioxide gel has also been shown to be effective.
Niche uses
An example of a niche use is as an anesthetic agent (in an 85% ethylene/15% oxygen ratio). Another use is as a welding gas. It is also used as a refrigerant gas for low temperature applications under the name R-1150.
Production
Global ethylene production was 107 million tonnes in 2005, 109 million tonnes in 2006, 138 million tonnes in 2010, and 141 million tonnes in 2011. By 2013, ethylene was produced by at least 117 companies in 32 countries. To meet the ever-increasing demand for ethylene, production facilities are being added rapidly worldwide, particularly in the Middle East and in China. Production emits significant amounts of the greenhouse gas carbon dioxide.
Industrial process
Ethylene is produced by several methods in the petrochemical industry. A primary method is steam cracking (SC) where hydrocarbons and steam are heated to 750–950 °C. This process converts large hydrocarbons into smaller ones and introduces unsaturation. When ethane is the feedstock, ethylene is the product. Ethylene is separated from the resulting mixture by repeated compression and distillation. In Europe and Asia, ethylene is obtained mainly from cracking naphtha, gasoil and condensates with the coproduction of propylene, C4 olefins and aromatics (pyrolysis gasoline). Other technologies employed for the production of ethylene include Fischer-Tropsch synthesis and methanol-to-olefins (MTO).
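The overall conversion for an ethane feedstock can be summarized by the following net equation (a simplification of the underlying radical cracking chemistry):
C2H6 → C2H4 + H2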
Laboratory synthesis
Although of great value industrially, ethylene is rarely synthesized in the laboratory and is ordinarily purchased. It can be produced via dehydration of ethanol with sulfuric acid or in the gas phase with aluminium oxide or activated alumina.
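The acid-catalyzed dehydration, written as a simplified net equation, is:
CH3CH2OH → C2H4 + H2O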
Biosynthesis
Ethylene is produced from methionine in nature. The immediate precursor is 1-aminocyclopropane-1-carboxylic acid.
Ligand
Ethylene is a fundamental ligand in transition metal alkene complexes. One of the first organometallic compounds, Zeise's salt is a complex of ethylene. Useful reagents containing ethylene include Pt(PPh3)2(C2H4) and Rh2Cl2(C2H4)4. The Rh-catalysed hydroformylation of ethylene is conducted on an industrial scale to provide propionaldehyde.
History
Some geologists and scholars believe that the famous Greek Oracle at Delphi (the Pythia) went into her trance-like state as an effect of ethylene rising from ground faults.
Ethylene appears to have been discovered by Johann Joachim Becher, who obtained it by heating ethanol with sulfuric acid; he mentioned the gas in his Physica Subterranea (1669). Joseph Priestley also mentions the gas in his Experiments and observations relating to the various branches of natural philosophy: with a continuation of the observations on air (1779), where he reports that Jan Ingenhousz saw ethylene synthesized in the same way by a Mr. Enée in Amsterdam in 1777 and that Ingenhousz subsequently produced the gas himself. The properties of ethylene were studied in 1795 by four Dutch chemists, Johann Rudolph Deimann, Adrien Paets van Troostwyck, Anthoni Lauwerenburgh and Nicolas Bondt, who found that it differed from hydrogen gas and that it contained both carbon and hydrogen. This group also discovered that ethylene could be combined with chlorine to produce the Dutch oil, 1,2-dichloroethane; this discovery gave ethylene the name used for it at that time, olefiant gas (oil-making gas). The term olefiant gas is in turn the etymological origin of the modern word "olefin", the class of hydrocarbons in which ethylene is the first member.
In the mid-19th century, the suffix -ene (an Ancient Greek root added to the end of female names meaning "daughter of") was widely used to refer to a molecule or part thereof that contained one fewer hydrogen atom than the molecule being modified. Thus, ethylene (C2H4) was the "daughter of ethyl" (C2H5). The name ethylene was used in this sense as early as 1852.
In 1866, the German chemist August Wilhelm von Hofmann proposed a system of hydrocarbon nomenclature in which the suffixes -ane, -ene, -ine, -one, and -une were used to denote the hydrocarbons with 0, 2, 4, 6, and 8 fewer hydrogens than their parent alkane. In this system, ethylene became ethene. Hofmann's system eventually became the basis for the Geneva nomenclature approved by the International Congress of Chemists in 1892, which remains at the core of the IUPAC nomenclature. However, by that time, the name ethylene was deeply entrenched, and it remains in wide use today, especially in the chemical industry.
Following experimentation by Luckhardt, Crocker, and Carter at the University of Chicago, ethylene was used as an anesthetic. It remained in use through the 1940s, even as chloroform was being phased out. Its pungent odor and its explosive nature limit its use today.
Nomenclature
The 1979 IUPAC nomenclature rules made an exception for retaining the non-systematic name ethylene; however, this decision was reversed in the 1993 rules, and it remains unchanged in the newest 2013 recommendations, so the IUPAC name is now ethene. In the IUPAC system, the name ethylene is reserved for the divalent group -CH2CH2-. Hence, names like ethylene oxide and ethylene dibromide are permitted, but the use of the name ethylene for the two-carbon alkene is not. Nevertheless, use of the name ethylene for H2C=CH2 (and propylene for H2C=CHCH3) is still prevalent among chemists in North America.
Greenhouse gas emissions
"A key factor affecting petrochemicals life-cycle emissions is the methane intensity of feedstocks, especially in the production segment." Emissions from cracking of naptha and natural gas (common in the US as gas is cheap there) depend a lot on the source of energy (for example gas burnt to provide high temperatures) but that from naptha is certainly more per kg of feedstock. Both steam cracking and production from natural gas via ethane are estimated to emit 1.8 to 2kg of CO2 per kg ethylene produced, totalling over 260 million tonnes a year. This is more than all other manufactured chemicals except cement and ammonia. According to a 2022 report using renewable or nuclear energy could cut emissions by almost half.
Safety
Like all hydrocarbons, ethylene is a combustible asphyxiant. It is listed as an IARC group 3 agent, since there is no current evidence that it causes cancer in humans.
| Physical sciences | Hydrocarbons | null |
9845 | https://en.wikipedia.org/wiki/JavaScript | JavaScript | JavaScript (), often abbreviated as JS, is a programming language and core technology of the Web, alongside HTML and CSS. 99% of websites use JavaScript on the client side for webpage behavior.
Web browsers have a dedicated JavaScript engine that executes the client code. These engines are also utilized in some servers and a variety of apps. The most popular runtime system for non-browser usage is Node.js.
JavaScript is a high-level, often just-in-time compiled language that conforms to the ECMAScript standard. It has dynamic typing, prototype-based object-orientation, and first-class functions. It is multi-paradigm, supporting event-driven, functional, and imperative programming styles. It has application programming interfaces (APIs) for working with text, dates, regular expressions, standard data structures, and the Document Object Model (DOM).
The ECMAScript standard does not include any input/output (I/O), such as networking, storage, or graphics facilities. In practice, the web browser or other runtime system provides JavaScript APIs for I/O.
Although Java and JavaScript are similar in name and syntax, the two languages are distinct and differ greatly in design.
History
Creation at Netscape
The first popular web browser with a graphical user interface, Mosaic, was released in 1993. Accessible to non-technical people, it played a prominent role in the rapid growth of the early World Wide Web. The lead developers of Mosaic then founded the Netscape corporation, which released a more polished browser, Netscape Navigator, in 1994. Navigator quickly became the most-used browser.
During these formative years of the Web, web pages could only be static, lacking the capability for dynamic behavior after the page was loaded in the browser. There was a desire in the flourishing web development scene to remove this limitation, so in 1995, Netscape decided to add a programming language to Navigator. They pursued two routes to achieve this: collaborating with Sun Microsystems to embed the Java language, while also hiring Brendan Eich to embed the Scheme language.
The goal was a "language for the masses", "to help nonprogrammers create dynamic, interactive Web sites". Netscape management soon decided that the best option was for Eich to devise a new language, with syntax similar to Java and less like Scheme or other extant scripting languages. Although the new language and its interpreter implementation were called LiveScript when first shipped as part of a Navigator beta in September 1995, the name was changed to JavaScript for the official release in December.
The choice of the JavaScript name has caused confusion, implying that it is directly related to Java. At the time, the dot-com boom had begun and Java was a popular new language, so Eich considered the JavaScript name a marketing ploy by Netscape.
Adoption by Microsoft
Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. On the JavaScript front, Microsoft created its own interpreter called JScript.
Microsoft first released JScript in 1996, alongside initial support for CSS and extensions to HTML. Each of these implementations was noticeably different from their counterparts in Netscape Navigator. These differences made it difficult for developers to make their websites work well in both browsers, leading to widespread use of "best viewed in Netscape" and "best viewed in Internet Explorer" logos for several years.
The rise of JScript
In November 1996, Netscape submitted JavaScript to Ecma International, as the starting point for a standard specification that all browser vendors could conform to. This led to the official release of the first ECMAScript language specification in June 1997.
The standards process continued for a few years, with the release of ECMAScript 2 in June 1998 and ECMAScript 3 in December 1999. Work on ECMAScript 4 began in 2000.
However, the effort to fully standardize the language was undermined by Microsoft gaining an increasingly dominant position in the browser market. By the early 2000s, Internet Explorer's market share reached 95%. This meant that JScript became the de facto standard for client-side scripting on the Web.
Microsoft initially participated in the standards process and implemented some proposals in its JScript language, but eventually it stopped collaborating on ECMA work. Thus ECMAScript 4 was mothballed.
Growth and standardization
During the period of Internet Explorer dominance in the early 2000s, client-side scripting was stagnant. This started to change in 2004, when the successor of Netscape, Mozilla, released the Firefox browser. Firefox was well received by many, taking significant market share from Internet Explorer.
In 2005, Mozilla joined ECMA International, and work started on the ECMAScript for XML (E4X) standard. This led to Mozilla working jointly with Macromedia (later acquired by Adobe Systems), who were implementing E4X in their ActionScript 3 language, which was based on an ECMAScript 4 draft. The goal became standardizing ActionScript 3 as the new ECMAScript 4. To this end, Adobe Systems released the Tamarin implementation as an open source project. However, Tamarin and ActionScript 3 were too different from established client-side scripting, and without cooperation from Microsoft, ECMAScript 4 never reached fruition.
Meanwhile, very important developments were occurring in open-source communities not affiliated with ECMA work. In 2005, Jesse James Garrett released a white paper in which he coined the term Ajax and described a set of technologies, of which JavaScript was the backbone, to create web applications where data can be loaded in the background, avoiding the need for full page reloads. This sparked a renaissance period of JavaScript, spearheaded by open-source libraries and the communities that formed around them. Many new libraries were created, including jQuery, Prototype, Dojo Toolkit, and MooTools.
Google debuted its Chrome browser in 2008, with the V8 JavaScript engine that was faster than its competition. The key innovation was just-in-time compilation (JIT), so other browser vendors needed to overhaul their engines for JIT.
In July 2008, these disparate parties came together for a conference in Oslo. This led to the eventual agreement in early 2009 to combine all relevant work and drive the language forward. The result was the ECMAScript 5 standard, released in December 2009.
Reaching maturity
Ambitious work on the language continued for several years, culminating in an extensive collection of additions and refinements being formalized with the publication of ECMAScript 6 in 2015.
The creation of Node.js in 2009 by Ryan Dahl sparked a significant increase in the usage of JavaScript outside of web browsers. Node combines the V8 engine, an event loop, and I/O APIs, thereby providing a stand-alone JavaScript runtime system. As of 2018, Node had been used by millions of developers, and npm had the most modules of any package manager in the world.
The ECMAScript draft specification is currently maintained openly on GitHub, and editions are produced via regular annual snapshots. Potential revisions to the language are vetted through a comprehensive proposal process. Now, instead of edition numbers, developers check the status of upcoming features individually.
The current JavaScript ecosystem has many libraries and frameworks, established programming practices, and substantial usage of JavaScript outside of web browsers. Plus, with the rise of single-page applications and other JavaScript-heavy websites, several transpilers have been created to aid the development process.
Trademark
"JavaScript" is a trademark of Oracle Corporation in the United States. The trademark was originally issued to Sun Microsystems on 6 May 1997, and was transferred to Oracle when they acquired Sun in 2009.
A letter was circulated in September 2024, spearheaded by Ryan Dahl, calling on Oracle to free the JavaScript trademark. Brendan Eich, the original creator of JavaScript, was among the over 14,000 signatories who supported the initiative.
Website client-side usage
JavaScript is the dominant client-side scripting language of the Web, with 99% of all websites using it for this purpose. Scripts are embedded in or included from HTML documents and interact with the DOM.
All major web browsers have a built-in JavaScript engine that executes the code on the user's device.
Examples of scripted behavior
Loading new web page content without reloading the page, via Ajax or a WebSocket. For example, users of social media can send and receive messages without leaving the current page.
Web page animations, such as fading objects in and out, resizing, and moving them.
Playing browser games.
Controlling the playback of streaming media.
Generating pop-up ads or alert boxes.
Validating input values of a web form before the data is sent to a web server.
Logging data about the user's behavior and sending it to a server. The website owner can use this data for analytics, ad tracking, and personalization.
Redirecting a user to another page.
Storing and retrieving data on the user's device, via the storage or IndexedDB standards.
Libraries and frameworks
Over 80% of websites use a third-party JavaScript library or web framework as part of their client-side scripting.
jQuery is by far the most-used. Other notable ones include Angular, Bootstrap, Lodash, Modernizr, React, Underscore, and Vue. Multiple options can be used in conjunction, such as jQuery and Bootstrap.
However, the term "Vanilla JS" was coined for websites not using any libraries or frameworks at all, instead relying entirely on standard JavaScript functionality.
Other usage
The use of JavaScript has expanded beyond its web browser roots. JavaScript engines are now embedded in a variety of other software systems, both for server-side website deployments and non-browser applications.
Initial attempts at promoting server-side JavaScript usage were Netscape Enterprise Server and Microsoft's Internet Information Services, but they were small niches. Server-side usage eventually started to grow in the late 2000s, with the creation of Node.js and other approaches.
Electron, Cordova, React Native, and other application frameworks have been used to create many applications with behavior implemented in JavaScript. Other non-browser applications include Adobe Acrobat support for scripting PDF documents and GNOME Shell extensions written in JavaScript.
JavaScript has been used in some embedded systems, usually by leveraging Node.js.
Execution
JavaScript engine
Runtime system
A JavaScript engine must be embedded within a runtime system (such as a web browser or a standalone system) to enable scripts to interact with the broader environment. The runtime system includes the necessary APIs for input/output operations, such as networking, storage, and graphics, and provides the ability to import scripts.
JavaScript is a single-threaded language. The runtime processes messages from a queue one at a time, and it calls a function associated with each new message, creating a call stack frame with the function's arguments and local variables. The call stack shrinks and grows based on the function's needs. When the call stack is empty upon function completion, JavaScript proceeds to the next message in the queue. This is called the event loop, described as "run to completion" because each message is fully processed before the next message is considered. However, the language's concurrency model describes the event loop as non-blocking: program I/O is performed using events and callback functions. This means, for example, that JavaScript can process a mouse click while waiting for a database query to return information.
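A minimal sketch of run-to-completion behavior, assuming a runtime that provides the standard setTimeout and console APIs:

setTimeout(() => console.log("second: runs once the call stack is empty"), 0);
console.log("first: the current message runs to completion");
// Output order is "first: ..." then "second: ...", even with a 0 ms delay,
// because the queued callback cannot run until the current message finishes.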
The notable standalone runtimes are Node.js, Deno, and Bun.
Features
The following features are common to all conforming ECMAScript implementations unless explicitly specified otherwise.
Imperative and structured
JavaScript supports much of the structured programming syntax from C (e.g., if statements, while loops, switch statements, do while loops, etc.). One partial exception is scoping: originally JavaScript only had function scoping with var; block scoping was added in ECMAScript 2015 with the keywords let and const. Like C, JavaScript makes a distinction between expressions and statements. One syntactic difference from C is automatic semicolon insertion, which allows semicolons (which terminate statements) to be omitted.
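A minimal sketch of the scoping difference, runnable in any ES2015-compliant engine:

function scopes() {
  if (true) {
    var functionScoped = 1; // visible throughout the enclosing function
    let blockScoped = 2;    // visible only inside this block
  }
  console.log(functionScoped); // 1
  // console.log(blockScoped); // would throw ReferenceError
}
scopes();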
Weakly typed
JavaScript is weakly typed, which means certain types are implicitly cast depending on the operation used.
The binary + operator casts both operands to a string unless both operands are numbers. This is because the addition operator doubles as a concatenation operator.
The binary - operator always casts both operands to a number.
Both unary operators (+, -) always cast the operand to a number. However, + always casts to Number (binary64) while - preserves BigInt (integer).
Values are cast to strings like the following:
Strings are left as-is
Numbers are converted to their string representation
Arrays have their elements cast to strings after which they are joined by commas (,)
Other objects are converted to the string [object Object] where Object is the name of the constructor of the object
Values are cast to numbers by casting to strings and then casting the strings to numbers. These processes can be modified by defining toString and valueOf functions on the prototype for string and number casting respectively.
JavaScript has received criticism for the way it implements these conversions as the complexity of the rules can be mistaken for inconsistency. For example, when adding a number to a string, the number will be cast to a string before performing concatenation, but when subtracting a number from a string, the string is cast to a number before performing subtraction.
Often also mentioned is {} + [] resulting in 0 (number). This is misleading: the {} is interpreted as an empty code block instead of an empty object, and the empty array is cast to a number by the remaining unary + operator. If the expression is wrapped in parentheses, ({} + []), the curly brackets are interpreted as an empty object and the result of the expression is "[object Object]" as expected.
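A few illustrative evaluations of the rules above (results shown in comments; assumes a console-equipped runtime):

console.log("2" + 3);     // "23"  - the number is cast to a string, then concatenated
console.log("6" - 2);     // 4     - the string is cast to a number, then subtracted
console.log(+"7");        // 7     - unary plus casts to a number
console.log([1, 2] + ""); // "1,2" - array elements are cast to strings and joined
console.log(({} + []));   // "[object Object]" - here {} parses as an empty object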
Dynamic
Typing
JavaScript is dynamically typed like most other scripting languages. A type is associated with a value rather than an expression. For example, a variable initially bound to a number may be reassigned to a string. JavaScript supports various ways to test the type of objects, including duck typing.
Run-time evaluation
JavaScript includes an eval function that can execute statements provided as strings at run-time.
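For example (eval is widely discouraged in production code):

const expression = "2 + 3 * 4";
console.log(eval(expression)); // 14 - the string is parsed and executed at run-time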
Object-orientation (prototype-based)
Prototypal inheritance in JavaScript is described by Douglas Crockford as:
In JavaScript, an object is an associative array, augmented with a prototype (see below); each key provides the name for an object property, and there are two syntactical ways to specify such a name: dot notation (obj.x = 10) and bracket notation (obj['x'] = 10). A property may be added, rebound, or deleted at run-time. Most properties of an object (and any property that belongs to an object's prototype inheritance chain) can be enumerated using a for...in loop.
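A short sketch of these operations:

const obj = {};
obj.x = 10;     // dot notation
obj["y"] = 20;  // bracket notation
delete obj.x;   // properties can be removed at run-time
for (const key in obj) {
  console.log(key, obj[key]); // logs: y 20
}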
Prototypes
JavaScript uses prototypes where many other object-oriented languages use classes for inheritance. It is possible to simulate many class-based features with prototypes in JavaScript.
Functions as object constructors
Functions double as object constructors, along with their typical role. Prefixing a function call with new will create an instance of a prototype, inheriting properties and methods from the constructor (including properties from the Object prototype). ECMAScript 5 offers the Object.create method, allowing explicit creation of an instance without automatically inheriting from the Object prototype (older environments can assign the prototype to null). The constructor's prototype property determines the object used for the new object's internal prototype. New methods can be added by modifying the prototype of the function used as a constructor. JavaScript's built-in constructors, such as Array or Object, also have prototypes that can be modified. While it is possible to modify the Object prototype, it is generally considered bad practice because most objects in JavaScript will inherit methods and properties from the Object prototype, and they may not expect the prototype to be modified.
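A minimal constructor-and-prototype sketch (the Point name is illustrative, not from the article):

function Point(x, y) {
  this.x = x;
  this.y = y;
}
// Methods placed on the prototype are shared by every instance.
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};
const p = new Point(3, 4);
console.log(p.norm()); // 5 - the method is found by walking the prototype chain
// Object.create allows choosing the prototype explicitly:
const bare = Object.create(null);
console.log(bare.toString); // undefined - nothing inherited from Object.prototype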
Functions as methods
Unlike in many object-oriented languages, in JavaScript there is no distinction between a function definition and a method definition. Rather, the distinction occurs during function calling. When a function is called as a method of an object, the function's local this keyword is bound to that object for that invocation.
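A sketch showing that the same function can be called as a method or bound explicitly:

function describe() {
  return "name: " + this.name;
}
const cat = { name: "Whiskers", describe: describe };
console.log(cat.describe());                 // "name: Whiskers" - `this` is cat
console.log(describe.call({ name: "Rex" })); // "name: Rex" - `this` bound via call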
Functional
JavaScript functions are first-class; a function is considered to be an object. As such, a function may have properties and methods, such as .call() and .bind().
Lexical closure
A nested function is a function defined within another function. It is created each time the outer function is invoked.
In addition, each nested function forms a lexical closure: the lexical scope of the outer function (including any constant, local variable, or argument value) becomes part of the internal state of each inner function object, even after execution of the outer function concludes.
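The canonical counter sketch, illustrating state captured by a closure:

function makeCounter() {
  let count = 0;      // captured by the returned function
  return function () {
    count += 1;
    return count;     // still accessible after makeCounter has returned
  };
}
const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2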
Anonymous function
JavaScript also supports anonymous functions.
Delegative
JavaScript supports implicit and explicit delegation.
Functions as roles (Traits and Mixins)
JavaScript natively supports various function-based implementations of Role patterns like Traits and Mixins. Such a function defines additional behavior by at least one method bound to the this keyword within its function body. A Role then has to be delegated explicitly via call or apply to objects that need to feature additional behavior that is not shared via the prototype chain.
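A sketch of a function-based role delegated with call (the Serializable name is illustrative):

function Serializable() {
  // Behavior is bound to `this` inside the role's body.
  this.serialize = function () {
    return JSON.stringify(this);
  };
}
const record = { id: 1 };
Serializable.call(record);       // explicitly delegate the role onto `record`
console.log(record.serialize()); // '{"id":1}' - function properties are omitted by JSON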
Object composition and inheritance
Whereas explicit function-based delegation does cover composition in JavaScript, implicit delegation already happens every time the prototype chain is walked in order to, e.g., find a method that might be related to but is not directly owned by an object. Once the method is found it gets called within this object's context. Thus inheritance in JavaScript is covered by a delegation automatism that is bound to the prototype property of constructor functions.
Miscellaneous
Zero-based numbering
JavaScript is a zero-index language.
Variadic functions
An indefinite number of parameters can be passed to a function. The function can access them through formal parameters and also through the local arguments object. Variadic functions can also be created by using the bind method.
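Both approaches, sketched (the rest-parameter form assumes ECMAScript 2015 or later):

function sum() {
  let total = 0;
  for (let i = 0; i < arguments.length; i++) {
    total += arguments[i]; // the implicit arguments object
  }
  return total;
}
console.log(sum(1, 2, 3)); // 6
// Rest parameters express the same idea declaratively:
const sum2 = (...nums) => nums.reduce((a, b) => a + b, 0);
console.log(sum2(1, 2, 3, 4)); // 10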
Array and object literals
Like in many scripting languages, arrays and objects (associative arrays in other languages) can each be created with a succinct shortcut syntax. In fact, these literals form the basis of the JSON data format.
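The literal shortcuts, and their relationship to JSON:

const numbers = [1, 2, 3];               // array literal
const person = { name: "Ada", age: 36 }; // object literal
console.log(JSON.stringify(person));     // '{"name":"Ada","age":36}'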
Regular expressions
In a manner similar to Perl, JavaScript also supports regular expressions, which provide a concise and powerful syntax for text manipulation that is more sophisticated than the built-in string functions.
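A small sketch using a regular expression literal:

const phone = /\d{3}-\d{4}/;                  // three digits, a dash, four digits
console.log("call 555-0100".match(phone)[0]); // "555-0100"
console.log(phone.test("no digits here"));    // false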
Promises and Async/await
JavaScript supports promises and Async/await for handling asynchronous operations.
Promises
A built-in Promise object provides functionality for handling promises and associating handlers with an asynchronous action's eventual result. Recently, the JavaScript specification introduced combinator methods, which allow developers to combine multiple JavaScript promises and do operations based on different scenarios. The methods introduced are: Promise.race, Promise.all, Promise.allSettled and Promise.any.
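A sketch of two of the combinators (the promise values are illustrative):

const fast = Promise.resolve("fast");
const slow = new Promise(resolve => setTimeout(() => resolve("slow"), 100));
Promise.all([fast, slow]).then(values => {
  console.log(values); // ["fast", "slow"] - waits for every promise to resolve
});
Promise.race([fast, slow]).then(first => {
  console.log(first);  // "fast" - settles with the first promise to settle
});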
Async/await
Async/await allows an asynchronous, non-blocking function to be structured in a way similar to an ordinary synchronous function. Asynchronous, non-blocking code can be written, with minimal overhead, structured similarly to traditional synchronous, blocking code.
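A minimal sketch (the delay helper is illustrative):

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
async function main() {
  console.log("waiting...");
  await delay(100); // suspends main() without blocking the event loop
  console.log("done");
}
main();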
Vendor-specific extensions
Historically, some JavaScript engines supported these non-standard features:
conditional catch clauses (like Java)
array comprehensions and generator expressions (like Python)
concise function expressions (function(args) expr; this experimental syntax predated arrow functions)
ECMAScript for XML (E4X), an extension that adds native XML support to ECMAScript (unsupported in Firefox since version 21)
Syntax
Variables in JavaScript can be defined using either the var, let or const keywords. Variables defined without keywords will be defined at the global scope.
Arrow functions were first introduced in 6th Edition – ECMAScript 2015. They shorten the syntax for writing functions in JavaScript. Arrow functions are anonymous, so a variable is needed to refer to them in order to invoke them after their creation, unless surrounded by parentheses and executed immediately.
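The same function written both ways, as a sketch:

const squareLong = function (n) { return n * n; };
const square = n => n * n;     // arrow form: shorter, still anonymous
console.log(square(4));        // 16
console.log((x => x + 1)(41)); // 42 - wrapped in parentheses and invoked immediately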
Here is an example of JavaScript syntax.
// Declares a function-scoped variable named `x`, and implicitly assigns the
// special value `undefined` to it. Variables without value are automatically
// set to undefined.
// var is generally considered bad practice and let and const are usually preferred.
var x;
// Variables can be manually set to `undefined` like so
let x2 = undefined;
// Declares a block-scoped variable named `y`, and implicitly sets it to
// `undefined`. The `let` keyword was introduced in ECMAScript 2015.
let y;
// Declares a block-scoped, un-reassignable variable named `z`, and sets it to
// a string literal. The `const` keyword was also introduced in ECMAScript 2015,
// and must be explicitly assigned to.
// The keyword `const` means constant, hence the variable cannot be reassigned
// as the value is `constant`.
const z = "this value cannot be reassigned!";
// Declares a global-scoped variable and assigns 3. This is generally considered
// bad practice, and will not work if strict mode is on.
t = 3;
// Declares a variable named `myNumber`, and assigns a number literal (the value
// `2`) to it.
let myNumber = 2;
// Reassigns `myNumber`, setting it to a string literal (the value `"foo"`).
// JavaScript is a dynamically-typed language, so this is legal.
myNumber = "foo";
Note the comments in the examples above, all of which were preceded with two forward slashes.
More examples can be found at the Wikibooks page on JavaScript syntax examples.
Security
JavaScript and the DOM provide the potential for malicious authors to deliver scripts to run on a client computer via the Web. Browser authors minimize this risk using two restrictions. First, scripts run in a sandbox in which they can only perform Web-related actions, not general-purpose programming tasks like creating files. Second, scripts are constrained by the same-origin policy: scripts from one website do not have access to information such as usernames, passwords, or cookies sent to another site. Most JavaScript-related security bugs are breaches of either the same origin policy or the sandbox.
There are subsets of general JavaScript—ADsafe, Secure ECMAScript (SES)—that provide greater levels of security, especially on code created by third parties (such as advertisements). Closure Toolkit is another project for safe embedding and isolation of third-party JavaScript and HTML.
Content Security Policy is the main intended method of ensuring that only trusted code is executed on a Web page.
Cross-site scripting
A common JavaScript-related security problem is cross-site scripting (XSS), a violation of the same-origin policy. XSS vulnerabilities occur when an attacker can cause a target Website, such as an online banking website, to include a malicious script in the webpage presented to a victim. The script in this example can then access the banking application with the privileges of the victim, potentially disclosing secret information or transferring money without the victim's authorization. One important solution to XSS vulnerabilities is HTML sanitization.
Some browsers include partial protection against reflected XSS attacks, in which the attacker provides a URL including malicious script. However, even users of those browsers are vulnerable to other XSS attacks, such as those where the malicious code is stored in a database. Only correct design of Web applications on the server-side can fully prevent XSS.
XSS vulnerabilities can also occur because of implementation mistakes by browser authors.
Cross-site request forgery
Another cross-site vulnerability is cross-site request forgery (CSRF). In CSRF, code on an attacker's site tricks the victim's browser into taking actions the user did not intend at a target site (like transferring money at a bank). When target sites rely solely on cookies for request authentication, requests originating from code on the attacker's site carry the same valid login credentials as requests initiated by the user. In general, the solution to CSRF is to require an authentication value in a hidden form field, and not only in the cookies, to authenticate any request that might have lasting effects. Checking the HTTP Referer header can also help.
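A minimal sketch of the hidden-field token approach in Node.js (the function and field names are illustrative):
const crypto = require("crypto");
// Issue a random token, store it in the user's session, and embed it in a
// hidden form field when rendering any form with lasting effects.
function issueToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString("hex");
  return session.csrfToken;
}
// On submission, reject any request whose token does not match the session's.
function isValidRequest(session, submittedToken) {
  if (typeof submittedToken !== "string") return false;
  const expected = Buffer.from(session.csrfToken, "hex");
  const given = Buffer.from(submittedToken, "hex");
  return expected.length === given.length &&
    crypto.timingSafeEqual(expected, given);
}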
"JavaScript hijacking" is a type of CSRF attack in which a <script> tag on an attacker's site exploits a page on the victim's site that returns private information such as JSON or JavaScript. Possible solutions include:
requiring an authentication token in the POST and GET parameters for any response that returns private information.
Misplaced trust in the client
Developers of client-server applications must recognize that untrusted clients may be under the control of attackers. The author of an application should not assume that their JavaScript code will run as intended, or at all; moreover, any secret embedded in the code could be extracted by a determined adversary. Some implications are:
Website authors cannot perfectly conceal how their JavaScript operates because the raw source code must be sent to the client. The code can be obfuscated, but obfuscation can be reverse-engineered.
JavaScript form validation only provides convenience for users, not security. If a site verifies that the user agreed to its terms of service, or filters invalid characters out of fields that should only contain numbers, it must do so on the server, not only the client.
Scripts can be selectively disabled, so JavaScript cannot be relied on to prevent operations such as right-clicking on an image to save it.
It is considered very bad practice to embed sensitive information such as passwords in JavaScript because it can be extracted by an attacker.
Prototype pollution is a runtime vulnerability in which attackers can overwrite arbitrary properties in an object's prototype.
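A minimal sketch of how such pollution can occur, assuming a naive recursive merge helper (the function and property names are illustrative):
// A naive deep merge that copies attacker-controlled keys verbatim.
function naiveMerge(target, source) {
  for (const key in source) {
    if (typeof source[key] === "object" && source[key] !== null) {
      naiveMerge(target[key] = target[key] || {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}
// JSON.parse creates an ordinary own property named "__proto__"; assigning
// through it reaches Object.prototype, which all plain objects inherit from.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);
console.log({}.polluted); // true: every plain object now inherits the property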
Misplaced trust in developers
Package management systems such as npm and Bower are popular with JavaScript developers. Such systems allow a developer to easily manage their program's dependencies upon other developers' program libraries. Developers trust that the maintainers of the libraries will keep them secure and up to date, but that is not always the case, and this blind trust has become a source of vulnerabilities. Relied-upon libraries can publish new releases that introduce bugs or vulnerabilities into all programs that rely upon them. Conversely, a library can go unpatched with known vulnerabilities out in the wild. In a study of a sample of 133,000 websites, researchers found that 37% included a library with at least one known vulnerability. "The median lag between the oldest library version used on each website and the newest available version of that library is 1,177 days in ALEXA, and development of some libraries still in active use ceased years ago." Another possibility is that the maintainer of a library may remove it entirely. This occurred in March 2016 when Azer Koçulu removed his repository from npm, causing tens of thousands of programs and websites that depended upon his libraries to break.
Browser and plugin coding errors
JavaScript provides an interface to a wide range of browser capabilities, some of which may have flaws such as buffer overflows. These flaws can allow attackers to write scripts that run any code they wish on the user's system; such code is by no means limited to another JavaScript application. For example, a buffer overrun exploit can allow an attacker to gain access to the operating system's API with superuser privileges.
These flaws have affected major browsers including Firefox, Internet Explorer, and Safari.
Plugins, such as video players, Adobe Flash, and the wide range of ActiveX controls enabled by default in Microsoft Internet Explorer, may also have flaws exploitable via JavaScript (such flaws have been exploited in the past).
In Windows Vista, Microsoft attempted to contain the risks of bugs such as buffer overflows by running the Internet Explorer process with limited privileges. Google Chrome similarly confines its page renderers to their own "sandbox".
Sandbox implementation errors
Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create or delete files. Such privileges are not intended to be granted to code from the Web.
Incorrectly granting privileges to JavaScript from the Web has played a role in vulnerabilities in both Internet Explorer and Firefox. In Windows XP Service Pack 2, Microsoft demoted JScript's privileges in Internet Explorer.
Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs (see: Windows Script Host). This makes JavaScript (like VBScript) a theoretically viable vector for a Trojan horse, although JavaScript Trojan horses are uncommon in practice.
Hardware vulnerabilities
In 2015, a JavaScript-based proof-of-concept implementation of a rowhammer attack was described in a paper by security researchers.
In 2017, a JavaScript-based attack via browser was demonstrated that could bypass ASLR. It is called "ASLR⊕Cache" or AnC.
In 2018, the paper that announced the Spectre attacks against speculative execution in Intel and other processors included a JavaScript implementation.
Development tools
Important tools have evolved with the language.
Every major web browser has built-in web development tools, including a JavaScript debugger.
Static program analysis tools, such as ESLint and JSLint, scan JavaScript code for conformance to a set of standards and guidelines.
Some browsers have built-in profilers. Stand-alone profiling libraries have also been created, such as benchmark.js and jsbench.
Many text editors have syntax highlighting support for JavaScript code.
Related technologies
Java
A common misconception is that JavaScript is directly related to Java. Both indeed have a C-like syntax (the C language being their most immediate common ancestor language). They are also typically sandboxed, and JavaScript was designed with Java's syntax and standard library in mind. In particular, all Java keywords were reserved in original JavaScript, JavaScript's standard library follows Java's naming conventions, and JavaScript's Math and Date objects are based on classes from Java 1.0.
Both languages first appeared in 1995, but Java was developed by James Gosling of Sun Microsystems and JavaScript by Brendan Eich of Netscape Communications.
The differences between the two languages are more prominent than their similarities. Java has static typing, while JavaScript's typing is dynamic. Java is loaded from compiled bytecode, while JavaScript is loaded as human-readable source code. Java's objects are class-based, while JavaScript's are prototype-based. Finally, Java did not support functional programming until Java 8, while JavaScript has done so from the beginning, being influenced by Scheme.
JSON
JSON is a data format derived from JavaScript; hence the name JavaScript Object Notation. It is a widely used format supported by many other programming languages.
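Within JavaScript itself, JSON text is parsed and produced with the built-in JSON object:
// A JSON string and its corresponding JavaScript object.
const text = '{"language": "JavaScript", "year": 1995}';
const parsed = JSON.parse(text);  // { language: "JavaScript", year: 1995 }
parsed.year;                      // 1995
JSON.stringify(parsed);           // back to a JSON string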
Transpilers
Many websites are JavaScript-heavy, so transpilers have been created to convert code written in other languages into JavaScript, which can aid the development process.
TypeScript and CoffeeScript are two notable languages that transpile to JavaScript.
WebAssembly
WebAssembly is a newer language with a bytecode format designed to complement JavaScript, especially the performance-critical portions of web page scripts. All of the major JavaScript engines support WebAssembly, which runs in the same sandbox as regular JavaScript code.
asm.js is a subset of JavaScript that served as the forerunner of WebAssembly.
| Technology | Programming | null |
9877 | https://en.wikipedia.org/wiki/Erg | Erg | The erg is a unit of energy equal to 10−7 joules (100 nJ). It is not an SI unit, instead originating from the centimetre–gram–second system of units (CGS). Its name is derived from ergon (ἔργον), a Greek word meaning 'work' or 'task'.
An erg is the amount of work done by a force of one dyne exerted for a distance of one centimetre. In the CGS base units, it is equal to one gram centimetre-squared per second-squared (g⋅cm2/s2). It is thus equal to 10−7 joules or 100 nanojoules (nJ) in SI units.
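Written out, the SI equivalence follows directly from the definitions of the dyne and the centimetre:
1 erg = 1 dyn × 1 cm = (10−5 N) × (10−2 m) = 10−7 N⋅m = 10−7 J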
1 erg = 10−7 J = 100 nJ
1 erg = 10−10 sn⋅m = 100 psn⋅m
1 erg = 6.2415×1011 eV = 624.15 GeV
1 erg = 1 dyn⋅cm = 1 g⋅cm2/s2
History
In 1864, Rudolf Clausius proposed the Greek word ergon (ἔργον) for the unit of energy, work and heat. In 1873, a committee of the British Association for the Advancement of Science, including the British physicists James Clerk Maxwell and William Thomson, recommended the general adoption of the centimetre, the gramme, and the second as fundamental units (C.G.S. System of Units). To distinguish derived units, they recommended using the prefix "C.G.S. unit of ..." and requested that the word erg or ergon be strictly limited to refer to the C.G.S. unit of energy.
In 1922, chemist William Draper Harkins proposed the name micri-erg as a convenient unit to measure the surface energy of molecules in surface chemistry. It would equate to 10−14 erg, equivalent to 10−21 joules.
The erg is not a part of the International System of Units (SI), which has been recommended since 1 January 1978 when the European Economic Community ratified a directive of 1971 that implemented SI as agreed by the General Conference of Weights and Measures. It is the unit of energy in Gaussian units, which are widely used in astrophysics, applications involving microscopic problems and relativistic electrodynamics, and sometimes in mechanics.
| Physical sciences | Energy | Basics and measurement |
9890 | https://en.wikipedia.org/wiki/Electron%20counting | Electron counting | In chemistry, electron counting is a formalism for assigning a number of valence electrons to individual atoms in a molecule. It is used for classifying compounds and for explaining or predicting their electronic structure and bonding. Many rules in chemistry rely on electron-counting:
Octet rule is used with Lewis structures for main group elements, especially the lighter ones such as carbon, nitrogen, and oxygen,
18-electron rule in inorganic chemistry and organometallic chemistry of transition metals,
Hückel's rule for the π-electrons of aromatic compounds,
Polyhedral skeletal electron pair theory for polyhedral cluster compounds, including transition metals and main group elements and mixtures thereof, such as boranes.
Atoms are called "electron-deficient" when they have too few electrons as compared to their respective rules, or "hypervalent" when they have too many electrons. Since these compounds tend to be more reactive than compounds that obey their rule, electron counting is an important tool for identifying the reactivity of molecules. While the counting formalism considers each atom separately, these individual atoms (with their hypothetical assigned charge) do not generally exist as free species.
Counting rules
Two methods of electron counting are "neutral counting" and "ionic counting". Both approaches give the same result (and can therefore be used to verify one's calculation).
The neutral counting approach assumes the molecule or fragment being studied consists of purely covalent bonds. It was popularized by Malcolm Green along with the L and X ligand notation. It is usually considered easier especially for low-valent transition metals.
The "ionic counting" approach assumes purely ionic bonds between atoms.
It is important, though, to be aware that most chemical species exist between the purely covalent and ionic extremes.
Neutral counting
Neutral counting assumes each bond is equally split between two atoms.
This method begins with locating the central atom on the periodic table and determining the number of its valence electrons. One counts valence electrons for main group elements differently from transition metals, which use d electron count.
E.g. in period 2: B, C, N, O, and F have 3, 4, 5, 6, and 7 valence electrons, respectively.
E.g. in period 4: K, Ca, Sc, Ti, V, Cr, Fe, Ni have 1, 2, 3, 4, 5, 6, 8, 10 valence electrons respectively.
One is added for every halide or other anionic ligand which binds to the central atom through a sigma bond.
Two is added for every lone pair bonding to the metal (e.g. each Lewis base binds with a lone pair). Unsaturated hydrocarbons such as alkenes and alkynes are considered Lewis bases. Similarly, Lewis and Brønsted acids (protons) contribute nothing.
One is added for each homoelement bond.
One is added for each negative charge, and one is subtracted for each positive charge.
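These rules reduce to a simple sum. A minimal sketch in JavaScript, treating ligands as one- or two-electron donors (the function and parameter names are illustrative):
// valence: valence (or d) electron count of the neutral central atom
// oneElectronDonors: halides, hydrogen radicals, homoelement bonds, etc.
// twoElectronDonors: lone-pair (Lewis-base) ligands
// charge: overall charge of the species
function neutralCount(valence, oneElectronDonors, twoElectronDonors, charge = 0) {
  return valence + oneElectronDonors + 2 * twoElectronDonors - charge;
}
neutralCount(8, 0, 5); // Fe(CO)5: 8 + 0 + 10 = 18
neutralCount(6, 2, 0); // H2S: 6 + 2 + 0 = 8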
Ionic counting
Ionic counting assumes unequal sharing of electrons in the bond. The more electronegative atom in the bond gains the electrons lost from the less electronegative atom.
This method begins by calculating the number of electrons of the element, assuming an oxidation state.
E.g. Fe2+ has 6 electrons
S2− has 8 electrons
Two is added for every halide or other anionic ligand which binds to the metal through a sigma bond.
Two is added for every lone pair bonding to the metal (e.g. each phosphine ligand can bind with a lone pair). Similarly, Lewis and Brønsted acids (protons) contribute nothing.
For unsaturated ligands such as alkenes, one electron is added for each carbon atom binding to the metal.
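The corresponding sum for ionic counting, under the same illustrative simplifications:
// ionElectrons: electron count of the central atom at its assigned oxidation state
// anionicLigands: sigma-bonded anionic ligands (two electrons each)
// lonePairs: lone pairs donated by neutral ligands (two electrons each)
function ionicCount(ionElectrons, anionicLigands, lonePairs) {
  return ionElectrons + 2 * anionicLigands + 2 * lonePairs;
}
ionicCount(6, 2, 4); // RuCl2(bpy)2: Ru(II) d6 + 2 Cl− + 2 bidentate bpy = 18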
Electrons donated by common fragments
"Special cases"
The numbers of electrons "donated" by some ligands depend on the geometry of the metal-ligand ensemble. An example of this complication is the M–NO entity. When this grouping is linear, the NO ligand is considered to be a three-electron ligand. When the M–NO subunit is strongly bent at N, the NO is treated as a pseudohalide and is thus a one-electron donor (in the neutral counting approach). The situation is not very different from the η3 versus the η1 allyl. Another unusual ligand from the electron counting perspective is sulfur dioxide.
Examples
H2O
For a water molecule (H2O), both neutral counting and ionic counting result in a total of 8 electrons.
The neutral counting method assumes each OH bond is split equally (each atom gets one electron from the bond). Thus both hydrogen atoms have an electron count of one. The oxygen atom has 6 valence electrons. The total electron count is 8, which agrees with the octet rule.
With the ionic counting method, the more electronegative oxygen gains the electrons donated by the two hydrogen atoms in the two OH bonds to become O2−. It then has 8 total valence electrons, which obeys the octet rule.
CH4, for the central C
neutral counting: C contributes 4 electrons, each H radical contributes one each: 4 + 4 × 1 = 8 valence electrons
ionic counting: C4− contributes 8 electrons, each proton contributes 0 each: 8 + 4 × 0 = 8 electrons.
Similar for H:
neutral counting: H contributes 1 electron, the C contributes 1 electron (the other 3 electrons of C are for the other 3 hydrogens in the molecule): 1 + 1 × 1 = 2 valence electrons.
ionic counting: H contributes 0 electrons (H+), C4− contributes 2 electrons (per H), 0 + 1 × 2 = 2 valence electrons
conclusion: Methane follows the octet rule for carbon and the duet rule for hydrogen, and hence is expected to be a stable molecule (as we see in daily life)
H2S, for the central S
neutral counting: S contributes 6 electrons, each hydrogen radical contributes one each: 6 + 2 × 1 = 8 valence electrons
ionic counting: S2− contributes 8 electrons, each proton contributes 0: 8 + 2 × 0 = 8 valence electrons
conclusion: with an octet electron count (on sulfur), we can anticipate that H2S would be pseudo-tetrahedral if one considers the two lone pairs.
SCl2, for the central S
neutral counting: S contributes 6 electrons, each chlorine radical contributes one each: 6 + 2 × 1 = 8 valence electrons
ionic counting: S2+ contributes 4 electrons, each chloride anion contributes 2: 4 + 2 × 2 = 8 valence electrons
conclusion: see discussion for H2S above. Both SCl2 and H2S follow the octet rule - the behavior of these molecules is however quite different.
SF6, for the central S
neutral counting: S contributes 6 electrons, each fluorine radical contributes one each: 6 + 6 × 1 = 12 valence electrons
ionic counting: S6+ contributes 0 electrons, each fluoride anion contributes 2: 0 + 6 × 2 = 12 valence electrons
conclusion: ionic counting indicates a molecule lacking lone pairs of electrons, therefore its structure will be octahedral, as predicted by VSEPR. One might conclude that this molecule would be highly reactive - but the opposite is true: SF6 is inert, and it is widely used in industry because of this property.
RuCl2(bpy)2
RuCl2(bpy)2 is an octahedral metal complex with two bidentate 2,2′-Bipyridine (bpy) ligands and two chloride ligands.
In the neutral counting method, the ruthenium of the complex is treated as Ru(0). It has 8 d electrons to contribute to the electron count. The two bpy ligands are neutral L-type ligands; each is bidentate and donates four electrons (two per coordinating nitrogen). The two chloride ligands are halides and thus one-electron donors, contributing one electron each. The total electron count of RuCl2(bpy)2 is 18.
In the ionic counting method, the ruthenium of the complex is treated as Ru(II). It has 6 d electrons to contribute to the electron count. The two bpy ligands are neutral L-type ligands; each is bidentate and donates four electrons. The two chloride ligands are anionic ligands, donating 2 electrons each to the electron count. The total electron count of RuCl2(bpy)2 is 18, agreeing with the result of neutral counting.
TiCl4, for the central Ti
neutral counting: Ti contributes 4 electrons, each chlorine radical contributes one each: 4 + 4 × 1 = 8 valence electrons
ionic counting: Ti4+ contributes 0 electrons, each chloride anion contributes two each: 0 + 4 × 2 = 8 valence electrons
conclusion: Having only 8e (vs. 18 possible), we can anticipate that TiCl4 will be a good Lewis acid. Indeed, it reacts (in some cases violently) with water, alcohols, ethers, amines.
Fe(CO)5
neutral counting: Fe contributes 8 electrons, each CO contributes 2 each: 8 + 2 × 5 = 18 valence electrons
ionic counting: Fe(0) contributes 8 electrons, each CO contributes 2 each: 8 + 2 × 5 = 18 valence electrons
conclusions: this is a special case, where ionic counting is the same as neutral counting, all fragments being neutral. Since this is an 18-electron complex, it is expected to be an isolable compound.
Ferrocene, (C5H5)2Fe, for the central Fe:
neutral counting: Fe contributes 8 electrons, the 2 cyclopentadienyl-rings contribute 5 each: 8 + 2 × 5 = 18 electrons
ionic counting: Fe2+ contributes 6 electrons, the two aromatic cyclopentadienyl rings contribute 6 each: 6 + 2 × 6 = 18 valence electrons on iron.
conclusion: Ferrocene is expected to be an isolable compound.
| Physical sciences | Bonding | Chemistry |